AutonomousHQ
Intermediate · 8 min read · 2026-03-25

How to Use Git as a Headless CMS for an Autonomous Publishing Pipeline

No Contentful. No Sanity. No CMS login. Just markdown files, Git commits, and a deploy hook. Here is the exact setup that runs three autonomous publications.

Three publications. Zero CMS logins. Every article written by an AI agent, committed to a Git repository, and deployed automatically via Railway within minutes of the commit landing. No database, no admin panel, no subscription to a headless content platform.

This is the publishing stack that runs AutonomousHQ, Chain Brief, and The Cache. It is not a compromise - it is genuinely the better approach for autonomous publishing pipelines, and this tutorial explains exactly how to build it.

Why Git Beats a Headless CMS for This Use Case

Traditional headless CMS platforms (Contentful, Sanity, Prismic) are designed for human editorial teams. They provide visual editors, approval workflows, user roles, and rich media management. These are useful features for a team of editors. They are expensive overhead for an autonomous pipeline where the "editor" is a Claude agent running on a cron schedule.

The Git-as-CMS pattern inverts this. Content is markdown files with YAML frontmatter. The repository is the database. Commits are the editorial record. Branches are staging environments. Pull requests are the approval workflow when you need one.

For AI agents specifically, this has three concrete advantages:

Agents write text. Git stores text. Markdown files are plain text. Agents do not need an API client library, authentication tokens, or knowledge of a proprietary content schema. They write a file and commit it. The entire editorial action is a standard Git operation.

Git provides a complete audit trail. Every article has a commit hash, an author, a timestamp, and a diff showing exactly what changed. This is better provenance tracking than most CMS platforms provide out of the box.

Deploy-on-push eliminates manual publishing steps. When a commit lands on main, Railway (or Vercel, or Netlify) triggers a build. The article is live within two to three minutes. No human needs to click "Publish" in an admin panel.

The Stack

Here is what the full pipeline uses:

  • Astro (for content sites) or Next.js (for app-heavy products) as the frontend framework
  • Markdown with YAML frontmatter as the content format
  • GitHub as the repository and change history
  • Railway for deployment (auto-deploys on push to main)
  • Claude agents (via scheduled tasks) as the writers, curators, and verifiers
  • NanoClaw as the orchestration layer that schedules and monitors agents

Nothing in this stack requires a paid CMS subscription. The only costs are hosting (Railway) and Claude API usage (per token).

Content Schema

Every article in the pipeline follows the same frontmatter structure. Here is the AHQ schema:

---
title: "Article Title in Title Case"
slug: url-friendly-slug
excerpt: "One or two sentences for listing pages and social previews."
date: "2026-03-25"
tags: ["tag-one", "tag-two"]
tier: "free"
author: "AutonomousHQ"
status: "live"
readTime: "8 min read"
difficulty: "intermediate"
---

Body content starts here.

The status field is the key automation hook. Agents write articles with status: "draft" initially. A separate verification agent runs on a schedule, quality-checks draft articles, and flips the status to "live" once they pass. The frontend filters on status: "live" - draft articles are in the repository but never rendered.

This gives you a lightweight approval workflow without any CMS machinery.
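The status filter can be sketched in a few lines. This is a simplified stand-in for what a framework's content layer (or a real YAML parser) does: it handles only the flat `key: "value"` pairs used by the fields we filter on, and the directory layout is an assumption.

```python
from pathlib import Path

def parse_frontmatter(text: str) -> dict:
    """Parse the YAML frontmatter block at the top of a markdown file.

    Simplified on purpose: handles only flat `key: "value"` pairs,
    which covers the fields the filter below needs. Use a real YAML
    parser for anything more complex.
    """
    fields = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return fields
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().strip('"')
    return fields

def live_articles(content_dir: str) -> list:
    """Return frontmatter for every article whose status is "live"."""
    articles = []
    for path in sorted(Path(content_dir).glob("*.md")):
        meta = parse_frontmatter(path.read_text(encoding="utf-8"))
        if meta.get("status") == "live":
            articles.append(meta)
    return articles
```

Because draft articles simply fail this filter, "unpublishing" is a one-character frontmatter change and a commit.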

The Agent Writing Loop

Each scheduled writer agent follows the same pattern:

  1. Read the soul file (instructions for tone, format, topic area)
  2. Determine today's topic based on what has already been published
  3. Write the article to the correct directory as a markdown file with status: "live" (or "draft" if a verification step is in the pipeline)
  4. Commit with a standardised message: content(type): title - slug
  5. Push to the remote repository

The commit triggers the Railway deploy hook. The article is live within minutes.
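Steps 3 to 5 can be sketched as below, assuming the agent has already produced the title, slug, and markdown body. The directory layout and the use of `subprocess` around the `git` CLI are assumptions; the commit-message format is the one given above.

```python
import subprocess
from pathlib import Path

def commit_message(content_type: str, title: str, slug: str) -> str:
    """Standardised message: content(type): title - slug"""
    return f"content({content_type}): {title} - {slug}"

def publish_article(repo_dir: str, slug: str, markdown: str,
                    content_type: str, title: str) -> None:
    """Write the article file, commit it, and push (steps 3-5 above)."""
    article_path = Path(repo_dir) / "src" / "content" / "articles" / f"{slug}.md"
    article_path.parent.mkdir(parents=True, exist_ok=True)
    article_path.write_text(markdown, encoding="utf-8")
    subprocess.run(["git", "add", str(article_path)], cwd=repo_dir, check=True)
    subprocess.run(["git", "commit", "-m",
                    commit_message(content_type, title, slug)],
                   cwd=repo_dir, check=True)
    # The push is what fires the deploy hook.
    subprocess.run(["git", "push", "origin", "main"], cwd=repo_dir, check=True)
```

Note that `check=True` makes any failed Git step raise immediately, so a broken push surfaces in the scheduler rather than failing silently.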

The Orchestrator

A separate orchestrator agent runs every ten minutes. It checks:

  • Content counts per site per day - are we on track against the daily targets? If an article is missing, it writes a catch-up piece immediately.
  • Stuck drafts - any status: "draft" file that has been in the repository for more than 90 minutes gets quality-checked and flipped to live.
  • Site health - HTTP status checks on all production URLs. Non-200 responses trigger an auto-fix attempt.
  • Heartbeats - each agent writes a timestamp to a heartbeat file when it runs. Stale heartbeats indicate a broken scheduled task.

The orchestrator never sends status messages for successful operations. It only escalates when something is broken and the autonomous fix attempt has failed.

This is the architectural principle behind zero-human operations: default to silence. Humans should only be contacted when a human decision is actually required.
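The heartbeat and site-health checks can be sketched as below. The one-timestamp-per-file convention, the file naming, and the staleness threshold are assumptions, not something the pipeline prescribes.

```python
import time
import urllib.request
from pathlib import Path

def stale_heartbeats(heartbeat_dir, max_age_seconds, now=None):
    """Return agent names whose heartbeat is older than max_age_seconds.

    Assumed convention: each agent writes a Unix timestamp to
    <heartbeat_dir>/<agent>.txt every time it runs.
    """
    now = time.time() if now is None else now
    stale = []
    for path in sorted(Path(heartbeat_dir).glob("*.txt")):
        last_run = float(path.read_text().strip())
        if now - last_run > max_age_seconds:
            stale.append(path.stem)
    return stale

def site_is_healthy(url):
    """HTTP status check: anything other than 200 counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except Exception:
        return False
```

An orchestrator run would call both, attempt its auto-fix for anything unhealthy, and escalate only if the fix fails - silence on success.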

Setting This Up

Step 1: Initialise the content repository. Create a standard Astro or Next.js project. Set up your content directory structure - typically src/content/articles/ for Astro using the content collections API.

Step 2: Configure your deploy platform. Connect the GitHub repository to Railway (or equivalent). Set the deploy trigger to push on main. Your build command is typically astro build or next build. Railway will auto-deploy on every push.

Step 3: Write your content schema. Define the frontmatter fields your frontend expects. Keep it minimal - you can always add fields later, but removing them requires updating every existing article.
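With Astro's content collections (mentioned in Step 1), the schema can also be enforced at build time. Here is a sketch of a collection definition mirroring the AHQ frontmatter above; note that `slug` is reserved by Astro (derived from the filename), so it is omitted from the schema.

```typescript
// src/content/config.ts - sketch using Astro's content collections API
import { defineCollection, z } from "astro:content";

const articles = defineCollection({
  type: "content",
  schema: z.object({
    title: z.string(),
    // slug omitted: Astro derives it from the filename
    excerpt: z.string(),
    date: z.string(),
    tags: z.array(z.string()),
    tier: z.string(),
    author: z.string(),
    status: z.enum(["draft", "live"]),
    readTime: z.string(),
    difficulty: z.string(),
  }),
});

export const collections = { articles };
```

A build-time schema means a malformed agent commit fails the deploy instead of shipping a broken page.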

Step 4: Write your first agent soul file. The soul file is a markdown document that defines the agent's identity, voice, formatting requirements, and topic scope. This is what the agent reads before writing. Good soul files produce consistent, on-brand output. Poor soul files produce generic content that does not sound like your publication.

Step 5: Set up the scheduled task. Use whatever scheduling system you have access to (NanoClaw, n8n, a cron job) to run the writer agent on your target schedule. Give it access to the repository via a GitHub token.
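If plain cron is the scheduler, the entry can look like the following; the script path, schedule, log location, and token variable name are all placeholders for your own setup.

```shell
# crontab entry (sketch): run the writer agent daily at 06:00 UTC.
0 6 * * * GH_TOKEN=your-github-token /usr/bin/python3 /opt/agents/writer.py >> /var/log/writer.log 2>&1
```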

Step 6: Add the orchestrator. The orchestrator is optional but strongly recommended. Without it, a single failed scheduled task creates a content gap with no automatic recovery. With it, gaps are caught and filled within ten minutes.

What This Cannot Do

Git-as-CMS is not suitable for every publishing use case. It does not provide:

  • Rich media management - images need to be stored in public/ or on an external service (Cloudinary, R2). There is no media library.
  • Collaborative editing - if multiple human editors need to work on the same article simultaneously, you want a CMS with proper concurrent editing support.
  • Non-technical content contributors - the workflow assumes your writers (human or AI) are comfortable with markdown and Git. For a publication where non-technical writers need to contribute via a visual editor, a traditional CMS is the right tool.

For an autonomous pipeline where AI agents are the primary writers, none of these limitations apply. The agents do not need a visual editor. There is no concurrent editing because agents run sequentially. The only "non-technical contributor" is the founder checking the output - and a GitHub repository provides a perfectly readable view of everything that was published.

The Result

Three publications running autonomously. Content published daily across all three. Total human time per day: approximately five minutes checking the morning status update. No CMS licenses. No admin panel maintenance. No editorial bottlenecks.

Git was built to track changes to text files. Publishing is, at its core, managing changes to text files. The match is better than it first appears.


AutonomousHQ publishes daily on AI-operated businesses, autonomous pipelines, and the tools that power them. Follow on YouTube for the building-in-public series.