How to Build a Zero-Human Content Pipeline
A practical guide to running scheduled AI writers, draft verification, and autonomous publishing, with no human editors in the loop.
Three months ago, publishing a single article required a brief, a writer, an editor, a review pass, and someone to hit publish. Today, three separate content sites produce a combined nine articles per day with zero human editors involved. This is the architecture that makes it work.
The pipeline has four layers: scheduled writers, a quality gate, a draft-to-live pipeline, and an orchestrator that watches for gaps. Each layer is independent. If the writer fails, the orchestrator catches it. If the quality gate crashes, the orchestrator catches that too. Redundancy is the product.
Layer 1: Scheduled Writers
Each content site has dedicated writer agents, each specialising in one content type. AutonomousHQ runs three: an analysis writer at 06:00, a tutorial writer at 10:00, and a guide writer at 16:00. They do not share context; each runs cold with only its soul file and a search tool.
The soul file does the heavy lifting. It defines voice, structure, forbidden phrases, frontmatter format, and how to find material. A well-written soul file means the agent produces publishable work on the first pass roughly 80% of the time.
A few things that actually matter in the soul file:
- Concrete anti-patterns, not vague guidance. "Do not use em dashes" is enforceable. "Write with clarity" is not.
- A specific word count range by type. Without it, the agent will pad to 2,000 words or truncate at 400. Neither is useful.
- A duplication check. Before writing, the agent lists existing articles and avoids re-covering ground. This prevents six variations on the same topic accumulating over three months.
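The duplication check above can be sketched as a near-match test against existing titles. This is a minimal sketch, assuming articles are markdown files with a `title:` line in their frontmatter; the directory layout and the 0.8 similarity threshold are illustrative assumptions, not production values.

```python
# Sketch of a duplication check. Assumes markdown articles with a
# "title:" frontmatter line; the 0.8 threshold is an assumption.
from difflib import SequenceMatcher
from pathlib import Path

def existing_titles(content_dir: str) -> list[str]:
    """Collect the title of every article already in the content directory."""
    titles = []
    for md in Path(content_dir).glob("*.md"):
        for line in md.read_text().splitlines():
            if line.startswith("title:"):
                titles.append(line.split(":", 1)[1].strip().strip('"'))
                break
    return titles

def is_duplicate(candidate: str, titles: list[str], threshold: float = 0.8) -> bool:
    """True if the candidate title is an exact or near-exact match."""
    return any(
        SequenceMatcher(None, candidate.lower(), t.lower()).ratio() >= threshold
        for t in titles
    )
```

As the article notes later, this only catches title-level overlap; two articles making the same argument under different titles slip through.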
Each writer saves its output with status: "draft" and commits to the repository. That's all it does. It does not verify, it does not publish.
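The save-and-commit step is deliberately dumb. A minimal sketch, assuming markdown files with YAML frontmatter and `git` available on the PATH; the paths, slug, and commit message format are illustrative:

```python
# Sketch of a writer's final step: write the draft, commit, stop.
# Paths and commit-message format are illustrative assumptions.
import subprocess
from datetime import date
from pathlib import Path

def save_draft(repo: str, slug: str, title: str, body: str) -> Path:
    """Write an article with status: "draft" and commit it.
    Publishing is the verifier's job, via the status flip."""
    path = Path(repo) / "content" / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    frontmatter = (
        "---\n"
        f'title: "{title}"\n'
        f"date: {date.today().isoformat()}\n"
        'status: "draft"\n'
        "---\n\n"
    )
    path.write_text(frontmatter + body)
    subprocess.run(["git", "-C", repo, "add", str(path)], check=True)
    subprocess.run(
        ["git", "-C", repo, "commit", "-m", f"writer: draft {slug}"],
        check=True,
    )
    return path
```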
Layer 2: The Quality Gate
A separate fact-verifier runs against every draft. It does four things:
- Em dash removal: em dashes are an AI content fingerprint. Every em dash gets replaced with a comma, colon, or rewritten clause.
- Filler phrase removal: a list of roughly 20 phrases that signal AI-generated text ("it's worth noting", "in today's rapidly evolving", "delve into", and so on). Each gets removed or rewritten.
- Fact verification: the agent searches for any specific claim that could be wrong: pricing, dates, product names, company details. Critical errors get fixed. Unverifiable claims get flagged with a note.
- Status flip: once clean, status: "draft" becomes status: "live". The article is now live on the next deployment.
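The gate's mechanical fixes reduce to string transforms. A sketch, with the caveat that the filler list below is a small illustrative subset, not the full production list, and a real rewrite would sometimes prefer a colon or restructured clause over a comma:

```python
# Sketch of the quality gate's mechanical fixes. FILLER is an
# illustrative subset of the ~20-phrase production list.
import re

FILLER = [
    "it's worth noting that ",
    "in today's rapidly evolving ",
]

def strip_em_dashes(text: str) -> str:
    """Replace every em dash (and surrounding spaces) with a comma."""
    return re.sub(r"\s*\u2014\s*", ", ", text)

def strip_filler(text: str) -> str:
    """Remove known filler phrases, case-insensitively."""
    for phrase in FILLER:
        text = re.sub(re.escape(phrase), "", text, flags=re.IGNORECASE)
    return text

def flip_status(frontmatter: str) -> str:
    """Promote a clean draft to live; the next deploy publishes it."""
    return frontmatter.replace('status: "draft"', 'status: "live"')
```

Fact verification is the one step that cannot be a pure transform; it needs the search tool and model judgment, which is why it is the slowest and least reliable part of the gate.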
The verifier runs at 45 minutes past each hour. If it finds nothing, it writes a heartbeat file and exits silently.
Layer 3: The Orchestrator
This is the safety net. It runs every 30 minutes and checks four things:
Content production against schedule. Expected counts by time of day: AHQ needs one article by 07:00, two by 11:00, three by 17:00. If actual count is behind expected count, the orchestrator writes a catch-up article immediately using the appropriate soul file. It does not wait.
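The schedule check is a simple step function. A sketch using the AHQ numbers above; the hours and counts are the ones stated, but the function shape is an assumption about how you might encode them:

```python
# Sketch of the production-schedule check, encoding the AHQ
# expectations: 1 article by 07:00, 2 by 11:00, 3 by 17:00.
def expected_count(hour: int) -> int:
    """Articles that should exist by a given hour of the day."""
    if hour >= 17:
        return 3
    if hour >= 11:
        return 2
    if hour >= 7:
        return 1
    return 0

def articles_behind(actual: int, hour: int) -> int:
    """How many catch-up articles the orchestrator must write now."""
    return max(0, expected_count(hour) - actual)
```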
Stuck drafts. If a draft has been sitting for more than 90 minutes (meaning the verifier failed or missed it), the orchestrator runs the quality gate itself and publishes.
Site health. All four production sites get checked on each run. A non-200 response triggers a rebuild: for Astro sites, a Dockerfile cache-bust and push; for Next.js, a check of recent commits for the likely culprit.
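The health check itself is trivial; the recovery logic is the hard part. A minimal sketch of the check, assuming plain HTTP status polling, with placeholder URLs:

```python
# Sketch of the per-site health check: anything other than a clean
# 200 within the timeout counts as unhealthy and triggers a rebuild.
from urllib.request import urlopen
from urllib.error import URLError

def site_healthy(url: str, timeout: float = 10.0) -> bool:
    """True only on a 200 response."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, TimeoutError, OSError):
        return False
```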
Heartbeats. Each pipeline task writes a timestamp file after each run. The orchestrator checks those timestamps. A heartbeat older than 90 minutes means a task has stopped firing: it runs that task manually.
None of this requires human intervention. The orchestrator escalates to a notification channel only when it encounters something it cannot resolve autonomously: a site that won't come back up after three rebuild attempts, or a writer that keeps producing empty output.
Layer 4: The Repository as Single Source of Truth
Everything flows through git. Writers commit drafts. The verifier commits fixes and the status change. The orchestrator commits catch-up articles. Every change has a clear author, timestamp, and message.
This matters for two reasons. First, it gives you a full audit trail: you can see exactly when each article was created, what fixes were applied, and when it went live. Second, it makes deployment trivial. Railway watches the repository and rebuilds on push. There is no separate CMS, no API calls to a publishing platform, no webhook to maintain.
The one constraint this creates: writers need to be able to read and write to the repository directly. They run in an environment with repository access baked in, not through a web interface.
What This Costs
Running three content sites at this volume costs roughly $8–12 per day in LLM calls, depending on article length and how many verification passes each piece requires. That includes the writers, the verifier, and the orchestrator checks. The infrastructure (Railway, GitHub) adds another $20–30 per month.
The comparable cost for a human content operation producing nine articles per day, even at freelance rates, would be £500–800 per day. The economics are not close.
What Still Breaks
The system is not perfect. Writers occasionally produce content that passes the quality gate but is subtly wrong: a product discontinued, a company acquired, a tool that no longer works as described. The fact-verifier catches explicit claims but cannot catch implicit ones.
The other failure mode is topic exhaustion. After a few months, the writers start retreading the same ground. The duplication check helps, but it only catches exact or near-exact title matches. Two articles making the same argument with different titles will both get published. A weekly content audit catches this, but it requires human review.
Everything else (scheduling, writing, verification, publishing, monitoring, recovery) runs without input.
Where to Start
If you are building this from scratch, start with one writer on one site. Get the soul file right before you add the orchestrator. A bad soul file produces bad articles at scale; a good one produces good ones. The orchestrator is a multiplier, not a fix for underlying quality problems.
Once one writer is producing reliably, add the verifier. Once the verifier is reliable, add the orchestrator. The system compounds; each layer makes the previous ones more resilient. But only if the foundation is solid.
Follow the build on YouTube or subscribe to the newsletter to track what's working and what isn't in real time.