How to Build an AI Content Pipeline with GitHub Actions
Automate your entire content workflow end-to-end using GitHub Actions and AI APIs — no servers, no babysitting, just scheduled publishing.
Most content operations still rely on a human sitting down, opening a tool, and manually producing work. But with GitHub Actions, a language model API, and a static-site generator, you can wire together a pipeline that writes, validates, commits, and publishes content on a schedule — automatically, every day.
This tutorial walks you through building exactly that.
What You'll Build
A GitHub Actions workflow that:
- Triggers on a cron schedule
- Calls an AI API to generate a piece of content
- Validates the output against a schema
- Commits the file to the repository
- Triggers a deployment to your site
You'll end up with a reusable pattern you can adapt for articles, tutorials, product descriptions, or any structured text output.
Prerequisites
- A GitHub repository with a static-site project (Next.js, Astro, Eleventy — anything that reads Markdown files)
- An AI API key (Anthropic, OpenAI, or compatible)
- Basic familiarity with GitHub Actions YAML syntax
Step 1: Store Your Secrets
Go to your repository → Settings → Secrets and variables → Actions → New repository secret.
Add:
- AI_API_KEY — your API key
- DEPLOY_TOKEN — a token with push access (a fine-grained GitHub token scoped to the repo works well)
Never hardcode credentials in workflow files. Actions secrets are injected as environment variables at runtime and are automatically masked if they appear in logs.
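It also pays to fail fast when a secret wasn't injected, rather than letting the API client fail later with a confusing error. A minimal sketch (requireEnv is an illustrative helper name, not part of the script below):

```javascript
// Fail fast if a required environment variable is missing.
// Throwing here makes the workflow step fail with a clear message.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// In the generation script, this would replace the direct env read:
// const client = new Anthropic({ apiKey: requireEnv("AI_API_KEY") });
```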
Step 2: Write the Content-Generation Script
Create a script at .github/scripts/generate-content.js. It uses ES-module syntax and top-level await, so either set "type": "module" in your package.json or rename the file to .mjs (and update the workflow step to match):
```javascript
import Anthropic from "@anthropic-ai/sdk";
import fs from "fs";
import path from "path";

const client = new Anthropic({ apiKey: process.env.AI_API_KEY });
const today = new Date().toISOString().slice(0, 10);

const prompt = `Write a 700-word article for a site about AI automation.
Topic: a practical tip for teams automating business processes with AI in 2026.
Format: Markdown frontmatter followed by article body.
Frontmatter fields: title, slug, excerpt (15–25 words), date (${today}), publishedDate (${today}), tags (array), tier ("free"), status ("live"), author ("AutonomousHQ"), readTime ("6 min read").
Slug must match the filename you'd use (lowercase, hyphens, no special chars).
Return only the raw Markdown, no commentary.`;

const message = await client.messages.create({
  model: "claude-opus-4-5",
  max_tokens: 1500,
  messages: [{ role: "user", content: prompt }],
});

const content = message.content[0].text.trim();

// Extract slug from frontmatter
const slugMatch = content.match(/^slug:\s*["']?([a-z0-9-]+)["']?/m);
if (!slugMatch) {
  console.error("No slug found in generated content");
  process.exit(1);
}
const slug = slugMatch[1];

const outputPath = path.join("web/content/articles", `${slug}.md`);

// Don't overwrite if it already exists
if (fs.existsSync(outputPath)) {
  console.log(`File already exists: ${outputPath} — skipping`);
  process.exit(0);
}

fs.writeFileSync(outputPath, content, "utf8");
console.log(`Written: ${outputPath}`);
```
This script keeps things simple: one API call, one file written, one exit code. The workflow layer handles retries and commit logic.
Step 3: Create the Workflow File
Create .github/workflows/daily-content.yml:
```yaml
name: Daily Content Generation

on:
  schedule:
    - cron: "0 7 * * *" # 07:00 UTC daily
  workflow_dispatch: # allow manual runs

jobs:
  generate:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.DEPLOY_TOKEN }}

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm ci

      - name: Generate content
        env:
          AI_API_KEY: ${{ secrets.AI_API_KEY }}
        run: node .github/scripts/generate-content.js

      - name: Validate output
        run: |
          for f in $(git ls-files --others --exclude-standard web/content/articles/); do
            grep -q '^slug:' "$f" || { echo "Missing slug: $f"; exit 1; }
            grep -q '^excerpt:' "$f" || { echo "Missing excerpt: $f"; exit 1; }
            grep -q 'status: "live"' "$f" || { echo "Missing status: $f"; exit 1; }
          done

      - name: Commit and push
        run: |
          git config user.email "bot@yourorg.com"
          git config user.name "Content Bot"
          git add web/content/articles/
          git diff --cached --quiet || git commit -m "content: daily article — $(date -u +%Y-%m-%d)"
          git push
```
The validate step runs before the commit. If the AI output is malformed — missing a required field, wrong structure — the workflow fails, nothing is pushed, and you get a notification via GitHub's built-in failure alerts.
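If the shell checks grow unwieldy, the same validation can live in Node alongside the generation script. A minimal regex-based sketch (the field list and function name are illustrative; a frontmatter parser like gray-matter would be more robust):

```javascript
// Return the required frontmatter fields missing from a Markdown file.
// An empty array means the file passes validation.
const REQUIRED_FIELDS = ["title", "slug", "excerpt", "date", "status"];

function missingFrontmatterFields(markdown) {
  // Capture the first --- ... --- block at the top of the file.
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return REQUIRED_FIELDS.slice(); // no frontmatter at all
  const frontmatter = match[1];
  return REQUIRED_FIELDS.filter(
    (field) => !new RegExp(`^${field}:`, "m").test(frontmatter)
  );
}
```

The script can print the missing fields and exit non-zero, which fails the workflow step just like the grep version.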
Step 4: Add Deployment
If your site deploys on push (Vercel, Netlify, Cloudflare Pages), the push in the final step will trigger it automatically. No extra configuration needed.
If you're using a custom deployment step, add it after the push:
```yaml
      - name: Deploy
        run: npm run deploy
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
```
Step 5: Test It
Trigger the workflow manually via Actions → Daily Content Generation → Run workflow. Watch the logs. On the first run you'll likely need to tune your prompt to get consistent frontmatter formatting.
A few prompt tips that help:
- Ask the model to return only raw Markdown with no preamble
- Specify exact field names and their expected formats
- Provide the date explicitly so the model doesn't hallucinate one
- Ask for the slug in the prompt and validate it programmatically
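The last tip is worth enforcing in code rather than trusting the model. A small sketch of the slug check (isValidSlug is an illustrative helper, not part of the script above):

```javascript
// A slug must be lowercase and hyphen-separated, with no leading,
// trailing, or doubled hyphens — the same rules the prompt specifies.
function isValidSlug(slug) {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(slug);
}
```

Rejecting a bad slug early (with a non-zero exit) fails the workflow before anything is committed.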
Hardening for Production
Once the basic pipeline works, a few additions make it production-grade:
Idempotency: The script above already checks whether the file exists before writing. This means re-running the workflow on the same day won't create duplicates.
Retry logic: Wrap the API call in a retry loop (3 attempts with exponential backoff). API calls occasionally fail transiently; a retry loop keeps your pipeline from failing on a 529.
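A sketch of that retry loop (withRetry is an illustrative name; in the script, it would wrap the client.messages.create call):

```javascript
// Retry an async operation with exponential backoff:
// with the defaults, waits 1s, then 2s between the three attempts.
async function withRetry(fn, attempts = 3, baseDelayMs = 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        const delay = baseDelayMs * 2 ** i;
        await new Promise((resolve) => setTimeout(resolve, delay));
      }
    }
  }
  throw lastError;
}

// Usage in the script:
// const message = await withRetry(() => client.messages.create({ ... }));
```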
Slack or Discord alerts: Add a final step that posts to a webhook if the workflow fails. You want to know if a day's content was missed.
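A sketch of that alert step, assuming a Slack-style incoming webhook URL stored as a secret (ALERT_WEBHOOK_URL and the message format are illustrative):

```javascript
// Build a Slack-compatible payload describing the failed run.
function buildFailureAlert(workflowName, runUrl, dateStr) {
  return {
    text: `:x: ${workflowName} failed on ${dateStr}: ${runUrl}`,
  };
}

// Send it with the built-in fetch (Node 18+); called from a workflow
// step guarded by `if: failure()` so it only runs when something broke.
async function sendAlert(payload) {
  await fetch(process.env.ALERT_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}
```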
Content review queue: Instead of committing directly to main, commit to a content/draft branch and open a pull request. A human (or another automated check) can approve before it goes live.
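A sketch of opening that pull request from a script, assuming @octokit/rest is installed; the owner, repo, and branch names are placeholders:

```javascript
// import { Octokit } from "@octokit/rest"; // assumes the package is installed

// Build the parameters for the review pull request.
function buildPullRequestParams(slug, dateStr) {
  return {
    owner: "your-org",     // placeholder
    repo: "your-site",     // placeholder
    title: `content: ${slug} (${dateStr})`,
    head: "content/draft", // the branch the bot pushed to
    base: "main",
    body: `Automated content for ${dateStr}. Review before merging.`,
  };
}

// const octokit = new Octokit({ auth: process.env.DEPLOY_TOKEN });
// await octokit.rest.pulls.create(buildPullRequestParams(slug, today));
```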
What This Pattern Enables
Once you have a working pipeline for one content type, replicating it for others takes under an hour. The same structure — cron trigger, generation script, validation, commit — works for tutorials, changelogs, product descriptions, social posts, and internal reports.
The pipeline doesn't replace editorial judgement. But it reliably handles the production layer: the mechanical work of creating, formatting, and publishing structured text. That frees the humans in your organisation to focus on strategy, quality review, and the decisions that actually require judgement.
Start with one content type. Get it stable. Then expand.