How to Build an AI SEO Blog Pipeline That Publishes Without You
Set up a fully automated AI pipeline that researches keywords, writes SEO blog posts, and publishes them to your site without any manual work.
Running a blog that consistently ranks in search takes time -- time most solo operators and small teams simply do not have. This tutorial walks you through building an AI pipeline that handles keyword research, article drafting, SEO formatting, and publishing on a schedule. Once it is set up, you do not touch it.
What You Will Build
A pipeline that:
- Pulls keyword opportunities from a Google Sheet
- Uses an AI model to write a structured, SEO-optimized article
- Formats the output as Markdown with frontmatter
- Commits the file to a GitHub repo (your headless CMS)
- Triggers a deploy via a webhook
Tools used: Make (formerly Integromat), Claude API (or OpenAI), GitHub API, Google Sheets, Vercel or Netlify.
Time to set up: 2 to 3 hours. Ongoing cost: Under $20/month at moderate volume.
Step 1: Set Up Your Keyword Queue in Google Sheets
Create a Google Sheet with these columns:
| keyword | status | priority | target_url | notes |
|---------|--------|----------|------------|-------|
| how to automate invoicing | pending | high | /tutorials/automate-invoicing | |
| best no-code tools 2026 | pending | medium | /tutorials/no-code-tools | |
`keyword` is the primary search term you want to rank for. `status` starts as `pending`, then changes to `processing` and then `published`. `priority` lets you control what gets written first.
Keep 10 to 20 keywords in the queue at all times. When one publishes, add another.
Step 2: Get Your API Keys
You need three keys before building the automation:
- Claude API key from console.anthropic.com (or OpenAI if you prefer).
- GitHub Personal Access Token with `repo` scope from GitHub Settings > Developer Settings.
- Google Sheets API access via a service account -- download the JSON credentials from Google Cloud Console.
Store all three in Make's built-in credential vault under Connections. Never paste raw keys into module fields.
Step 3: Build the Make Scenario
Create a new scenario in Make with these modules in order:
Module 1: Schedule Trigger
Use the Schedule module set to run once daily at 6am UTC. This is your pipeline's heartbeat.
Module 2: Google Sheets -- Search Rows
Configure it to search your keyword sheet for rows where status = pending and priority = high. Set the limit to 1. This ensures one article publishes per day, which is a sustainable cadence for most sites.
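If you ever move this step out of Make (into a script or a serverless function), the row-picking logic is small. A minimal sketch, assuming rows shaped like the sheet above; `next_pending` is a hypothetical helper, not a Make feature:

```python
def next_pending(rows, priority="high"):
    """Mirror Module 2's filter: first row with status == pending
    and the requested priority (equivalent to limit = 1)."""
    for row in rows:
        if row["status"] == "pending" and row["priority"] == priority:
            return row
    return None  # nothing queued at this priority

queue = [
    {"keyword": "best no-code tools 2026", "status": "pending", "priority": "medium"},
    {"keyword": "how to automate invoicing", "status": "pending", "priority": "high"},
]
row = next_pending(queue)  # picks the high-priority row
```

One implication of `limit = 1`: if no high-priority rows are pending, nothing publishes that day, so keep the queue stocked.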
Module 3: HTTP -- Call Claude API
Use an HTTP module with:
- URL: `https://api.anthropic.com/v1/messages`
- Method: POST
- Headers: `x-api-key: {{your_key}}`, `anthropic-version: 2023-06-01`, `content-type: application/json`
- Body:
```json
{
  "model": "claude-opus-4-5",
  "max_tokens": 4096,
  "messages": [
    {
      "role": "user",
      "content": "Write a comprehensive, SEO-optimized blog post for the keyword: '{{keyword}}'. Requirements: 900-1200 words, H2 and H3 subheadings, no fluff, practical advice with concrete steps. Output valid Markdown only. Start with a frontmatter block including title, excerpt (one sentence, 15-30 words), date (today), tags, and readTime. No em dashes."
    }
  ]
}
```
Map {{keyword}} from the Google Sheets module output.
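The same request works outside Make too, which is handy for testing your prompt before wiring up the scenario. A sketch using only the standard library; the prompt is abbreviated here, and `draft_article` is a hypothetical helper:

```python
import json
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_payload(keyword: str) -> dict:
    # Same body as Module 3; prompt shortened for readability.
    return {
        "model": "claude-opus-4-5",
        "max_tokens": 4096,
        "messages": [{
            "role": "user",
            "content": (f"Write a comprehensive, SEO-optimized blog post for the "
                        f"keyword: '{keyword}'. Output valid Markdown only, "
                        "starting with a frontmatter block."),
        }],
    }

def draft_article(keyword: str, api_key: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(keyword)).encode("utf-8"),
        headers={
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    # The Messages API returns the generated text in content[0].text.
    return data["content"][0]["text"]
```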
Module 4: GitHub -- Create a File
Use the GitHub app in Make or the HTTP module to call the GitHub Contents API:
- URL: `https://api.github.com/repos/YOUR_ORG/YOUR_REPO/contents/content/blog/{{slug}}.md`
- Method: PUT
- Body:
```json
{
  "message": "content: auto-publish {{keyword}}",
  "content": "{{base64_encode(article_body)}}",
  "branch": "main"
}
```
To generate the slug, add a Text module before this step that lowercases the keyword and replaces spaces with hyphens: `{{lower(replace(keyword; " "; "-"))}}`.
For content, Make's base64 encode function is `{{toBase64(article_body)}}`.
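If you prefer to do this step in code (or hit the encoding edge cases noted in Step 6), the slug and commit logic looks like this. A sketch, assuming a fine-grained token with contents write access; `commit_file` is a hypothetical helper:

```python
import base64
import json
import re
import urllib.request

def slugify(keyword: str) -> str:
    # Lowercase, spaces to hyphens -- same as the Text module above.
    slug = keyword.lower().replace(" ", "-")
    # Also strip characters that are awkward in file paths, which the
    # Make formula alone does not handle.
    return re.sub(r"[^a-z0-9-]", "", slug)

def encode_content(markdown: str) -> str:
    # UTF-8-safe base64; sidesteps the special-character failure in Step 6.
    return base64.b64encode(markdown.encode("utf-8")).decode("ascii")

def commit_file(repo: str, keyword: str, markdown: str, token: str) -> dict:
    slug = slugify(keyword)
    url = f"https://api.github.com/repos/{repo}/contents/content/blog/{slug}.md"
    body = {
        "message": f"content: auto-publish {keyword}",
        "content": encode_content(markdown),
        "branch": "main",
    }
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.github+json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```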
Module 5: Google Sheets -- Update a Row
Mark the row's status as published and log the target_url. This prevents the same keyword from being processed again.
Module 6: HTTP -- Trigger Deploy Webhook
If you use Vercel, go to Project Settings > Git > Deploy Hooks and create a hook URL. Add an HTTP module that sends a POST request to that URL. Netlify has the same feature under Site Settings > Build Hooks.
Your pipeline is now complete.
Step 4: Improve Article Quality With a Better Prompt
The default prompt produces decent articles. These additions produce better ones:
Add a competitor context block:
Also note: the top-ranking articles for this keyword cover [X, Y, Z].
Cover these topics but go deeper with more specific examples.
You can automate this by adding a SerpAPI call before the Claude module and extracting the top 3 result titles to inject into the prompt.
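Extracting those titles is a small parsing step. A sketch assuming SerpAPI's documented JSON shape (an `organic_results` array with `title` fields); verify against their docs before relying on it:

```python
import json
import urllib.parse
import urllib.request

def fetch_serp(keyword: str, api_key: str) -> dict:
    # SerpAPI's Google search endpoint returns a JSON document.
    qs = urllib.parse.urlencode({"engine": "google", "q": keyword,
                                 "api_key": api_key})
    with urllib.request.urlopen(f"https://serpapi.com/search.json?{qs}") as resp:
        return json.loads(resp.read())

def top_titles(serp: dict, n: int = 3) -> list:
    # First n organic result titles, to inject into the prompt.
    return [r["title"] for r in serp.get("organic_results", [])[:n]]

def competitor_block(titles: list) -> str:
    joined = ", ".join(titles)
    return (f"Also note: the top-ranking articles for this keyword cover {joined}. "
            "Cover these topics but go deeper with more specific examples.")
```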
Specify your brand voice:
Tone: direct, practical, no corporate language.
The reader is a solo operator or small team building lean.
Write like you are explaining something to a smart peer, not pitching a product.
Request internal links:
Where relevant, suggest 2-3 internal links in the format [anchor text](/slug).
Use these existing URLs: /tutorials/automate-invoicing, /tutorials/no-code-tools
Step 5: Add a Review Step (Optional but Recommended)
If you want a human checkpoint before publishing, add a step between the Claude call and the GitHub commit:
- Add a Gmail or Slack module that sends you the draft article.
- Replace the Schedule trigger with a Google Sheets watcher that monitors a `review_approved` column.
- When you approve a row (set the column to `yes`), the scenario resumes from that row.
This gives you final say without requiring you to do any writing.
Step 6: Monitor and Tune
After your pipeline runs for two weeks, check:
- Publish rate: Did one article go out per day? Check the Make scenario history for errors.
- Index rate: Use Google Search Console to see how many new posts are getting indexed.
- Token costs: Log into the Anthropic console and check usage. At claude-opus-4-5 rates, one 1,000-word article costs roughly $0.10 to $0.30.
- Article quality: Read 3 to 5 posts manually. If they feel generic, tighten the prompt. Specificity in the prompt is the single biggest lever on quality.
Common failure points:
- GitHub API returns 422 if a file already exists at that path. Add slug deduplication logic.
- Make's base64 function can fail on special characters. Use a webhook intermediary (a small Cloudflare Worker or Vercel Edge Function) to handle encoding if needed.
- Claude occasionally produces articles without frontmatter. Add a Make filter that checks for `---` at the start of the response and routes failures to a Slack alert.
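If you route drafts through a small worker instead of a Make filter, the same check is a few lines, and you can go one step further and report which frontmatter key the model forgot. A sketch; `missing_keys` is a hypothetical helper:

```python
import re

# The keys the prompt in Step 3 asks for.
REQUIRED_KEYS = {"title", "excerpt", "date", "tags", "readTime"}

def has_frontmatter(article: str) -> bool:
    # Same check as the Make filter: the response must open with ---.
    return article.lstrip().startswith("---")

def missing_keys(article: str) -> set:
    # Inspect the first frontmatter block and report any required key
    # the model forgot, so the Slack alert can say what is wrong.
    match = re.match(r"\s*---\n(.*?)\n---", article, re.DOTALL)
    if not match:
        return set(REQUIRED_KEYS)
    keys = {line.split(":", 1)[0].strip()
            for line in match.group(1).splitlines() if ":" in line}
    return REQUIRED_KEYS - keys
```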
What This Pipeline Does Not Do
This setup does not handle image generation (add a Replicate or fal.ai module for that), link building, or social distribution. Those are separate pipelines worth building once this one is stable.
It also does not guarantee rankings. SEO takes time regardless of tooling. What this pipeline does is remove the execution bottleneck so you can publish consistently without the manual overhead.
Next Steps
Once this pipeline is running:
- Add a second queue for content refreshes (articles older than 6 months)
- Connect a SerpAPI module to auto-detect ranking drops and queue rewrites
- Build a social distribution pipeline that takes the published URL and schedules Twitter and LinkedIn posts
The core pattern -- queue in a spreadsheet, AI generates content, GitHub stores it, webhook deploys it -- extends to almost any content type. Adapt it for changelogs, product docs, or landing pages.