AutonomousHQ
Intermediate · 11 min read · 2026-03-22

# How to Build a Multi-Agent Research Pipeline

One agent searches, one agent writes, one agent checks the facts. How to split a content workflow across multiple Claude agents using NanoClaw's team feature — and why splitting it beats trying to do everything in one prompt.

Running one large agent prompt that does everything — research, write, verify, publish — sounds efficient. In practice it produces mediocre output across the board. The research phase is shallow because the agent is already "thinking" about the write-up. The writing is generic because the agent didn't go deep enough on the sources. The fact-check is half-hearted because the agent already committed to the draft.

The better approach: separate agents with separate jobs. One agent finds and assesses sources. One agent writes from those sources. One agent checks the output before it ships. Each agent has a focused prompt, a clear scope, and no responsibility for the other stages.

This tutorial shows how to build this pipeline using NanoClaw's team feature. The result: a research-to-publish workflow that produces noticeably better output than a single monolithic agent, at similar cost.


## What you need

- NanoClaw running on a server (see the NanoClaw setup tutorial if you haven't done this)
- An Anthropic API key with access to Claude Sonnet
- A directory where agents can read and write intermediate files
- About 30 minutes

## The pipeline design

Three agents, one shared workspace:

```
[Research Agent] → sources.json → [Writing Agent] → draft.md → [Fact-Check Agent] → final.md
```

**Research Agent** — given a topic, searches the web, identifies 5-8 credible sources, extracts key facts, quotes, and data points, and saves a structured JSON file.

**Writing Agent** — reads sources.json, writes a complete draft in the correct format and voice, and saves it as draft.md.

**Fact-Check Agent** — reads draft.md and sources.json, verifies every claim against the sources, flags any unverifiable statement, and outputs a reviewed final.md with inline notes.

The agents run sequentially. Each one waits for the previous agent's output file before starting. This is the simplest coordination model — no message-passing, just files.
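This file-based coordination can be sketched as a small polling helper — a minimal Python sketch; the timeout and poll interval are illustrative assumptions, not NanoClaw settings:

```python
import time
from pathlib import Path

def wait_for_file(path: str, timeout: float = 600.0, poll: float = 5.0) -> Path:
    """Block until `path` exists, then return it; raise TimeoutError otherwise.

    This is the entire coordination model: the next agent starts
    only once the previous agent's output file appears on disk.
    """
    deadline = time.monotonic() + timeout
    p = Path(path)
    while time.monotonic() < deadline:
        if p.exists():
            return p
        time.sleep(poll)
    raise TimeoutError(f"{path} did not appear within {timeout}s")
```

An orchestrator would call `wait_for_file(".../sources.json")` before prompting the writing agent, and `wait_for_file(".../draft.md")` before prompting the fact-checker.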


## Step 1: Set up the shared workspace

All three agents need access to the same directory. On your server (`mkdir -p` creates the parent directories too):

```
mkdir -p /workspace/pipeline/research/runs
```

Each run gets its own subdirectory so you can review past outputs:

```
mkdir -p /workspace/pipeline/research/runs/$(date +%Y-%m-%d)
```

The agents will read and write to this path. Make sure the NanoClaw container has this directory mounted — check your group's container config if you run into permission errors.
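If your orchestrator creates the run directory from code instead of the shell, a minimal Python sketch looks like this — the root path mirrors the commands above, and the write probe is just an illustrative fail-fast check for the permission errors mentioned:

```python
import datetime
from pathlib import Path

# Illustrative root; adjust to wherever your container mounts the workspace.
WORKSPACE = Path("/workspace/pipeline/research/runs")

def make_run_dir(root: Path = WORKSPACE) -> Path:
    """Create (or reuse) today's run directory, e.g. .../runs/2026-03-22."""
    run_dir = root / datetime.date.today().isoformat()
    run_dir.mkdir(parents=True, exist_ok=True)
    # Fail fast on permission problems instead of failing mid-pipeline.
    probe = run_dir / ".write-test"
    probe.touch()
    probe.unlink()
    return run_dir
```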


## Step 2: Create the Research Agent group

In Claude Code, create a new NanoClaw group:

```
/new-group research-agent
```

Open `~/.claude/groups/research-agent/CLAUDE.md` and set the agent's instructions:

````markdown
# Research Agent

You are a research agent. Your job is to find credible, specific source material on a given topic.

## Input

You receive a topic as a plain text prompt.

## Process

1. Search the web for recent, authoritative sources on the topic (aim for the last 6 months unless historical context is needed)
2. Visit each promising source URL and read the content
3. Extract: the key claim or data point, the source URL, the publication date, and a direct quote if available
4. Assess each source for credibility: prefer primary sources, named authors, and verifiable data over opinion pieces and anonymous posts
5. Select the 5-8 strongest sources

## Output

Save a file to /workspace/pipeline/research/runs/[YYYY-MM-DD]/sources.json with this structure:

```json
{
  "topic": "[topic]",
  "research_date": "[ISO date]",
  "sources": [
    {
      "title": "[article or page title]",
      "url": "[full URL]",
      "date": "[publication date]",
      "key_claim": "[one sentence summary of what this source contributes]",
      "quote": "[direct quote if available, otherwise null]",
      "credibility": "high | medium | low",
      "credibility_reason": "[why you rated it this way]"
    }
  ],
  "summary": "[2-3 sentence summary of what the research found overall]"
}
```

## Rules

- Do not invent sources. Every entry must be a URL you actually visited.
- If a source is paywalled or returns a 404, note it and skip it.
- If you cannot find 5 credible sources, stop and say so rather than padding with weak sources.
- Do not start writing the article. Your job ends when sources.json is saved.
````
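Before handing the file to the writing stage, it's worth sanity-checking it mechanically. A minimal Python sketch against the schema above — the field names come from the spec, and the 5-8 range mirrors the rules:

```python
import json

# Fields every source entry must carry, per the schema above.
REQUIRED = {"title", "url", "date", "key_claim", "quote",
            "credibility", "credibility_reason"}

def validate_sources(raw: str) -> list[str]:
    """Return a list of problems with a sources.json payload (empty = OK)."""
    problems = []
    data = json.loads(raw)
    sources = data.get("sources", [])
    if not 5 <= len(sources) <= 8:
        problems.append(f"expected 5-8 sources, got {len(sources)}")
    for i, src in enumerate(sources):
        missing = REQUIRED - src.keys()
        if missing:
            problems.append(f"source {i}: missing {sorted(missing)}")
        if src.get("credibility") not in ("high", "medium", "low"):
            problems.append(f"source {i}: bad credibility value")
    return problems
```

Run it as a gate in the orchestrator: if the list is non-empty, stop the pipeline instead of letting the writing agent work from a malformed brief.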

---

## Step 3: Create the Writing Agent group

```
/new-group writing-agent
```

Open `~/.claude/groups/writing-agent/CLAUDE.md`:

````markdown
# Writing Agent

You are a content writing agent. You write articles based on structured research.

## Input

Read /workspace/pipeline/research/runs/[YYYY-MM-DD]/sources.json

## Process

1. Read all sources carefully
2. Identify the 3-5 most important points the article should make
3. Draft an article that makes those points using concrete evidence from the sources
4. Every factual claim must be traceable to a source in sources.json — include the URL inline as a markdown link

## Output format

```markdown
---
title: "[article title]"
date: "2026-03-22"
sources_file: "runs/[YYYY-MM-DD]/sources.json"
---

[article body — 600-900 words]
```

Save to /workspace/pipeline/research/runs/[YYYY-MM-DD]/draft.md

## Writing rules

- No em dashes
- No vague scene-setting openers. Start with the most important thing.
- Short paragraphs — 2-4 sentences maximum
- Concrete numbers and named tools where possible
- Do not editorialize beyond what the sources support
- Do not pad. 600 words of good content beats 900 words of filler.
````
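The traceability rule can also be enforced mechanically by diffing the draft's inline links against the source list. A rough Python sketch — the regex only covers plain `[text](url)` links, which is an assumption about how the agent formats citations:

```python
import json
import re

# Matches [link text](url) and captures the url.
LINK_RE = re.compile(r"\[[^\]]+\]\(([^)\s]+)\)")

def untraceable_links(draft_md: str, sources_json: str) -> list[str]:
    """Return inline link URLs in the draft that aren't in sources.json."""
    known = {s["url"] for s in json.loads(sources_json)["sources"]}
    return [url for url in LINK_RE.findall(draft_md) if url not in known]
```

Any URL this returns is either a hallucinated citation or a source the research agent never logged — both worth catching before the fact-check stage.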

---

## Step 4: Create the Fact-Check Agent group

```
/new-group factcheck-agent
```

Open `~/.claude/groups/factcheck-agent/CLAUDE.md`:

````markdown
# Fact-Check Agent

You are a fact-checking agent. You review article drafts for accuracy and flag anything unverifiable.

## Input

Read both:
- /workspace/pipeline/research/runs/[YYYY-MM-DD]/draft.md
- /workspace/pipeline/research/runs/[YYYY-MM-DD]/sources.json

## Process

For every factual claim in the draft:

1. Identify which source in sources.json supports it
2. If a claim is supported: mark it VERIFIED
3. If a claim cannot be traced to a source: mark it UNVERIFIED and note why
4. If a claim contradicts a source: mark it INCORRECT and explain the discrepancy

## Output

Save /workspace/pipeline/research/runs/[YYYY-MM-DD]/final.md — a copy of the draft with inline annotations:

- Add `<!-- VERIFIED: [source URL] -->` after each verified claim
- Add `<!-- UNVERIFIED: [reason] -->` after each unverifiable claim
- Add `<!-- INCORRECT: [what the source actually says] -->` after any incorrect claim

At the end of the file, add a summary section:

```markdown
## Fact-check summary

- Verified claims: [N]
- Unverified claims: [N] — [list them]
- Incorrect claims: [N] — [list them]
- Recommendation: PUBLISH | REVISE | REJECT
```

## Rules

- Be strict. "Probably true" is not VERIFIED.
- Do not rewrite the article. Annotate only.
- If you cannot access a source URL to verify it, note that rather than assuming the claim is false.
````
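Because the annotations follow a fixed format, the summary can also be derived mechanically as a cross-check on the agent's own count. A Python sketch — the REVISE thresholds here are illustrative assumptions, not part of the agent spec:

```python
import re

def factcheck_summary(final_md: str) -> dict:
    """Count the inline annotations and suggest a recommendation.

    Illustrative policy: any INCORRECT claim means REVISE, more than
    two UNVERIFIED claims means REVISE, otherwise PUBLISH.
    """
    counts = {
        kind: len(re.findall(rf"<!-- {kind}:", final_md))
        for kind in ("VERIFIED", "UNVERIFIED", "INCORRECT")
    }
    if counts["INCORRECT"] > 0 or counts["UNVERIFIED"] > 2:
        counts["recommendation"] = "REVISE"
    else:
        counts["recommendation"] = "PUBLISH"
    return counts
```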

---

## Step 5: Wire the pipeline together

The simplest coordination: a single orchestrator prompt that triggers each agent in sequence. You can do this manually or schedule it.

**Manual trigger:**

In Claude Code, send to each group in order:

```
In research-agent:
  Research the topic "AI agent orchestration tools in 2026" and save sources
  to /workspace/pipeline/research/runs/2026-03-22/sources.json

[wait for research-agent to complete]

In writing-agent:
  Read the sources at /workspace/pipeline/research/runs/2026-03-22/sources.json
  and write a 700-word article. Save the draft to
  /workspace/pipeline/research/runs/2026-03-22/draft.md

[wait for writing-agent to complete]

In factcheck-agent:
  Fact-check the draft at /workspace/pipeline/research/runs/2026-03-22/draft.md
  against sources at /workspace/pipeline/research/runs/2026-03-22/sources.json.
  Save the annotated output to /workspace/pipeline/research/runs/2026-03-22/final.md
```


**Scheduled pipeline:**

To automate end-to-end, schedule a task in the research-agent group that chains all three steps:

```
Schedule a daily task at 7am:

1. Research "AI agent news this week" and save to /workspace/pipeline/research/runs/$(date +%Y-%m-%d)/sources.json
2. Then tell writing-agent to write a draft from that sources file
3. Then tell factcheck-agent to review and produce final.md
4. Send a Telegram message when final.md is ready for review
```

NanoClaw's task scheduler handles the cron. Each agent runs in sequence; the next step only fires after the previous file appears.

---

## What this costs to run

| Component | Cost |
|---|---|
| Research Agent (web search + analysis, ~3,000 tokens) | ~$0.003 per run |
| Writing Agent (~4,000 tokens output) | ~$0.006 per run |
| Fact-Check Agent (~2,000 tokens) | ~$0.002 per run |
| **Total per article** | **~$0.01** |

A daily pipeline running 365 times per year costs roughly $3.65 in API fees. The VPS is $4-6/month. Total: under $10/month for a daily article pipeline.

---

## The quality difference

The single-agent version produces plausible articles. The three-agent version produces articles you can trust.

The research agent has one job and goes deep on it — it doesn't shortcut the source search because it's "already thinking" about the structure. The writing agent starts with a structured brief rather than a blank page, so it produces more coherent arguments. The fact-check agent catches what the writing agent glossed over.

The UNVERIFIED annotations in the final output are particularly useful. They show you exactly where the draft is soft — where you need to either find a source or cut the claim. Without them, you're trusting that the writing agent didn't hallucinate a statistic.

---

## Next steps

- Add a fourth agent that converts fact-checked articles into Beehiiv newsletter drafts via the API
- Build a routing layer that picks the research topic automatically from an RSS feed
- Add a human-in-the-loop step: email yourself the fact-check summary before the article goes live

The [NanoClaw GitHub repo](https://github.com/qwibitai/nanoclaw) has documentation on team coordination and agent-to-agent messaging for more complex pipeline designs.

---

**AutonomousHQ is built on a pipeline like this.** Sign up to the newsletter for weekly updates on what we are building and what is breaking.