Zero SEO experience. I had Claude generate 25K impressions in 30 days.

The AI-native content system I build for SaaS founders who need to rank without hiring an SEO team


This guide shows you how to go from 0 to 25K impressions in 30 days with zero SEO experience.


The result

A SaaS founder hired me to build a content system. Thirty days after we shipped, the site had 25,000 Google impressions.

No agency. No backlink campaign.

What we built wasn't a blog workflow. It was a content operating system:

  • 20 articles a day, sourced from their existing transcripts

  • AI owns writing, planning, QA, linking, image prompting

  • Scripts only handle transport: API calls, file writes, uploads

  • Every article gated on QA pass before upload

This is the architecture I install.


Got a question?

If you have any questions, or want to know whether your site is conversion-optimized, feel free to DM me or schedule a call: https://cal.com/will-debause-u9q25n/15min


The core principle: AI-native, not automation-native

I've watched a lot of founders try to scale SEO with AI. Most fail the same way.

They write scripts to generate content. Scripts to check quality. Scripts to place links. Three weeks later the system produces 100 articles of slop and the founder is debugging a QA script that flags nothing.

The trap is always the same: trying to solve AI judgment problems with code.

The systems I build do the opposite. Scripts stay dumb. They move files, call APIs, upload drafts. That's it. The AI does every piece of work that requires reading, reasoning, or writing.

Two rules make it work:

  1. Scripts are transport only. API calls, file writes, uploads, deterministic packaging gates. Nothing else.

  2. AI owns writing, QA, linking, and editorial judgment. Not a separate LLM API. Not a QA script. The agent, in session, reading every article.

If script logic starts making editorial choices, I delete the script. That's the anti-drift rule.

The System I Install

1. Source-first pipeline

Every article is sourced from a real transcript. No articles written from general knowledge.

This is the piece most founder SEO setups skip, and it's why their content reads like everyone else's. Transcripts give the AI real material to work from, which gives Google real signal to rank.

For the client in the story, we set up two source pools (604 transcripts total). One transcript produces 10 articles. Pull one from each pool per day and you're at 20 articles/day.

A queue tracker records which transcripts have been processed. Selection is cluster-driven: fill gaps in underserved content clusters first.
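The cluster-driven selection above can be sketched in a few lines of Node. Everything here (the tracker shape, field names like `cluster` and `processed`) is illustrative, not the client's actual tracker:

```javascript
// Pick the next transcript from the queue, filling underserved clusters first.
// queue: [{ id, cluster, processed }]
function nextTranscript(queue) {
  // Count how many transcripts in each cluster have already been processed
  const counts = {};
  for (const t of queue) {
    counts[t.cluster] = (counts[t.cluster] || 0) + (t.processed ? 1 : 0);
  }
  // Among unprocessed transcripts, prefer the least-served cluster
  const unprocessed = queue.filter((t) => !t.processed);
  unprocessed.sort((a, b) => (counts[a.cluster] || 0) - (counts[b.cluster] || 0));
  return unprocessed[0] || null;
}
```

With two source pools, you'd run this once per pool per day to get the two transcripts that feed the 20 daily articles.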

2. The daily plan

Every day runs off a single plan.json file. Status lifecycle: planned → drafted → qa_pass → images_done → uploaded.

Scripts enforce the transitions. An item can't upload without qa_pass and can't publish without images_done. The AI owns everything that happens inside a state.
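A minimal sketch of that deterministic status gate, assuming a lifecycle of planned → drafted → qa_pass → images_done → uploaded (the exact state names in the real plan.json may differ):

```javascript
// Legal next state for each status; anything else is an illegal jump.
const NEXT = {
  planned: "drafted",
  drafted: "qa_pass",
  qa_pass: "images_done",
  images_done: "uploaded",
};

// Advance a plan item one state, refusing to skip or reorder states.
function advance(item, target) {
  if (NEXT[item.status] !== target) {
    throw new Error(`illegal transition: ${item.status} -> ${target}`);
  }
  return { ...item, status: target };
}
```

The script only checks the transition; what happens inside each state (drafting, QA, image prompting) stays with the AI.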

3. Keyword data (DataForSEO, batched)

Pull once per keyword per day. Cap: 20 pulls/day. Cost: ~$0.003/article.

Data gets written to plan.json and reused across rewrites. No re-pulling.

Non-negotiable rule I enforce: collected SEO data must actually be used in the workflow. Before drafting, each plan item gets a seo_notes field summarizing how the keyword data will shape the article. Upload is blocked if seo_notes is empty.

This closes the loop most founder SEO workflows leave open. People pull keyword data and never reference it in drafts. Now they have to.

4. The pre-write gate

Before any drafting begins, we lock:

  • Title

  • Slug

  • Meta description

  • Canonical entities

  • Internal link targets

  • FAQ destination

Five minutes of thinking that saves thirty minutes of rewriting.
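As a concrete illustration, one plan item after the pre-write gate might look like the object below. The field names are assumptions for the sketch, apart from seo_notes and qa_notes.passes, which the upload gate checks:

```javascript
// Illustrative shape of one locked plan.json item (values are made up).
const planItem = {
  title: "How to Price a SaaS Add-On",
  slug: "price-saas-add-on",
  metaDescription: "A 155-character promise goes here.",
  canonicalEntities: ["usage-based pricing", "add-on revenue"],
  internalLinkTargets: ["/blog/pricing-models", "/blog/expansion-revenue"],
  faqDestination: "bottom-of-article",
  status: "planned",
  seo_notes: "",                 // must be non-empty before upload
  qa_notes: { passes: false },   // flipped only by the main agent's re-read
};
```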

5. Drafting sub-agents

Sub-agents draft. The main agent owns final judgment.

Every drafting sub-agent gets a standard handoff that forces it to read the brand voice spec, the article output spec, and the image prompt spec before writing a word. The sub-agent writes the article and embeds inline image markers where a visual genuinely helps:

<!-- IMAGE: inline-1 | alt: ... | prompt: ... -->

The sub-agent does not claim QA pass. Self-reported QA gets treated as unreliable by design.
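Downstream, a transport script only needs to extract those markers, never judge them. A sketch of that extraction, using the marker format shown above:

```javascript
// Match AI-written inline image markers: <!-- IMAGE: id | alt: ... | prompt: ... -->
const MARKER = /<!-- IMAGE: (\S+) \| alt: (.*?) \| prompt: (.*?) -->/g;

// Return every marker in a draft as { id, alt, prompt } objects.
function extractImageMarkers(markdown) {
  return [...markdown.matchAll(MARKER)].map(([, id, alt, prompt]) => ({ id, alt, prompt }));
}
```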

6. Main-agent QA (the gate most systems skip)

The main agent reads every completed draft in full against a QA checklist. Separate pass. Not a drafting-time self-check.

Checks that get missed without this pass:

  • Math verification (any walkthrough with numbers gets computed line by line)

  • FAQ answer length

  • Dead link detection

  • Same-cluster link count

  • Frontmatter-to-plan consistency (audience level, cluster)

  • Image placement rules

If anything fails, the article goes back to sub-agents for fixes. The main agent re-reads before setting qa_notes.passes = true.

This one gate is why the system produces 20 articles/day without shipping slop.

7. On-page SEO (applied, not just documented)

After QA pass, the AI writes:

  • Title tag (60 chars, keyword early)

  • Meta description (155 chars, includes a promise)

  • H2 structure matching search intent

  • 2–3 internal links to other posts on the site

  • FAQ schema block

  • Alt text on every image

In Framer, these map directly to CMS fields via the API. FAQ schema ships as JSON-LD or a custom-code component.
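A minimal sketch of building that FAQ schema block as schema.org FAQPage JSON-LD; the mapping into a Framer custom-code component is assumed:

```javascript
// Build an FAQPage JSON-LD string from AI-written question/answer pairs.
function faqJsonLd(faqs) {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map(({ question, answer }) => ({
      "@type": "Question",
      name: question,
      acceptedAnswer: { "@type": "Answer", text: answer },
    })),
  });
}
```

The resulting string drops into a `<script type="application/ld+json">` tag on the article page.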

8. Image generation (AI-native, two-pass)

Images run after QA, not before. QA validates the prompts themselves.

Two-pass protocol per inline image:

  • Pass 1 — Fact Lock: pull exact labels, values, numbers from the article section

  • Pass 2 — Render: generate with locked facts + layout constraints

No invented labels. No typos. No value mismatches. If pass 2 hallucinates, I issue a correction prompt that fixes text only and preserves layout.

Cover prompts get written in frontmatter during drafting. The script reads them verbatim. Scripts do not judge prompt quality.
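Reading that prompt verbatim is a pure transport job. A sketch, assuming a YAML-style frontmatter block and a hypothetical `coverPrompt:` key:

```javascript
// Pull the cover-image prompt out of frontmatter without interpreting it.
function coverPrompt(markdown) {
  const fm = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!fm) return null;
  const line = fm[1].split("\n").find((l) => l.startsWith("coverPrompt:"));
  return line ? line.slice("coverPrompt:".length).trim() : null;
}
```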

9. Upload gate

publish.mjs uploads to Framer CMS as draft only. Live publish requires a separate explicit mechanism.

Upload is blocked unless:

  • plan.json item has non-empty seo_notes

  • qa_notes.passes = true

  • First upload requires status images_done

The script does not verify editorial quality. It verifies structural state.
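The whole upload gate fits in one small function. A sketch of the structural check, with field names mirroring the rules above:

```javascript
// Deterministic upload gate: structural state only, no editorial judgment.
function canUpload(item, firstUpload = true) {
  const reasons = [];
  if (!item.seo_notes || item.seo_notes.trim() === "") reasons.push("empty seo_notes");
  if (!item.qa_notes || item.qa_notes.passes !== true) reasons.push("qa not passed");
  if (firstUpload && item.status !== "images_done") reasons.push("images not done");
  return { ok: reasons.length === 0, reasons };
}
```

publish.mjs would refuse to call the Framer API unless `ok` is true, and log the reasons otherwise.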

10. The allowed-scripts list (exhaustive)

Only these scripts exist in the system I install:

  • dataforseo-pull.mjs: fetch keyword data. API transport only.

  • generate-images.mjs: send AI-written prompts to the image model. Transport only.

  • publish.mjs: upload to Framer CMS. Transport + deterministic gates.

  • validate-draft-package.mjs: pre-upload schema check (missing fields, FAQ count, H1 in body).

Any script not on this list is policy drift and gets deleted. This one rule prevents most of the slow drift that kills these systems.
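For flavor, validate-draft-package.mjs is the kind of check that stays on the right side of the line: pure schema, zero comprehension. A sketch, with the thresholds (e.g. minimum FAQ count) as assumptions:

```javascript
// Pre-upload schema check: missing fields, FAQ count, stray H1 in body.
function validateDraftPackage(pkg) {
  const errors = [];
  for (const field of ["title", "slug", "metaDescription", "body"]) {
    if (!pkg[field]) errors.push(`missing field: ${field}`);
  }
  if (!Array.isArray(pkg.faqs) || pkg.faqs.length < 1) errors.push("no FAQ entries");
  // The CMS renders its own H1, so a markdown H1 in the body is a defect.
  if (/^#\s/m.test(pkg.body || "")) errors.push("H1 found in body");
  return errors;
}
```

Note what it never checks: whether the writing is any good. That stays with the main agent.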

The 5 Drift Traps I Refuse to Build

1. QA scripts.
The temptation is overwhelming. I don't write them. QA is comprehension. Scripts can't read.

2. Link-strategy scripts.
Back-linking and orphan detection are editorial judgment. The AI decides where links go during drafting and QA.

3. Sub-agent self-QA.
Sub-agents under-report issues. Every system that skips main-agent re-read ships broken math in walkthroughs within two weeks.

4. Not using the SEO data you paid for.
Pulling DataForSEO and never referencing it in drafts is the single most common leak. The seo_notes field plus upload gate fix this.

5. Image-count quotas.
Adding inline images to hit a number produces noise. The AI decides whether a visual materially improves clarity. Zero is a valid answer.

What Running This Costs

At 20 articles/day:

  • DataForSEO keyword data: ~$0.003/article

  • Cover images: ~$0.005 each

  • Inline images: ~$0.095 each (the AI decides how many are needed, often zero)

  • AI API costs for drafting + QA: variable, modest at scale

  • Framer CMS: existing subscription

Roughly $5/day total at full cadence.
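The arithmetic behind that number, using the per-unit figures above; the inline-image count and the AI API line are assumptions, since both vary day to day:

```javascript
// Back-of-envelope daily cost at full cadence.
function dailyCost({ articles = 20, inlinePerArticle = 0, aiApi = 4.5 } = {}) {
  const keyword = articles * 0.003;                    // DataForSEO pulls
  const covers = articles * 0.005;                     // one cover per article
  const inline = articles * inlinePerArticle * 0.095;  // often zero
  return keyword + covers + inline + aiApi;
}
```

With zero inline images the fixed data-and-cover cost is about $0.16/day, so the ~$5/day total is dominated by AI API spend.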

How I Install It

If you're a SaaS founder reading this, you fall into one of two buckets:

Build it yourself. If you have an engineer and want the full architecture, I can walk you through it. The principles above are the whole system. The details (the exact article-output-spec.md, the QA checklist, the handoff prompts) are the hard part.

I install it for you. This is what I actually do. I build the full system against your source material, your brand voice, your Framer site. First 30 days of output is shipped by me running the system. After that, your team (or no team) runs it.

What I need to start:

  1. Source material you own (transcripts, calls, internal docs)

  2. A Framer, Webflow, or Next.js site with CMS API access

  3. A decision on cadence — some clients want 20/day, some want 5/day. The system scales down without losing structure.

I build sites that turn visitors into customers.

https://cal.com/will-debause-u9q25n/15min

Join the waitlist.