How I Run a Weekly AEO Growth Loop (and Why It Compounds)
Joey Alvandi · April 23, 2026
I'm Director of Growth Marketing at Navless, an AI-native B2B startup. Every week, I run a three-step growth loop to close AEO and GEO content gaps across the surfaces our buyers actually use: ChatGPT, Perplexity, Claude, Google AI Overviews, and our own website. The active time I spend on the loop is about an hour per week. For the rest of the cycle, the engagement data from what I shipped feeds back in as input for next week's loop.
This article is the tactical walkthrough I wish I'd had when I started. Every growth marketer I talk to already gets why AEO matters. What they ask me about is the mechanics: what tools, what cadence, what I'm clicking on Monday morning.
The short version, front-loaded
Step 1: Navless identifies gaps in my AI search visibility: specific prompts where my category competitors get cited and I don't, or where my citation is weaker than it should be.
Step 2: Navless generates on-brand content fixes for those gaps, structured so human readers and AI systems can both parse them, and written in the Navless voice I've configured.
Step 3: I publish from inside Navless. The publish action attaches the AEO metadata and schema, pushes the content into our website Guide Agent's knowledge graph, and drops it onto our blog page at the same time.
That's the loop. The reason it compounds is that every piece of content I publish through Step 3 becomes new training data for Step 1: new engagement signals, new query coverage, new pages the knowledge graph can reason over. Week 12 is measurably better than Week 1 because the graph is denser and the gap detection gets sharper.
Here's how each step actually works.
The growth marketer's problem with AEO in 2026
Most AEO workflows I've seen at other B2B startups look like this. Someone runs a monthly audit in a spreadsheet: a list of buyer-relevant prompts, a column for "does ChatGPT cite us," another for Perplexity, another for Google AI Overviews. A junior marketer copies the responses, flags the gaps, and the team debates what to write next. A writer drafts something. It lands in a content calendar. It ships six weeks later, if it ships at all. By then, the citations have moved.
That workflow breaks for three reasons. First, the audit is stale before the sprint starts. Second, the gap-to-content handoff loses specificity: "we need a page on X" isn't the same as "here's the exact structured answer ChatGPT is looking for when a buyer asks this specific prompt." Third, the published page rarely carries the metadata, schema, or structural cues that AI systems actually use to decide what to cite.
A growth loop solves all three by collapsing detection, generation, and publishing into one platform where each step feeds the next.
Step 1: Identifying the content gaps
Every Monday morning I open Navless and look at one view: AI Search Brand Rank against our category competitors, broken down by prompt cluster.
Navless monitors how we appear across the major AI platforms (ChatGPT, Perplexity, Claude, Google AI Overviews) for the prompts our actual buyers use. The prompts aren't hypothetical. They're clustered from the questions real buyers ask during evaluations, pulled through our CRM and our own Guide Agent's visitor logs.
What I'm looking for is specific. I'm not asking "are we visible." I'm asking:
- Where we're uncited. Prompts where a competitor shows up and we don't, and where the prompt is directly relevant to a buying stage we care about (evaluation, comparison, differentiation).
- Where we're cited but weakly. Prompts where we appear in the answer but as a secondary option, or with outdated framing.
- Where the query volume is growing. Prompts that are gaining traction in AI platforms. That's a signal that a new category question is forming and early movers will own the citation.
I prioritize based on a simple filter: how close the prompt sits to a buying decision, and how far behind we are. A prompt like "best vendor for X use case" sits much closer to the decision than "what is X," so winning it matters more. A competitor with a 40-point Brand Rank lead is harder to catch this week than one with a 5-point lead. I pick three to five gaps to close per cycle.
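To make that filter concrete, here is a minimal sketch of the prioritization logic in Python. The field names, weights, and the discounting curve are my own illustrations, not Navless internals; the point is only that decision proximity scales the value of a gap while a large Brand Rank deficit discounts its catchability.

```python
# Hypothetical sketch of the prioritization filter described above.
# Field names and weights are illustrative, not Navless internals.
from dataclasses import dataclass


@dataclass
class PromptGap:
    prompt: str
    decision_proximity: float  # 0.0 (awareness) .. 1.0 (vendor shortlist)
    rank_gap: float            # competitor's Brand Rank lead, in points


def priority(gap: PromptGap) -> float:
    # Closer to the buying decision is worth more; a large rank gap
    # is discounted because it's hard to close in one weekly cycle.
    catchability = 1.0 / (1.0 + gap.rank_gap / 10.0)
    return gap.decision_proximity * catchability


gaps = [
    PromptGap("best vendor for X use case", 0.9, 5.0),
    PromptGap("what is X", 0.2, 5.0),
    PromptGap("X vs Y comparison", 0.8, 40.0),
]
for g in sorted(gaps, key=priority, reverse=True):
    print(round(priority(g), 2), g.prompt)
```

With these made-up numbers, the comparison prompt with a 40-point deficit ranks below the close-to-decision prompt with a 5-point deficit, which matches the reasoning above.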
The whole scan takes 1 to 3 minutes. If I did this manually across four AI platforms, it would take me a full day and the data would be inconsistent because AI responses drift.
Step 2: Generating on-brand content fixes
Once I've picked the gaps, I generate fixes from inside Navless.
The generation is more targeted than "AI writes a blog post." For each gap, Navless produces a structured content brief and a draft tuned to the exact structural features AI platforms use to cite. That means:
- An extractable one-paragraph answer at the top of the piece, written so an AI system can lift it as a direct citation without needing to synthesize across paragraphs.
- Named entities and statistics with sources, because AI platforms weight content that carries verifiable claims over content that makes unsourced assertions.
- Typed relationships between concepts: what this topic is, what it relates to, what it's a prerequisite for, what it compares against. This is the raw material the Guide Agent's knowledge graph uses in Step 3.
- A structure aligned to the buyer's question. Not "introduction, three vague sections, conclusion." A direct answer, followed by the reasoning, followed by the edge cases.
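The brief that carries those four features can be pictured as a small data structure. This is a hypothetical shape I'm sketching to show how the pieces fit together; the field names are mine and do not reflect Navless's actual schema.

```python
# Illustrative shape of the structured content brief described above.
# All field names are hypothetical, not Navless's real data model.
from dataclasses import dataclass


@dataclass
class ContentBrief:
    target_prompt: str
    extractable_answer: str                     # one liftable paragraph at the top
    sourced_claims: list[tuple[str, str]]       # (claim or statistic, source)
    relationships: list[tuple[str, str, str]]   # (concept, relation type, concept)
    outline: list[str]                          # direct answer -> reasoning -> edge cases


brief = ContentBrief(
    target_prompt="best vendor for X use case",
    extractable_answer="For X use case, evaluate vendors on ...",
    sourced_claims=[
        ("LLM-referred visitors convert at ~4.4x organic search", "Semrush, July 2025"),
    ],
    relationships=[("AEO", "is_prerequisite_for", "AI citation share")],
    outline=["Direct answer", "Why it holds", "Edge cases"],
)
print(brief.target_prompt)
```

The typed `relationships` field is the piece that matters most downstream: it's the raw material Step 3 feeds into the knowledge graph.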
The drafts ship in the brand voice I configured during onboarding. I spent about 30 minutes with our forward-deployed engineer setting voice guardrails: the vocabulary to use, the vocabulary to avoid, the cadence, the level of formality, the stance on common objections. Every generated draft pulls from that configuration.
This is the same discipline experienced AI practitioners already apply when they switch from ChatGPT to Claude, or stand up any new AI tool. You don't judge the output quality until you've given the system the context it needs to perform well. AI is powerful, and it works best when you invest up front in telling it exactly how you want it to think and sound. Thirty minutes on day one saves me hours of editing on every piece after.
I review every draft thoroughly before it moves to Step 3. This takes about 10 minutes per piece. I reject or heavily rewrite roughly one in ten, usually because the angle is right but the opening doesn't land, because I want to swap in a sharper first-party stat, or because the piece needs a concrete customer example the generator couldn't know about. For higher-stakes pieces, like a pillar post, a comparison page, or anything touching pricing or positioning, I route the draft to a teammate for review right from inside Navless before publishing. All in, I easily ship 4 to 6 high-quality pieces per week.
I'd rather have a draft I sharpen than a blank page. A blank page is a week of delay. A draft is 10 minutes of editing.
Step 3: Publishing with schema, knowledge graph, and blog in one action
This is the step that separates a growth loop from a growth to-do list.
Inside Navless, I hit publish. Three things happen at once:
AEO metadata and schema get attached automatically. The page ships with schema.org markup appropriate to its type (Article, FAQ, HowTo, Comparison, whichever matches the content structure). Author entities are attached. Source citations are wired in as structured data. The meta tags are optimized for AI platform crawlers as well as traditional search. I don't hand-write any of this.
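To show what "schema gets attached" looks like on the wire, here is a plausible schema.org Article instance rendered as JSON-LD. Navless generates this automatically and I don't know its exact field set, so treat this as an illustrative example built from standard schema.org properties, not Navless's actual output.

```python
import json

# Illustrative JSON-LD for a published piece, using standard
# schema.org Article properties. The exact markup Navless emits
# is not documented here; this is an assumption-level example.
article_ld = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Best vendor for X use case",
    "author": {"@type": "Person", "name": "Joey Alvandi"},
    "datePublished": "2026-04-23",
    "citation": [
        {
            "@type": "CreativeWork",
            "name": "Semrush LLM-referred traffic conversion study (July 2025)",
        }
    ],
}
print(json.dumps(article_ld, indent=2))
```

The `author` entity and structured `citation` entries are the parts that map to the "author entities are attached" and "source citations wired in as structured data" behaviors described above.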
The content pushes into our Guide Agent's knowledge graph. The Guide Agent on our website runs on GraphRAG, a knowledge graph where concepts connect to concepts through typed relationships, and content is attached to concepts based on what it teaches, what it requires, and what it's related to. Every new piece I publish becomes a new set of nodes and edges in that graph. The next visitor who asks the Guide Agent a question that touches those nodes gets a better answer than the visitor who asked the same question yesterday.
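A typed-relationship graph of this kind can be sketched in a few lines. The relation names and topics below are my own illustrations; the point is the mechanic: publishing adds edges, and answering a visitor question is a traversal over those edges.

```python
# Minimal sketch of a typed-relationship knowledge graph like the
# one described above. Relations and concepts are illustrative only.
from collections import defaultdict

edges = defaultdict(list)  # concept -> [(relation type, concept)]


def add_edge(src: str, relation: str, dst: str) -> None:
    """Publishing a new piece adds edges for what it teaches."""
    edges[src].append((relation, dst))


add_edge("AEO", "compares_against", "traditional SEO")
add_edge("AEO", "requires", "structured extractable answers")
add_edge("structured extractable answers", "relates_to", "schema.org markup")


def neighbors(concept: str, relation: str) -> list[str]:
    # One-hop traversal: the basic move behind "a new page improves
    # answers on adjacent topics without me doing anything."
    return [dst for rel, dst in edges[concept] if rel == relation]


print(neighbors("AEO", "requires"))
```

Because every publish action adds edges, a denser graph gives the agent more hops to reason across, which is the structural reason the loop compounds.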
The content hits our blog page. Same action, no separate CMS upload. The blog page stays populated for human readers landing from email, social, or direct traffic. The same content, structured for AI citation, is also genuinely useful for a person reading it.
One click. One piece of content. Three surfaces reinforced. This is the specific mechanic that makes the loop a loop and not a sequence.
What this replaced
My pre-Navless workflow had eight steps and three tools: a manual AI prompt audit in a spreadsheet, a brief written in a Google Doc, a draft in another Doc, a round of editing in Slack, a copy-paste into our CMS, a manual schema plugin, a separate AEO monitoring tool, and no feedback loop at all from the page back to future content decisions.
The loop version has three steps, one tool, and a feedback mechanism that sharpens gap detection every week. The weekly time spend dropped from roughly a day and a half to about an hour of hands-on work. More importantly, the output ships in the same week it's scoped.
What I measure
I look at four numbers every Friday.
AI Search Brand Rank against our top five category competitors, tracked across the four major AI platforms. The goal is movement on the specific prompts I targeted that week. If I closed a gap on a comparison prompt, I want to see a citation within 7 to 10 days.
Organic session time on the blog posts I shipped. What I'm watching here is whether the piece is holding the readers AI sent us. Volume is secondary. Across Navless customer data from September 2025 through January 2026, Guide deployments drove a 129% increase in session time and a 30% reduction in bounce rate. On our own blog we're roughly in that range, though I don't publish our own numbers here.
Guide Agent engagement on the concepts the new content added to the knowledge graph. When visitors ask the agent about a topic I just shipped, do they get a useful answer? Do they go deeper or drop off?
Downstream conversion on AI-referred traffic. Semrush's July 2025 study found that LLM-referred visitors convert at roughly 4.4 times the rate of traditional organic search. The baseline is high. My job is making sure I don't degrade it with content that gets cited but underwhelms on the click-through.
Why it compounds
The reason this is a loop and not a workflow is that every output feeds the next input.
Every page I publish adds nodes and edges to the knowledge graph, which makes the Guide Agent's answers better, which increases session depth, which generates more buyer-intent signal, which sharpens next week's gap detection, which produces more precise content fixes, which compound in the graph.
The first week of the loop is linear. By week 8, the graph is dense enough that a single new page often improves answers on three or four adjacent topics without me doing anything. By week 12, the gap detection in Step 1 surfaces opportunities I wouldn't have spotted manually: intersections of topics where buyers are looking for combined answers that no single piece covers.
This is the unlock. The work in Week 1 pays for Week 1. The work in Week 12 pays for Week 12 and makes Week 13, 14, and 15 easier.
What I'd watch out for
Two things I've learned the hard way.
First, the generated drafts are only as good as the brand voice configuration. Treat that initial 30-minute setup like you'd treat configuring any serious AI tool. If you rush it, you'll feel the drift in every draft and spend the savings on editing. If you invest in it properly on day one, the system genuinely gets out of your way.
Second, the loop amplifies whatever content philosophy is underneath it. If the philosophy is "ship as much as possible," the graph gets cluttered and the Guide Agent starts giving shallow answers to deep questions. I run the loop at 4 to 6 pieces per week, not 15 or 20. Density in the graph matters more than volume. A growth loop built on mediocre content is a volume problem pretending to be a strategy.
The loop works because detection, generation, and publishing collapse into one system where each step specifically sets up the next. The discipline is in deciding what gets published. Nothing about the mechanism is magical.
Sources cited:
- Navless customer data (Sep 2025 – Jan 2026)
- Semrush, LLM-referred traffic conversion study (July 2025)