
TL;DR: AI content at volume is now the floor, not the ceiling. The competitive advantage is genuine expertise expressed clearly — and AI can only deliver that if the expertise is extracted from a human first. The knowledge base you’re reading was built using an interview-first process designed to do exactly that. This article documents how and why.


1. The Problem With Prompting First

The standard approach to AI-assisted content is to brief the tool on a topic and generate. The output is a synthesis of everything publicly available on the subject — competent, structured, and indistinguishable from the average of everything else that has been written about it.

That’s the problem. The average of all published thinking on B2B marketing measurement is a forgettable summary. It covers the topic. It uses the right terms. It says nothing that required anyone to have actually done the work.

Google’s guidance on AI-generated content is explicit about this. Google evaluates content on E-E-A-T criteria — Experience, Expertise, Authoritativeness, and Trustworthiness — and AI-generated content that lacks first-hand experience and original insight consistently underperforms content that demonstrates it. As of early 2025, sites populated primarily with generic AI content have seen significant ranking losses following Google’s scaled content abuse enforcement.

Volume without differentiation is not a content strategy. It is a liability.


2. What Differentiates Content That Compounds

The content that builds authority over time has one thing in common: it says something that required knowing more than a generalist knows. A specific number from a real campaign. An opinion that makes practitioners nod and agencies uncomfortable. A framework that came from doing the work, not describing it.

That material cannot be synthesized from public sources. It exists inside someone’s professional history — in the experiments they ran, the decisions they made, the moments when the data contradicted the assumption and they had to choose what to believe.

Research on AI content effectiveness consistently shows that the best results come from a hybrid approach: AI handling structure, prose quality, and SEO framing, while human expertise provides the substance that cannot be replicated. The 39% of marketers who report increased organic traffic from AI content are largely using it this way — not as a replacement for expertise but as an accelerant for expressing it.


3. The Interview-First Method

The process used to build this knowledge base starts with a structured interview before any article is drafted.

The interview is designed to surface three things that generic prompting cannot produce: specific experiences (the VoIP.ms campaign where attribution was exposed as wrong; the client situation where the real problem turned out to be positioning), unconventional beliefs (marketing strategy is business strategy; the CPG playbook applies directly to SaaS), and concrete detail (the keyword-to-cohort mapping method; the holdout experiment structure).

The questions are not generic. They are designed to find the edges of the practitioner’s thinking — the places where the view diverges from the consensus, and where the divergence came from experience rather than preference.

The interview answers become the article guardrails. Every article is written against that material. The AI handles structure, SEO framing, and clarity of expression. The view, the examples, and the opinions come from the interview. The result reads like a practitioner wrote it because one did.


4. The Seven-Step Process

The full workflow used to produce this knowledge base:

Step 1 — Define the content architecture. Before writing anything, map the full scope: categories, articles, publishing cadence. Each category should reflect a pillar of genuine expertise, not just a keyword cluster.

Step 2 — Build the platform first. The infrastructure (CMS, navigation, URL structure) serves the content strategy. Build it before writing — not alongside it.

Step 3 — Conduct the structured interview. Multi-round, designed to extract specific experiences, beliefs, and frameworks. This is the step most teams skip. It is the one that determines whether the output differentiates.

Step 4 — Use interview answers as article guardrails. Each article is written against the interview material, not against a topic brief. The unique perspective comes from the human; the structure and prose come from the tool.

Step 5 — Validate before publishing. Every article is reviewed in a staging environment before it goes live. The practitioner approves; the AI doesn’t ship unilaterally.

Step 6 — Publish on a defined cadence. Consistency signals reliability to search engines and readers alike. Burst publishing followed by silence does neither.

Step 7 — Convert expertise into resources. The strongest frameworks become standalone downloadable assets — useful on their own, tied to the knowledge base, and designed to generate the kind of engagement that validates the expertise they came from.


This process was built to solve a real problem: how to produce content at the speed AI enables while maintaining the specificity that makes content worth reading. The interview-first approach is not theoretical — it is the process used to produce every article in this knowledge base, including this one.


FAQ

Q: Won’t competitors just copy this process and close the gap?

The process is copyable. The expertise isn’t. Two companies using the same interview-first method will produce very different content if one practitioner has built pricing elasticity models for a Fortune 500 consumer goods company and the other hasn’t. The method extracts what’s there. What’s there is what differentiates.

Q: Does Google penalize AI-assisted content?

Google evaluates content quality, not production method. Content that demonstrates genuine expertise, provides original analysis, and is useful to the reader it targets performs well regardless of how it was produced. Content that is generic, unoriginal, or produced at scale without human expertise — regardless of method — is what Google’s guidance targets. The distinction is not human vs. AI. It is substantive vs. hollow.

Q: How long does the interview process take?

The structured interview used for this knowledge base ran across three rounds of three questions each — roughly 90 minutes of conversation total. That input produced the source material for 30+ articles. At a traditional agency rate for original thought leadership, the equivalent research and ideation would cost significantly more and produce significantly less.



Want a second read on your measurement setup?

Start with a Growth Architecture Review. We will map your channel mix, audit your attribution, and show you where the real leverage is.

Book a Conversation →