TL;DR: Marketing Mix Modeling estimates the aggregate contribution of each channel to revenue using regression on historical spend and outcome data. It does not tell you which customer saw which ad, and it cannot replace incrementality testing or attribution. What it does do — when built correctly — is give you a defensible basis for budget reallocation decisions that your attribution dashboard will never provide.
What You’ll Learn
- What MMM actually measures at a technical level, and what the math assumes
- Why the adstock and saturation curve specifications matter more than most vendors admit
- Where MMM fails: the four specific questions it cannot answer
- How to triangulate MMM results with holdout experiments before touching your budget
Quick Overview: The Measurement Tool Everyone Is Rediscovering
Marketing Mix Modeling is having a moment. Google released Meridian broadly in 2025. Meta has Robyn. Every measurement vendor has repackaged their MMM offering with a “privacy-first” badge. After a decade in the wilderness — during which time user-level tracking made it seem unnecessary — MMM is once again being positioned as the solution to every B2B SaaS CMO’s measurement headaches.
This article is not going to tell you MMM is the answer. It is going to tell you precisely what MMM measures, what it cannot measure, and how to use it as one layer in a measurement stack without making expensive mistakes. We will cover: (1) what the model is actually doing under the hood, (2) why specification choices drive the output more than the data does, (3) the four things MMM structurally cannot tell you, (4) where it belongs in a full measurement stack alongside incrementality testing, and (5) how to operationalize it without a full data science team.
1. What MMM Is Actually Doing
Marketing Mix Modeling is a time-series regression. At its simplest, you are regressing a dependent variable — typically revenue or pipeline — against a set of independent variables representing your marketing spend across channels, plus controls for seasonality, price, distribution, and macro conditions. The model estimates a coefficient for each channel that represents its marginal contribution to the outcome variable, holding all other variables constant.
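To make that concrete, here is a minimal sketch of the kind of regression an MMM reduces to before any adstock or saturation transforms are applied. The file name, column names, and channel list are hypothetical; a production model would use transformed media variables and, ideally, a Bayesian estimator rather than plain OLS.

```python
# Minimal sketch of the core MMM regression (hypothetical data and column names).
import pandas as pd
import statsmodels.formula.api as smf

# One row per week: spend by channel, controls, and the outcome variable.
df = pd.read_csv("weekly_marketing.csv")

# Each channel coefficient approximates that channel's average contribution to
# revenue over the modeled period, holding the other variables constant.
model = smf.ols(
    "revenue ~ paid_search + paid_social + podcast + events + q4_dummy + price_index",
    data=df,
).fit()
print(model.summary())
```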
The reason MMM fell out of favor in the 2010s is the same reason it is back now: it requires only aggregate data. You do not need user-level identifiers, cookies, or device graphs. You need weekly (or daily) spend by channel, and weekly (or daily) outcomes. That simplicity is both its strength and its fundamental limitation.
The Bayesian variant — which is what Google Meridian uses, and what any serious MMM implementation should use — places prior distributions over the model parameters. This forces you to encode your beliefs about what is plausible before the model touches your data. If you believe that a doubling of LinkedIn spend cannot realistically triple revenue within two weeks, you build that constraint into your priors. This is not a workaround; it is epistemically correct. Bayesian priors prevent the model from fitting noise, particularly in short time series where frequentist regression produces nonsensical coefficients.
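As an illustration of what encoding beliefs as priors looks like in code, here is a toy NumPyro model, not Meridian's actual API. HalfNormal priors force every media coefficient to be non-negative and seasonality enters as its own term; the variable names and prior scales are assumptions for the sketch.

```python
# Toy Bayesian MMM sketch in NumPyro (illustrative; not Meridian's API).
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def mmm(spend, seasonality, revenue=None):
    n_channels = spend.shape[1]
    intercept = numpyro.sample("intercept", dist.Normal(0.0, 1.0))
    # Non-negativity prior on media response: a channel cannot be handed a
    # negative contribution just to absorb seasonal noise.
    betas = numpyro.sample("betas", dist.HalfNormal(0.5).expand([n_channels]))
    gamma = numpyro.sample("gamma", dist.Normal(0.0, 1.0))  # seasonality effect
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    mu = intercept + jnp.dot(spend, betas) + gamma * seasonality
    numpyro.sample("obs", dist.Normal(mu, sigma), obs=revenue)

# Fit with e.g. numpyro.infer.MCMC(numpyro.infer.NUTS(mmm), num_warmup=1000,
# num_samples=1000), passing standardized spend and seasonality arrays.
```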
Example: A B2B SaaS company with 24 months of weekly data and four paid channels runs a standard OLS regression. The model assigns a negative coefficient to their top-of-funnel podcast sponsorship because the spend spiked during Q4 when seasonality was already suppressing close rates. A Bayesian model with a non-negativity prior on media response and a seasonality component decomposed separately would not make this error.
2. Adstock and Saturation: The Specifications That Drive the Output
Before you trust any MMM output, you need to understand two transformations that happen before regression even begins: adstock and saturation.
Adstock models the carryover effect of advertising. When you run a LinkedIn campaign in week one, some of its influence persists into weeks two and three as prospects recall the message, discuss it internally, or return to your site. Adstock applies a geometric decay to your spend variable — the rate of decay is a parameter you must specify or estimate. Set it too low and your model undervalues channels with long consideration cycles. Set it too high and you are essentially claiming your paid social from six months ago is still driving pipeline today.
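Geometric adstock is simple enough to write out. The sketch below is a generic implementation of the transform described above, with an illustrative one-week spend burst showing how differently a 0.4 and a 0.7 decay rate spread that spend across subsequent weeks; the numbers are only there to make the carryover visible.

```python
import numpy as np

def geometric_adstock(spend, decay):
    """Apply geometric-decay carryover to a weekly spend series.

    decay is the fraction of last week's adstocked value that carries into
    this week (0 = no carryover; values near 1 = very long carryover).
    """
    adstocked = np.zeros(len(spend))
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        adstocked[t] = carry
    return adstocked

burst = np.array([100, 0, 0, 0, 0, 0], dtype=float)   # one-week spend burst
geometric_adstock(burst, 0.4)  # ~[100, 40, 16, 6.4, 2.6, 1.0]
geometric_adstock(burst, 0.7)  # ~[100, 70, 49, 34.3, 24.0, 16.8]
```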
For B2B SaaS with sales cycles of 60 to 180 days, adstock decay rates are genuinely difficult to estimate from 24 months of data alone. This is an area where your Bayesian priors need to be informed by actual pipeline velocity data from your CRM — not by a vendor’s default configuration.
Saturation curves model diminishing returns to spend. The standard choice is either a Hill function or a logistic function, both of which produce an S-shaped or concave curve that flattens as spend increases. The key question is where on that curve your current spend sits. If the model places your Google Ads spend well past the point of diminishing returns, it will recommend cutting that channel significantly. But if the saturation parameters are misspecified, that recommendation is wrong regardless of how clean your data is.
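For reference, a Hill saturation transform is only a few lines. Parameter names here are illustrative: half_saturation is the spend level at which the response reaches half its maximum, and shape controls how sharply the curve bends.

```python
import numpy as np

def hill_saturation(spend, half_saturation, shape):
    """Hill transform: a concave/S-shaped response that flattens as spend grows."""
    spend = np.asarray(spend, dtype=float)
    return spend**shape / (spend**shape + half_saturation**shape)

# Response at weekly spends of $10K, $50K, and $100K under assumed parameters.
hill_saturation([10_000, 50_000, 100_000], half_saturation=60_000, shape=1.5)
```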
Most CMOs who receive an MMM output from a vendor or agency never see these specifications. They see a bar chart showing channel contributions and a budget optimizer showing how to reallocate spend. The bar chart is a downstream output of modeling decisions that are invisible in the presentation. Before acting on any MMM recommendation, ask for the adstock decay rates and saturation curve parameters, and ask why those specific values were chosen.
Example: Two MMM runs on identical data, one using an adstock decay rate of 0.4 and one using 0.7 for branded search, will produce materially different contribution estimates for branded search — and therefore different budget recommendations. Neither is obviously correct without calibration against a holdout experiment.
3. What MMM Cannot Tell You
This is the section that rarely appears in vendor decks.
MMM cannot tell you the incremental contribution of a marginal dollar. It tells you the average contribution over the historical period modeled. These are not the same number. If you have been running Google Ads at roughly $50K per month for two years, the model estimates the average contribution of that spend. It cannot tell you what happens if you increase to $60K or cut to $40K, because that regime is outside the variation in your training data. The budget optimizer built on top of the model is extrapolating. Treat its recommendations as directional, not precise.
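One way to see the extrapolation risk is to evaluate the fitted saturation curve's implied marginal return at spend levels the model never observed. The curve parameters, spend history, and dollar figures below are invented for illustration only.

```python
# Illustrative only: marginal return implied by an assumed saturation curve,
# evaluated both inside and outside the observed spend range.
import numpy as np

def hill(spend, half_sat=60_000, shape=1.5):
    return spend**shape / (spend**shape + half_sat**shape)

observed_spend = np.array([45_000, 48_000, 50_000, 52_000, 55_000])  # hypothetical history

for s in (40_000, 50_000, 60_000):
    marginal = hill(s + 1_000) - hill(s)   # extra response per additional $1K
    outside = s < observed_spend.min() or s > observed_spend.max()
    flag = "  <- extrapolating beyond observed spend" if outside else ""
    print(f"${s:,}/month: marginal response {marginal:.4f}{flag}")
```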
MMM cannot resolve the incrementality question. A channel can appear to have high contribution in an MMM because it correlates with revenue periods, not because it caused them. Branded search is the classic case. If you spend on branded search consistently, the model will assign positive coefficients to it — but a holdout experiment removing branded search spend for four weeks might show minimal revenue impact, because those searches would have happened anyway. MMM and incrementality testing answer different questions. You need both.
MMM cannot attribute at the customer level. If your goal is to understand which touches influenced a specific deal’s path to close, MMM is the wrong tool. It operates at the aggregate level by design. This makes it irrelevant for account-based marketing analyses, pipeline influence questions, or anything that requires understanding the sequence of interactions for a given prospect. For those questions, you need either a properly specified multi-touch model on your CRM and ad data, or a propensity scoring approach using first-party behavioral signals — not MMM.
MMM degrades fast with structural breaks. If you entered a new market, changed your pricing, launched a new product tier, or meaningfully shifted your channel mix partway through your modeling window, the data before the break and the data after the break are measuring different underlying realities. Pooling them into a single regression produces a model that accurately describes neither period. Most B2B SaaS companies at growth stage have at least one structural break every 18 months. This is one of the most underappreciated limitations of applying MMM to high-velocity growth businesses.
4. Where MMM Belongs in a Full Measurement Stack
The honest positioning of MMM is as a budget allocation tool, not as a source of truth about marketing effectiveness. It answers one question well: given historical patterns of spend and outcomes, which channels appear to have had the largest contribution, and where does our saturation curve suggest we are over- or under-investing?
That question is worth answering. Most B2B SaaS companies are allocating budget based on attribution models that inflate the apparent contribution of lower-funnel, last-touch channels. MMM, because it uses aggregate time-series data rather than click paths, is structurally less susceptible to last-touch bias. A channel like out-of-home advertising or a podcast sponsorship that never appears in a click path will still get a coefficient in MMM if its spend correlates with revenue changes.
But MMM should sit alongside, not replace, two other measurement layers. The first is incrementality testing — holdout experiments that directly measure the causal lift of a channel by removing it for a defined population or geography and comparing outcomes against a control. Where MMM tells you a channel’s contribution looks high historically, an incrementality test tells you whether that contribution is real or artifactual. The second is your CRM pipeline data, which gives you the account-level and cohort-level signals that MMM cannot see. Understanding your true CAC at the cohort level, segmented by channel and sales motion, provides the ground truth against which MMM outputs should be validated.
The difference-in-differences framework is particularly useful here: when you run a geo-based holdout (activating a channel in some DMAs and not others), you can use the pre-period and post-period data from both groups to estimate a causal treatment effect that you can then compare to what your MMM estimated for that channel’s contribution. When the estimates are close, your MMM is calibrated. When they diverge significantly, your MMM specification is likely wrong and should not be used for budget decisions until corrected.
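A back-of-the-envelope version of that comparison fits in a few lines. The revenue figures below are invented; the point is the structure of the calculation, not the numbers.

```python
# Minimal difference-in-differences sketch for a geo holdout (hypothetical data).
import pandas as pd

geo = pd.DataFrame({
    "group":  ["treatment", "treatment", "control", "control"],
    "period": ["pre", "post", "pre", "post"],
    "weekly_revenue": [410_000, 465_000, 385_000, 395_000],
})

pivot = geo.pivot(index="group", columns="period", values="weekly_revenue")
did = (
    (pivot.loc["treatment", "post"] - pivot.loc["treatment", "pre"])
    - (pivot.loc["control", "post"] - pivot.loc["control", "pre"])
)
print(f"Estimated causal lift per week: ${did:,.0f}")
# Compare this against the contribution your MMM assigns to the same channel
# over the same window; a large gap points to a specification problem.
```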
5. MMM and the Privacy-First Argument — What’s Real
The industry argument for MMM in 2026 is largely framed around privacy: cookies are dead, iOS restrictions have degraded MTA, and aggregate modeling is the future. This framing is mostly correct but slightly oversold.
It is true that user-level tracking has degraded meaningfully, and aggregate models like MMM are structurally privacy-durable because they never touch individual-level data. It is also true that server-side tagging and first-party data architectures, while important, do not fully restore the signal lost to cookie deprecation — particularly for cross-device attribution and anonymous research phases.
What is slightly oversold is the implication that MMM solves the measurement problem created by tracking degradation. MMM always had the limitations described in Section 3. Those limitations existed when cookies were plentiful, and they exist now. The privacy tailwind has made MMM relatively more attractive than user-level MTA, but it has not changed what MMM can and cannot measure. A CMO who replaces a broken MTA dashboard with an MMM that has misspecified adstock parameters has not improved their measurement infrastructure — they have replaced one source of misleading numbers with another.
6. Operationalizing MMM Without a Data Science Team
You do not need a team of data scientists to run a reasonable MMM. What you do need is clean data, reasonable priors, and someone who can read model diagnostics critically.
Google Meridian is open-source and runs in Python. It uses a Bayesian framework built on TensorFlow Probability. A competent marketing analyst with Python skills can configure and run a basic model. The harder part is the specification work: setting priors, choosing the adstock and saturation functional forms, and validating the output against holdout experiments.
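Reading model diagnostics critically mostly means checking that the sampler converged before anyone looks at a contribution chart. The sketch below uses ArviZ's bundled example posterior as a stand-in for whatever InferenceData object your MMM tooling produces; the variable names are placeholders.

```python
# Diagnostics-reading sketch using ArviZ's bundled example posterior as a
# stand-in for a fitted MMM's InferenceData object.
import arviz as az

idata = az.load_arviz_data("centered_eight")        # placeholder posterior
summary = az.summary(idata, var_names=["theta"])    # per-parameter summary
print(summary[["mean", "hdi_3%", "hdi_97%", "ess_bulk", "r_hat"]])

# Rules of thumb before trusting any channel-contribution chart:
#  - r_hat close to 1.00 for every media coefficient (chains converged)
#  - ess_bulk comfortably in the hundreds or more (enough effective samples)
#  - credible intervals narrow enough to actually rank channels
```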
The practical floor for a credible MMM is approximately 18 months of weekly data across your channels. Below that, the time series is too short to estimate adstock decay reliably, and your confidence intervals will be too wide to support budget decisions. If you are below $2M in annual marketing spend, the ROI on full MMM is marginal; a simpler approach using cohort-level CAC analysis from your CRM and occasional geo holdout tests will give you more actionable signal per dollar of measurement investment.
How to Use MMM Without Getting Burned
- Audit your data before modeling. Pull weekly spend by channel and weekly pipeline or revenue for 18–24 months. Identify any structural breaks — pricing changes, product launches, market expansions. Flag those periods; you may need to model them separately or add a dummy variable.
- Set priors deliberately. Before the model runs, write down what you believe about each channel’s adstock decay and saturation. Use CRM pipeline velocity data to inform adstock. These priors constrain the model and prevent it from producing implausible coefficients.
- Decompose seasonality separately. Do not let the model conflate seasonal revenue patterns with channel effectiveness. B2B SaaS has strong Q4 and Q1 budget cycles. These need to be modeled explicitly as seasonal components, or high-Q4 channel spend will appear to “cause” Q4 revenue.
- Run a calibration holdout before acting on budget recommendations. Before reallocating spend based on MMM output, run a geo-based holdout on your highest-spending channel for four weeks. Compare the model’s predicted contribution against the observed revenue difference between treatment and control. If they align within 20%, your model is reasonably calibrated (see the sketch after this list).
- Rerun quarterly, not annually. MMM is not a one-time project. Your channel mix, spend levels, and competitive environment change. A model built on Q4 2025 data that has not been updated will progressively misrepresent your current situation.
- Treat the budget optimizer as a conversation starter, not a prescription. The optimizer tells you where the model thinks you are over- or under-indexed. Treat it as a hypothesis to test. Reallocate incrementally and measure the result.
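For the calibration step, the arithmetic is deliberately simple. A minimal sketch of the check described above, with invented numbers:

```python
# Hypothetical calibration check: MMM-estimated contribution vs. geo-holdout lift.
mmm_estimated_contribution = 180_000   # revenue the MMM attributes to the channel over the test window
observed_holdout_lift = 150_000        # treatment-minus-control lift from the geo test

ratio = mmm_estimated_contribution / observed_holdout_lift
if abs(ratio - 1) <= 0.20:
    print(f"Within 20% (ratio {ratio:.2f}): MMM is reasonably calibrated for this channel.")
else:
    print(f"Ratio {ratio:.2f}: revisit adstock and saturation specs before reallocating budget.")
```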
FAQ
Q: Does MMM work for B2B SaaS with long sales cycles? It can, but it requires careful treatment of adstock decay. In B2B with 90-to-180-day sales cycles, the lag between a marketing exposure and the closed revenue it contributed to can span multiple modeling periods. Bayesian priors informed by CRM velocity data are essential here.
Q: How is MMM different from multi-touch attribution? Multi-touch attribution operates at the individual user level — it assigns credit to touchpoints along a specific prospect’s journey. MMM operates at the aggregate level — it estimates channel contribution from time-series patterns in spend and outcomes. MTA degrades as user-level tracking degrades. MMM is unaffected by cookie or iOS restrictions because it never uses individual-level data.
Q: Should I use Google Meridian or build a custom model? Meridian is a strong starting point. It is open-source, Bayesian, and actively maintained. For most B2B SaaS companies under $50M ARR, Meridian with thoughtful prior specification will produce outputs at least as good as a custom model — and considerably cheaper.
Q: How often should I rerun my MMM? At minimum, quarterly. If you make significant budget shifts — more than a 20% change in any channel — rerun sooner.
Q: Can MMM tell me which specific campaigns performed best? No. MMM estimates channel-level contribution — Google Paid Search as a whole, LinkedIn as a whole. It cannot decompose performance within a channel by campaign, audience, or creative.
Q: What data do I actually need to run an MMM? At minimum: weekly marketing spend by channel (18–24 months), and weekly revenue or pipeline. You do not need individual-level customer data of any kind — that is the privacy advantage of MMM.
Related Reading
- Why Your Attribution Model Is Lying to You — How last-click and platform-reported ROAS systematically overstate channel contribution, and what to use instead.
- What Is Incrementality in Marketing? — The mechanics of holdout experiments and why they are the only way to establish causal lift from a channel.
- Why Your CAC Is Wrong — The measurement errors that inflate or deflate customer acquisition cost, and how to calculate it correctly at the cohort level.
Zaitz Marketing builds measurement infrastructure that replaces attribution dashboards with honest contribution analysis. If you want to see how this applies to your current setup, book a Growth Architecture Review.
Additional Resources
External Reading:
- Google Meridian Documentation — Official docs for Google’s open-source Bayesian MMM library, including quickstart guides and model specification walkthroughs
- Meta Robyn: An Analyst’s Guide to MMM — Meta’s practical guide to building and interpreting MMM; covers adstock, saturation curves, and budget optimisation with worked examples
- Jin et al. — Bayesian Methods for Media Mix Modeling (Google Research) — The foundational paper behind Google’s approach to carryover and shape effects; explains the math that drives both Meridian and Robyn
- Nielsen: What Marketers Need to Know About MMM — A practitioner overview of MMM methodology from one of the field’s longest-standing measurement firms
Want a second read on your measurement setup?
Start with a Growth Architecture Review. We will map your channel mix, audit your attribution, and show you where the real leverage is.
Book a Conversation →