Is your paid media actually driving conversions? Or just taking credit?
Attribution models tell you who touched what. Incrementality tells you what actually worked. Attribution's data export gives you everything you need to run incrementality tests — no separate platform, no additional cost.
The Problem
Attribution tells you who touched the ball. Incrementality tells you who scored the goal.
Your Google Ads campaign claims 200 conversions this month. But how many of those people would have converted anyway — through organic search, direct visits, or brand awareness? If you paused the campaign and only lost 40 conversions, your true incremental contribution is 40, not 200. That's an incremental CPA 5x higher than what the platform reports.
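The arithmetic above can be sketched in a few lines. The spend figure is an assumption for illustration; only the 200 attributed and 40 incremental conversions come from the example.

```python
# Numbers from the example above; monthly spend is an assumed figure.
spend = 10_000.0              # illustrative monthly spend
attributed_conversions = 200  # what the platform claims
incremental_conversions = 40  # conversions lost when the campaign was paused

attributed_cpa = spend / attributed_conversions    # $50
incremental_cpa = spend / incremental_conversions  # $250

print(f"Attributed CPA:  ${attributed_cpa:.2f}")
print(f"Incremental CPA: ${incremental_cpa:.2f}")
print(f"Incremental CPA is {incremental_cpa / attributed_cpa:.0f}x the attributed CPA")
```

Whatever the spend, the ratio of attributed to incremental conversions (here 200/40 = 5) is the multiplier on your true cost per acquisition.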
Dedicated incrementality platforms charge enterprise pricing to answer this question. With Attribution's data export and an LLM, you can answer it yourself.
Types of Incrementality Tests
Three ways to measure what's actually working
Geo holdout tests
Pause spend in one region while maintaining it in a comparable region. Compare conversion rates. The difference is your incremental lift. The gold standard for causal measurement.
Best for: Channels with enough spend to split geographically
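The standard way to analyze a geo holdout is a difference-in-differences comparison. Here is a minimal sketch with assumed illustrative numbers; a real analysis would use daily series and a significance test, which an LLM or a package like CausalImpact can handle.

```python
def diff_in_diff(holdout_pre, holdout_test, control_pre, control_test):
    """Difference-in-differences estimate of incremental lift.

    holdout_*: mean daily conversions in the region where spend was paused
    control_*: mean daily conversions in the comparable region where spend continued
    """
    holdout_change = holdout_test - holdout_pre
    control_change = control_test - control_pre
    # Incremental lift = how much worse the holdout region did
    # relative to the control region's trend.
    return control_change - holdout_change

# Illustrative numbers (assumed, not from the article):
lift = diff_in_diff(holdout_pre=100, holdout_test=85,
                    control_pre=98, control_test=99)
print(f"Estimated incremental conversions/day: {lift}")  # 16
```

Subtracting the control region's trend guards against seasonality: if both regions dipped together, that dip is not attributed to the paused channel.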
Time-based on/off tests
Turn a channel off for a defined period, then back on. Compare before, during, and after. Simpler than geo holdouts but less rigorous — external factors can confound results.
Best for: Channels with fast feedback loops (paid search, paid social)
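A simple before/during/after comparison might look like this. The daily counts are assumed for illustration; averaging the two "on" periods gives a baseline that partially absorbs any overall trend.

```python
# Illustrative daily conversion counts (assumed, not from the article).
before = [120, 118, 125, 122, 119, 121, 124]   # channel on
during = [104, 101, 99, 103, 100, 102, 98]     # channel off
after  = [117, 121, 119, 123, 120, 118, 122]   # channel back on

def mean(xs):
    return sum(xs) / len(xs)

# Baseline = average of the two "on" periods, which smooths a linear trend.
baseline = (mean(before) + mean(after)) / 2
drop = baseline - mean(during)  # estimated incremental conversions per day
print(f"Estimated incremental conversions/day: {drop:.1f}")
```

Note the caveat above still applies: a promotion, holiday, or competitor move during the "off" window can masquerade as incremental lift, which is why geo holdouts are more rigorous.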
Synthetic control analysis
Use historical data to estimate what would have happened without a channel, then compare to what actually happened. No need to pause spend — works on existing data.
Best for: Teams that can't or won't pause spend during a test
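A bare-bones synthetic control can be sketched as a least-squares blend of control markets fit on the pre-period, then projected forward as the counterfactual. All numbers here are assumed for illustration; production analyses typically use tools like CausalImpact with longer histories.

```python
import numpy as np

# Assumed daily conversions: the tested market vs. two comparable
# control markets, over a pre-period and a test period.
pre_target   = np.array([50, 52, 49, 51, 53, 50])
pre_controls = np.array([[48, 30], [50, 31], [47, 29],
                         [49, 30], [51, 32], [48, 30]])

# Fit weights on the pre-period: which blend of control markets
# best reproduces the target market's history?
weights, *_ = np.linalg.lstsq(pre_controls, pre_target, rcond=None)

# Project the counterfactual into the test period and compare to actuals.
test_actual   = np.array([44, 45, 43, 46])
test_controls = np.array([[49, 31], [50, 30], [48, 30], [50, 31]])
counterfactual = test_controls @ weights

lift = (test_actual - counterfactual).mean()
print(f"Average daily lift vs. synthetic control: {lift:.1f}")
```

A negative lift here means conversions ran below what the control markets predicted, i.e. something in the test market (such as paused spend) cost conversions.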
Why This Data Matters
Platform data can't measure its own incrementality
Using Meta's, Google's, or TikTok's own conversion data to measure whether their ads are incremental is a fundamental conflict of interest. Attribution's data removes that conflict. Conversions are tracked by Attribution's first-party tracking technology, independently of the platforms being tested.
And because Attribution tracks cost and ROAS at the user level, you can go beyond "is this channel incremental?" to "is it incremental at a price that makes sense?" A channel might genuinely drive conversions that wouldn't have happened otherwise, but if the incremental CPA is 3x your target, it's still a bad investment.
This is the "matches your bank account" principle applied to incrementality testing.
Prompt Your LLM
Design, run, and analyze incrementality tests in a conversation.
You don't need to know the statistics. The LLM handles test design, data requirements, analysis, and interpretation. You just need your Attribution data and a question.
Example verdict: LinkedIn Ads are incremental (p=0.023) but at 2.1x the attributed CPA. The channel is driving real conversions, but fewer than platform reporting suggests. Consider reducing spend by 30–40% and monitoring incremental CPA weekly. The savings can be reallocated to channels with better incremental efficiency.
Dedicated incrementality platforms charge enterprise pricing for this analysis. With Attribution's data export and an LLM, you can run it yourself — as many times as you want, on any channel, at no extra cost.
Questions You Can Answer
Run these tests continuously, not once a year
Is this channel driving incremental value or just taking credit?
Pause a channel in one region and measure the true conversion drop. If conversions barely change, the channel was taking credit for organic demand.
Should I scale or cut this channel?
Channels with high incremental lift and low saturation should get more budget. Channels with low incrementality are candidates for cuts.
Is branded search actually incremental?
The classic test: pause branded search in one geo and see if organic picks up the slack. Many brands discover branded search has very low incrementality.
How much of Meta's reported ROAS is real?
Compare Meta's platform-reported conversions to the incremental lift from a geo test. The ratio is your incrementality factor — use it to calibrate all future Meta reporting.
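Computing and applying an incrementality factor is a two-line calculation. The numbers below are assumed for illustration.

```python
# Illustrative (assumed) numbers: calibrate platform reporting with a geo test.
meta_reported_conversions = 500   # what Meta's dashboard claims
geo_test_incremental_lift = 175   # conversions actually lost in the holdout

incrementality_factor = geo_test_incremental_lift / meta_reported_conversions
print(f"Incrementality factor: {incrementality_factor:.2f}")  # 0.35

# Apply it to future platform-reported figures:
next_month_reported = 620
calibrated = next_month_reported * incrementality_factor
print(f"Calibrated incremental conversions: {calibrated:.0f}")  # 217
```

The factor also calibrates ROAS: multiply platform-reported ROAS by the same factor to get an incremental ROAS estimate.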
Does upper-funnel spend drive lower-funnel conversions?
Test whether pausing awareness campaigns (display, video, CTV) impacts conversion campaigns downstream. This is the hardest question in marketing, and incrementality testing is the only way to answer it.
What's the post-treatment effect?
After you resume spend in the holdout region, how quickly do conversions recover? A long recovery suggests the channel builds demand. Instant recovery suggests it mostly captures existing intent.
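Recovery speed can be quantified as the number of days until conversions return to a threshold share of baseline. The series below is assumed for illustration.

```python
# Illustrative (assumed) daily conversions after resuming spend in the
# holdout region, alongside the pre-test baseline.
baseline = 100
post_resume = [62, 70, 78, 85, 90, 94, 97, 99]

# First day on which conversions reach 95% of baseline (None = not yet).
recovery_day = next(
    (day for day, c in enumerate(post_resume, start=1) if c >= 0.95 * baseline),
    None,
)
print(f"Recovered to 95% of baseline on day {recovery_day}")
```

A week-long recovery like this one points toward demand-building; a one-day recovery would point toward intent capture.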
Attribution + LLM vs. dedicated incrementality platforms
Dedicated incrementality platforms provide excellent testing with expert support. Attribution gives you the data to do it yourself, as often as you want.
Frequently asked questions
Everything you need to know about running incrementality tests with Attribution.
Can I run incrementality tests with Attribution's data export?
Yes. Attribution's Data Export Tool provides daily spend by channel, daily conversions, and visitor-level data with geography — everything needed to run geo holdout tests, time-based on/off experiments, and synthetic control analyses. You can design and analyze tests using an LLM like Claude or ChatGPT, or use statistical tools like Google's CausalImpact package. No separate incrementality platform or license is required.
Do I need a dedicated incrementality platform or enterprise budget?
No. Attribution's Data Export Tool is available as an add-on for any plan. Once you have the export, you can run unlimited incrementality tests on the data at no additional cost. Dedicated incrementality platforms typically charge $50,000 to $200,000 or more per year, often with limits on the number of tests per contract.
Can an LLM really design and analyze an incrementality test for me?
Yes. Describe your situation to Claude or ChatGPT — which channel you want to test, your spend level, your geographic footprint — and the LLM will design the test protocol, specify what data to pull from your Attribution export, and analyze the results when the test is complete. The LLM handles the statistics: difference-in-differences, synthetic control modeling, p-values, confidence intervals, and plain-English recommendations.
How is this different from platform-native incrementality tools?
Platform-native incrementality tools use the platform's own conversion data to measure the platform's own impact — a fundamental conflict of interest. Attribution's data export provides independently measured conversions tracked by Attribution's first-party tracking technology, not by the ad platform being tested. This eliminates bias and produces trustworthy incrementality measurements.
How long does an incrementality test take?
A typical geo holdout test runs for 2–4 weeks with an 8-week pre-period baseline for comparison. Time-based on/off tests can be shorter (1–2 weeks). Synthetic control analysis works on existing historical data with no test period required. The LLM can recommend the optimal test duration based on your traffic volume and statistical power requirements.
What data do I need from the export?
For geo holdout tests: daily conversions by geography (available from the events table with IP-based location) and daily spend by channel (from the amounts table). For time-based tests: daily conversions and daily spend by channel over the test period and a baseline period. For synthetic control: daily conversions and spend by channel over at least 6 months of historical data.
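Preparing those inputs is a pair of group-by aggregations. This sketch assumes simplified column names (`date`, `region`, `is_conversion`, `channel`, `spend`) standing in for the events and amounts tables described above; your export's actual schema may differ.

```python
import pandas as pd

# Toy stand-ins for the events and amounts tables (column names assumed).
events = pd.DataFrame({
    "date": ["2024-05-01", "2024-05-01", "2024-05-02"],
    "region": ["holdout", "control", "holdout"],
    "is_conversion": [1, 1, 0],
})
amounts = pd.DataFrame({
    "date": ["2024-05-01", "2024-05-02"],
    "channel": ["google_ads", "google_ads"],
    "spend": [250.0, 240.0],
})

# Daily conversions by geography — the geo-holdout outcome series.
daily_conversions = (
    events[events["is_conversion"] == 1]
    .groupby(["date", "region"])
    .size()
    .rename("conversions")
    .reset_index()
)

# Daily spend by channel — confirms the holdout actually went dark.
daily_spend = amounts.groupby(["date", "channel"])["spend"].sum().reset_index()

print(daily_conversions)
print(daily_spend)
```

These two frames are exactly the inputs the test designs above consume: an outcome series per region and a spend series per channel.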