Incrementality Testing

Is your paid media actually driving conversions? Or just taking credit?

Attribution tells you who touched the ball. Incrementality tells you who scored the goal.

Your Google Ads campaign claims 200 conversions this month. But how many of those people would have converted anyway — through organic search, direct visits, or brand awareness? If you paused the campaign and only lost 40 conversions, your true incremental contribution is 40, not 200. That's an incremental CPA 5x higher than what the platform reports.

Dedicated incrementality platforms charge enterprise pricing to answer this question. With Attribution's data export and an LLM, you can answer it yourself.

Three ways to measure what's actually working

Geo holdout tests

Pause spend in one region while maintaining it in a comparable region. Compare conversion rates. The difference is your incremental lift. The gold standard for causal measurement.

Best for: Channels with enough spend to split geographically
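The lift arithmetic behind a geo holdout can be sketched in a few lines. This is a hypothetical difference-in-differences calculation with made-up daily conversion counts, not real Attribution data; a real test would also need a significance check.

```python
# Hypothetical sketch: estimate incremental lift from a geo holdout
# using a simple difference-in-differences on daily conversion counts.
# All numbers are illustrative.

def incremental_lift(treat_pre, treat_test, ctrl_pre, ctrl_test):
    """Each argument is a list of daily conversions. The control geo's
    relative change projects what the treatment geo would have done
    without the pause; the shortfall vs. that projection is the
    incremental contribution of the paused channel."""
    mean = lambda xs: sum(xs) / len(xs)
    ctrl_growth = mean(ctrl_test) / mean(ctrl_pre)           # seasonality proxy
    expected_treat = mean(treat_pre) * ctrl_growth           # counterfactual
    return (expected_treat - mean(treat_test)) / expected_treat

# Treatment geo paused the channel during the test window.
lift = incremental_lift(
    treat_pre=[50, 52, 48, 51], treat_test=[43, 44, 42, 45],
    ctrl_pre=[60, 61, 59, 60], ctrl_test=[60, 62, 58, 61],
)
print(f"incremental lift: {lift:.1%}")
```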

Time-based on/off tests

Turn a channel off for a defined period, then back on. Compare before, during, and after. Simpler than geo holdouts but less rigorous — external factors can confound results.

Best for: Channels with fast feedback loops (paid search, paid social)

Synthetic control analysis

Use historical data to estimate what would have happened without a channel, then compare to what actually happened. No need to pause spend — works on existing data.

Best for: Teams that can't or won't pause spend during a test
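A minimal synthetic-control sketch, assuming you have daily conversions for the treatment geo and two untouched control geos: fit a weighted combination of the controls to the treatment geo's pre-period, then project the counterfactual into the test window. All figures are invented; production analyses often use tools like Google's CausalImpact instead of raw least squares.

```python
import numpy as np

# Daily conversions: rows = days (8 pre-period days + 4 test days).
controls = np.array([              # two untouched geos
    [60, 62, 59, 61, 60, 63, 58, 61,  60, 62, 61, 59],
    [30, 31, 29, 30, 32, 30, 29, 31,  30, 31, 30, 29],
]).T
treatment = np.array([45, 46, 44, 46, 45, 47, 43, 46,  40, 39, 41, 40])

pre, test = slice(0, 8), slice(8, 12)

# Weights that best reproduce the treatment geo from the controls pre-test.
w, *_ = np.linalg.lstsq(controls[pre], treatment[pre], rcond=None)
counterfactual = controls[test] @ w        # "no pause" projection
lift = (counterfactual.mean() - treatment[test].mean()) / counterfactual.mean()
print(f"estimated incremental lift: {lift:.1%}")
```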

Platform data can't measure its own incrementality

Using Meta's, Google's, or TikTok's own conversion data to measure whether their ads are incremental is a fundamental conflict of interest. Attribution's data removes that conflict. Conversions are tracked by Attribution's first-party tracking technology, independently from the platforms being tested.

And because Attribution tracks cost and ROAS at the user level, you can go beyond "is this channel incremental?" to "is it incremental at a price that makes sense?" A channel might genuinely drive conversions that wouldn't have happened otherwise, but if the incremental CPA is 3x your target, it's still a bad investment.
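The incremental-CPA gap is simple arithmetic. This sketch mirrors the hypothetical Google Ads scenario from the introduction (200 attributed conversions, only 40 truly incremental); the monthly spend figure is an assumption added for illustration.

```python
spend = 8_000                      # monthly channel spend (assumed)
attributed_conversions = 200       # what the platform reports
incremental_conversions = 40       # what the holdout test shows

platform_cpa = spend / attributed_conversions
incremental_cpa = spend / incremental_conversions
print(f"platform CPA:    ${platform_cpa:.0f}")
print(f"incremental CPA: ${incremental_cpa:.0f} "
      f"({incremental_cpa / platform_cpa:.0f}x the reported figure)")
```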

This is the "matches your bank account" principle applied to incrementality testing.

Design, run, and analyze incrementality tests in a conversation

You don't need to know the statistics. The LLM handles test design, data requirements, analysis, and interpretation. You just need your Attribution data and a question.

Step 1: Design the test
Prompt:
I'm uploading a CSV from Attribution's data export with daily spend by channel and daily conversions by country. I want to test whether our LinkedIn Ads spend is truly incremental. Design an incrementality test. Tell me what type, how long, and how to analyze the results.
Step 2: Analyze the results
Prompt:
I ran the geo holdout test on LinkedIn Ads. I paused LinkedIn spend in the UK for 4 weeks while keeping it running in the US. I'm uploading a CSV with daily conversions by country for the test period and the 8 weeks before. Please:
1. Estimate the incremental lift
2. Calculate statistical significance
3. Show confidence intervals
4. Calculate the incremental CPA
5. Tell me if LinkedIn is worth the spend
6. Present results as a dashboard
Claude:
I've analyzed your geo holdout test comparing UK (treatment, LinkedIn paused) vs US (control, LinkedIn running) over 4 weeks with an 8-week pre-period baseline.

Test duration: 4 weeks | Pre-period: 8 weeks | Treatment: UK (paused) | Control: US (running)
Incremental lift: +12.4% | p-value: 0.023 | Confidence: 97.7% | Statistically significant: Yes

Weekly lift (UK actual vs. synthetic-control expectation; US shown for reference):
Week 1: +10.9% (US 48/day, UK 41/day)
Week 2: +12.2% (US 52/day, UK 43/day)
Week 3: +9.3% (US 45/day, UK 39/day)
Week 4: +8.3% (US 51/day, UK 44/day)

95% confidence interval: +4.1% to +20.7% incremental lift

Platform-reported CPA: $160 (based on LinkedIn-attributed conversions)
Incremental CPA: $340 (based on conversions that wouldn't have happened otherwise)

Verdict: LinkedIn Ads are incremental (p=0.023) but at 2.1x the attributed CPA. The channel is driving real conversions, but fewer than platform reporting suggests. Consider reducing spend by 30–40% and monitoring incremental CPA weekly. The savings can be reallocated to channels with better incremental efficiency.

→ What if I cut LinkedIn spend by 40% — what's the projected conversion loss?
→ Run the same analysis for our Google Ads branded campaigns
→ Design a time-based test for our TikTok spend
→ Show me the post-treatment window — did conversions recover after pausing?

Dedicated incrementality platforms charge enterprise pricing for this analysis. With Attribution's data export and an LLM, you can run it yourself — as many times as you want, on any channel, at no extra cost.
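For readers who want to see the statistics the LLM is doing under the hood, here is a hedged sketch of a geo holdout analysis like the one above: an observed lift, a bootstrap 95% confidence interval, and a permutation p-value. The daily series are synthetic (randomly generated with roughly a 12% treatment effect), not the real test data.

```python
import random

random.seed(7)

# Daily conversions over a 4-week test window (28 days), by geo.
us = [random.gauss(50, 4) for _ in range(28)]        # control, channel on
uk = [v * 0.88 + random.gauss(0, 2) for v in us]     # treatment, ~12% lower

def lift(ctrl, treat):
    m = lambda xs: sum(xs) / len(xs)
    return (m(ctrl) - m(treat)) / m(ctrl)

observed = lift(us, uk)

# Bootstrap 95% CI: resample each geo's days with replacement.
boots = sorted(
    lift([random.choice(us) for _ in us], [random.choice(uk) for _ in uk])
    for _ in range(2000)
)
lo, hi = boots[50], boots[1949]

# Permutation test: shuffle geo labels; how often is the lift this big?
pooled = us + uk
extreme = 0
for _ in range(2000):
    random.shuffle(pooled)
    if lift(pooled[:28], pooled[28:]) >= observed:
        extreme += 1
p = extreme / 2000

print(f"lift={observed:.1%}  95% CI [{lo:.1%}, {hi:.1%}]  p={p:.3f}")
```

Swap in your own exported daily series for `us` and `uk` and the same three numbers fall out.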

Run these tests continuously, not once a year

Is this channel driving incremental value or just taking credit?

Pause a channel in one region and measure the true conversion drop. If conversions barely change, the channel was taking credit for organic demand.

Should I scale or cut this channel?

Channels with high incremental lift and low saturation should get more budget. Channels with low incrementality are candidates for cuts.

Is branded search actually incremental?

The classic test: pause branded search in one geo and see if organic picks up the slack. Many brands discover branded search has very low incrementality.

How much of Meta's reported ROAS is real?

Compare Meta's platform-reported conversions to the incremental lift from a geo test. The ratio is your incrementality factor — use it to calibrate all future Meta reporting.
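The calibration step is a single ratio. This sketch uses made-up numbers purely to show the mechanic described above.

```python
meta_reported = 500    # conversions Meta claims for the test period
incremental = 175      # conversions the geo test says would vanish

factor = incremental / meta_reported
print(f"incrementality factor: {factor:.2f}")

# Apply the factor to discount future platform reporting.
next_month_reported = 620
calibrated = next_month_reported * factor
print(f"calibrated conversions: {calibrated:.0f}")
```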

Does upper-funnel spend drive lower-funnel conversions?

Test whether pausing awareness campaigns (display, video, CTV) impacts conversion campaigns downstream. This is the hardest question in marketing, and incrementality testing is the only way to answer it.

What's the post-treatment effect?

After you resume spend in the holdout region, how quickly do conversions recover? A long recovery suggests the channel builds demand. Instant recovery suggests it mostly captures existing intent.

Attribution + LLM vs. dedicated incrementality platforms

Dedicated incrementality platforms provide excellent testing with expert support. Attribution gives you the data to do it yourself, as often as you want.

On each line: dedicated platform vs. Attribution + LLM.

Pricing: enterprise contracts ($50K–$200K+/year) vs. a data export add-on with no additional incrementality cost
Test design: designed by the vendor's team vs. designed by your LLM in minutes, with instant iteration
Methodology: vendor-controlled vs. fully transparent (every calculation, every p-value visible)
Turnaround: results on the vendor's timeline vs. real-time analysis, as fast as you can prompt
Test volume: a limited number of tests per contract vs. unlimited tests on any channel
Measurement: conversions counted by the vendor's own system vs. Attribution's neutral first-party tracking

Frequently asked questions


Can I run incrementality tests with Attribution's data?
Yes. Attribution's Data Export Tool provides daily spend by channel, daily conversions, and visitor-level data with geography — everything needed to run geo holdout tests, time-based on/off experiments, and synthetic control analyses. You can design and analyze tests using an LLM like Claude or ChatGPT, or use statistical tools like Google's CausalImpact package. No separate incrementality platform or license is required.

Do I need a dedicated incrementality platform?
No. Attribution's Data Export Tool is available as an add-on for any plan. Once you have the export, you can run unlimited incrementality tests on the data at no additional cost. Dedicated incrementality platforms typically charge $50,000 to $200,000 or more per year, often with limits on the number of tests per contract.

Can an LLM really design and analyze the tests for me?
Yes. Describe your situation to Claude or ChatGPT — which channel you want to test, your spend level, your geographic footprint — and the LLM will design the test protocol, specify what data to pull from your Attribution export, and analyze the results when the test is complete. The LLM handles the statistics: difference-in-differences, synthetic control modeling, p-values, confidence intervals, and plain-English recommendations.

How is this different from the ad platforms' own incrementality tools?
Platform-native incrementality tools use the platform's own conversion data to measure the platform's own impact — a fundamental conflict of interest. Attribution's data export provides independently measured conversions tracked by Attribution's first-party tracking technology, not by the ad platform being tested. This eliminates bias and produces trustworthy incrementality measurements.

How long does an incrementality test take?
A typical geo holdout test runs for 2–4 weeks with an 8-week pre-period baseline for comparison. Time-based on/off tests can be shorter (1–2 weeks). Synthetic control analysis works on existing historical data with no test period required. The LLM can recommend the optimal test duration based on your traffic volume and statistical power requirements.

What data do I need from the export?
For geo holdout tests: daily conversions by geography (available from the events table with IP-based location) and daily spend by channel (from the amounts table). For time-based tests: daily conversions and daily spend by channel over the test period and a baseline period. For synthetic control: daily conversions and spend by channel over at least 6 months of historical data.
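As a concrete illustration of shaping that export, here is a pandas sketch that aggregates the events and amounts tables into daily conversions by geography and daily spend by channel. The table names follow the answer above, but the exact column names are assumptions about the export schema.

```python
import pandas as pd

events = pd.DataFrame({            # one row per conversion event (assumed schema)
    "date": ["2024-05-01", "2024-05-01", "2024-05-02"],
    "country": ["US", "UK", "US"],
})
amounts = pd.DataFrame({           # one row per daily spend record (assumed schema)
    "date": ["2024-05-01", "2024-05-01", "2024-05-02"],
    "channel": ["LinkedIn Ads", "Google Ads", "LinkedIn Ads"],
    "spend": [400.0, 250.0, 410.0],
})

# Daily conversions by geography, for the lift calculation.
daily_conversions = (events.groupby(["date", "country"])
                           .size().rename("conversions").reset_index())

# Daily spend by channel, for the incremental-CPA calculation.
daily_spend = (amounts.groupby(["date", "channel"])["spend"]
                      .sum().reset_index())

print(daily_conversions)
print(daily_spend)
```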