
In today’s performance-driven marketing world, the pressure to prove ROI is intense. Marketers are constantly asked: What did this campaign actually do? Did it move the needle — or just make noise?
While traditional models like marketing mix modeling (MMM) help answer some of these questions, they aren’t always the best fit. When you’re working with:
- a new product
- a new brand
- a new channel or tactic with no benchmarks
- a new target audience
… you don’t have the luxury of prior performance to anchor your assumptions. You’re essentially trying to use the past to predict a future that doesn’t look like the past at all.
And even if you do have early signals, such as reach, clickthroughs, response rates or site visits reported by the platforms, how do you know what would have happened anyway if you hadn’t launched the campaign?
You don’t.
This is where test-and-control, or lift testing, becomes not just helpful, but essential. Lift testing is a measurement approach that doesn’t just estimate impact but proves it with statistical rigor. There’s something deeply satisfying about knowing — not just assuming — that your marketing is working. If you want to innovate with confidence, test-and-control isn’t optional.
How lift testing works — and why it’s so powerful
At its core, test-and-control is about designing in-market experiments that isolate your media’s true impact. You do this by identifying matched markets or groups — one that receives the campaign (the test group) and one that doesn’t (the control group). When designed correctly, any significant difference in outcomes between the two can be attributed to the campaign itself.
Here’s what makes this approach so effective:
- It’s real-world proof: You aren’t relying on modeled assumptions; you’re seeing actual market behavior in real time.
- It filters out the noise: External factors such as seasonality, competitive activity or economic shifts are accounted for, because both the test and control groups experience them.
- It’s channel-agnostic: You can apply this to programmatic, social, CTV, audio and more.
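To make the core comparison concrete, here’s a minimal sketch with purely hypothetical numbers (the sign-up figures below are illustrative, not from any real campaign). Because the markets are matched, the gap between test and control is read as incremental lift:

```python
# Hypothetical numbers, for illustration only: matched markets mean the only
# meaningful difference between the groups is the campaign itself.
control_signups = 1_000  # average sign-ups in markets that did not see the campaign
test_signups = 1_150     # average sign-ups in matched markets that did

lift = (test_signups - control_signups) / control_signups
print(f"Observed lift: {lift:.0%}")  # Observed lift: 15%
```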
A real-world scenario: testing new media tactics
Let me walk you through an example. In a recent effort to test the impact of upper-funnel media, we worked with a brand that had historically invested in proven performance channels including YouTube, paid search and social media. While we saw strong potential in building awareness to drive longer-term growth, the client was understandably hesitant to invest in upper-funnel channels without tangible results. Waiting six months or more to collect enough data for modeling wasn’t going to cut it.
Rather than wait for long-term modeled results, the team designed a geo-lift test across 50 designated market areas (DMAs). Programmatic ads ran in 40 test markets and were withheld in 10 control markets. These markets were selected and grouped using machine learning clustering, ensuring similar trends in impressions, website activity and account sign-ups. No additional media or promotions were layered on, helping simulate a controlled environment.
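For readers curious about how that market matching might look in practice, here’s a minimal Python sketch assuming hypothetical pre-period metrics per DMA. The file name, column names, cluster count and holdout fraction are placeholders, not the actual setup used in this engagement:

```python
# Illustrative sketch: group DMAs into matched clusters on pre-period metrics,
# then hold out some markets in each cluster as controls.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical pre-period data: one row per DMA
dmas = pd.read_csv("dma_preperiod_metrics.csv")  # columns: dma, impressions, site_visits, signups

features = ["impressions", "site_visits", "signups"]
X = StandardScaler().fit_transform(dmas[features])

# Cluster the DMAs into groups with similar baseline behavior
dmas["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(X)

# Within each cluster, randomly hold out ~20% of markets as controls
dmas["group"] = "test"
for c, grp in dmas.groupby("cluster"):
    holdout = grp.sample(frac=0.2, random_state=42).index
    dmas.loc[holdout, "group"] = "control"

print(dmas.groupby(["cluster", "group"]).size())
```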
The result? Real, statistically significant lift, not just correlation or assumed attribution. Across the 12-week experiment, 3 out of 5 market clusters showed measurable lift in engagement and business conversions, up to 15% in some cases. With test-and-control, the insights are immediate and the impact is real.
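Here’s a hedged sketch of how lift and significance might be checked once the campaign-period data comes in, again with hypothetical column names. A production geo-lift analysis would typically lean on more robust methods (difference-in-differences, synthetic control) than the raw Welch’s t-test shown here:

```python
# Illustrative sketch: compute lift and a simple significance check per cluster.
# The data shape and metric names are hypothetical.
import pandas as pd
from scipy import stats

# Hypothetical campaign-period outcomes: one row per DMA per week
outcomes = pd.read_csv("dma_campaign_outcomes.csv")  # columns: dma, cluster, group, week, conversions

for c, grp in outcomes.groupby("cluster"):
    test = grp.loc[grp["group"] == "test", "conversions"]
    control = grp.loc[grp["group"] == "control", "conversions"]

    lift = (test.mean() - control.mean()) / control.mean()
    t_stat, p_value = stats.ttest_ind(test, control, equal_var=False)  # Welch's t-test

    print(f"cluster {c}: lift = {lift:+.1%}, p = {p_value:.3f}")
```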
Wait, isn’t the real world messy? (Yes. And that’s the point.)
It’s important to acknowledge something: This isn’t a lab experiment. Test-and-control in the real world doesn’t happen in a vacuum.
There are always outside factors at play. Things like:
- Competitive media in the same markets
- Regional consumer behaviors and seasonal nuances
- Economic shifts or local news events
- Varying brand awareness by geography
We can’t control them all. And we don’t pretend to.
But here’s the thing: By matching test and control markets through clustering based on similar media histories, seasonality trends and business dynamics, we can isolate lift directionally and reliably.
The question isn’t: Did media cause 100% of this result?
The better question is: Did media contribute enough impact to be worth scaling?
We’re not chasing flawless causality — we’re chasing confident action.
So, why aren’t more marketers doing this?
The answer often comes down to legacy thinking and inertia. Many teams default to MMM or last-click attribution, even when they know the results are limited.
Modeling, platform measurement and lift testing don’t compete; they complement one another. My advice:
- Use MMM for long-term planning.
- Use attribution for day-to-day optimizations.
- When you need to convince the C-suite to scale a bold idea, prove it via lift testing.
Final thought: Proof beats assumption
In an accountability-obsessed marketing world, it’s tempting to chase fast metrics and fancy dashboards. But when the stakes are high — when you’re launching, testing or shifting strategy — you need more than indicators. You need evidence.
It’s not just about measurement. It’s about building a culture of experimentation, making braver decisions and doing more of what actually works.
So, next time you’re faced with a high-stakes media investment or a new idea you’re not 100% sure will work, don’t guess. Test it.
Because the best marketing doesn’t just tell a story. It proves one.
Want to learn more and continue the conversation about lift testing? Rise can help.