Attribution and Incrementality Testing

Why attribution and incrementality testing are essential to measurement systems in marketing

Written by Rachel Fox
Published on January 5, 2023

Through supporting clients and from our own in-house experience, we understand how painful marketing measurement can be. We're all keen to understand the effectiveness of our marketing channels and, ultimately, to know the optimal way to invest across the marketing mix. Are we spending the right amount of money in the right places? Is there wastage we could cut out? Would we grow faster if we invested more in a particular channel or objective? These are common questions we hear from our clients and have asked ourselves over the years.

Through extensive research across the industry, we came to realise there are many criticisms of existing measurement strategies but few suggestions of solutions that work in practice. We've found that there's no single perfect solution. After all, the problem we are seeking to solve is an incredibly complex one: understanding which touch points influence a consumer to purchase, when we can't necessarily track all touch points or know which were seen by the consumer, is ambitious.

We've found that seeking small, regular improvements to our understanding of the channel mix, by using multiple solutions together, is the most effective approach. We've tested and disregarded a number of options so you don't have to, and as the title suggests, we believe that a combination of attribution and incrementality testing gives you the most robust output at the fastest rate. Read on for the pros and cons of each and why they work together harmoniously.


Attribution:

Attribution assigns credit to one or more touch points for driving a conversion. This can provide granular insight into which specific assets and messages drive revenue most efficiently, and is particularly useful for performance marketers. It allows for multivariate testing and direct performance comparisons between variants. For example, a retailer could run two sets of ads, one with a 2-for-1 message and another with a 50%-off sale message, and assess which message drove a lower cost per sale or higher ROI.

When designed well, attribution can create a consistent approach to measurement across multiple channels, particularly in the digital space where detailed tracking can be dynamically applied. It is an always-on measurement solution, allowing for trend-based analysis and the collection of data over a long period of time. Other direct channels such as direct mail can be included in attribution models, provided some assumptions are made about their priority relative to digital channels.

However, all attribution models have limitations. Standard models rely on broad assumptions about how a customer is influenced, which can be challenging to support with real evidence. Data-driven and multi-touch attribution models require a reliable input of data for all touch points in each customer journey. This is difficult to achieve for a variety of reasons: cross-device challenges, availability of data, and changes to privacy and cookie usage, to name a few. If conversions can happen offline, either in store or through a call centre, these cannot always be captured by the attribution model, and some channels will likely be under-credited. It is important to note that if the input to a model is missing significant data, the output of the model will be unreliable.

Lastly, one of the key differences between attribution and incrementality is that attribution assumes the touch points in a user's journey deserve some credit for the conversion. What if a consumer bought an umbrella because it was raining? Or invested in a beauty product based on a friend's recommendation? Environmental factors, word of mouth, brand loyalty and even convenience can all play a part in a consumer's decision to purchase, but these simply aren't factored into attribution models.

Put simply, human psychology and behaviour are difficult to predict, and a single set of rules and assumptions cannot realistically be applied to a broad demographic. That being said, the comparative performance assessments that can be made between assets which can be successfully tracked and attributed remain powerful. Returning to the message test example above, this is still a fair and useful comparison to make, since both messages are equally impacted by the flaws we know exist within attribution. It is therefore still fair to declare a winner based on which has the lower cost per sale. Maintaining an attribution model allows us to run low-level tests at a creative or offer level, which are known to drive significant shifts in performance.

Incrementality testing:

When performed over a large enough sample size to achieve statistical significance, incrementality testing reliably reveals a causal relationship between the media and the KPI measured. By withholding some customers from advertising and exposing others, incrementality testing isolates and measures what happens as a direct result of users' exposure to the particular media being tested. This is particularly useful for validating attribution at a channel level.

Provided the sample size is big enough to achieve statistical significance, multi-celled incrementality tests can be run. These are designed to reveal the impact of a combination of two variables versus one variable alone and a control group. For example, you may have a hypothesis that buying both TV and YouTube advertising is more effective than TV alone. In this test, users in Cell A would be exposed to TV and YouTube advertising, Cell B would be exposed to TV advertising only, and Cell C would be a control group exposed to neither.
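As a rough sketch, the three-cell split described above could be implemented as a simple randomised assignment. The cell names, weights and audience size below are illustrative only, not taken from any real test design:

```python
import random
from collections import Counter

# Illustrative three-cell incrementality test assignment.
# Weights are hypothetical; a smaller control cell keeps more of the
# audience addressable while still allowing results to be rescaled.
random.seed(42)  # fixed seed so the split is reproducible

cells = ["A: TV + YouTube", "B: TV only", "C: control (no exposure)"]
weights = [0.4, 0.4, 0.2]

users = range(10_000)  # hypothetical audience of user IDs
assignment = {u: random.choices(cells, weights)[0] for u in users}

# Sanity-check that the realised split roughly matches the intended weights
print(Counter(assignment.values()))
```

In practice the assignment would be done by the ad platform or a geo split rather than in analysis code, but the principle is the same: every user lands in exactly one cell, and the cell proportions are fixed up front.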

The key benefit of an incrementality test is that both the control and exposed groups are equally affected by factors such as brand equity, word of mouth and loyalty, so the difference in results between the two provides a clear indication of the results caused by exposure to the advertising tested.

For example, let's say an advertiser is confident in Channel X based on the results observed through attribution. The blue line in figure 1 represents the daily trend in sales using last-click attribution when the advertiser spends on Channel X. On average there are 300 sales per day recorded, coming to a total of 9,312 sales across the entire month. If the spend for the month was £1m, then according to last-click attribution Channel X drove all 9,312 sales, resulting in a CPA of £107. How many of those sales were incremental, though?

Imagine that in another scenario, under exactly the same conditions, we hadn't run any advertising through Channel X and observed the results graphed as the orange line. This indicates that an average of 100 sales a day are achieved irrespective of the presence of Channel X, suggesting Channel X is in fact only driving an average of 200 sales per day, not 300. This would mean sales generated as a direct result of Channel X are in fact around 6,200, resulting in a CPA of £161; a 50% increase on the CPA we thought we were achieving through last-click attribution.

Insights like this tell us whether we are under- or over-crediting a channel and provide a measure of by how much. When channels are over-credited, there may be cost efficiencies to be made, and further experiments can be designed to measure this. When channels are under-credited, we can increase the margin of acceptable CPAs and stress test scaling investment in the channel.

Figure 1
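The arithmetic in the Channel X example can be reproduced in a few lines. This is a minimal sketch using the article's illustrative figures and assuming a 31-day month; the incremental total lands at roughly 6,200 once daily-average rounding is taken into account:

```python
# Incremental CPA calculation using the Channel X example figures
# (illustrative numbers; a 31-day month is assumed).
spend = 1_000_000          # monthly spend on Channel X (£)
attributed_sales = 9_312   # sales last-click attribution credits to Channel X
baseline_per_day = 100     # average daily sales observed without Channel X
days = 31

baseline_sales = baseline_per_day * days               # ~3,100 organic sales
incremental_sales = attributed_sales - baseline_sales  # ~6,200 incremental

attributed_cpa = spend / attributed_sales    # what attribution reports
incremental_cpa = spend / incremental_sales  # what the test reveals

print(f"Attributed CPA:  £{attributed_cpa:.0f}")   # ~£107
print(f"Incremental CPA: £{incremental_cpa:.0f}")  # ~£161
print(f"CPA increase vs last-click: {incremental_cpa / attributed_cpa - 1:.0%}")
```

The same three inputs (spend, attributed conversions, control-group baseline) are all that is needed to translate any channel-level test result into an adjusted CPA.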

A side benefit of regular incrementality testing is the ongoing collection of data against a control group, which can help provide a sense of the shift in brand awareness and favourability over time. Of course, there are a range of other ways incrementality tests can be used, including measuring the benefit of increased investment in a particular channel, or surveying both control and exposed groups to understand the impact on brand metrics.

50:50 split tests are generally considered the gold standard, but smaller, more practical control groups of 10-20% can be used and the results rescaled for comparison. It may take longer for statistical significance to be reached, but this is often more palatable to the business. Most advertising platforms now offer randomised control group features. If these aren't available, or you choose to run a multi-celled incrementality test, there are other methods for defining your control, such as geographical location.
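To illustrate why a smaller holdout takes longer to reach significance, here is a hedged sketch of a two-proportion z-test comparing a 90% exposed group against a 10% control. All conversion numbers are invented for illustration:

```python
import math

# Hypothetical 90/10 split: exposed group vs holdout control.
exposed_n, exposed_conv = 90_000, 2_070   # 2.3% conversion rate
control_n, control_conv = 10_000, 200     # 2.0% conversion rate

p1 = exposed_conv / exposed_n
p2 = control_conv / control_n

# Pooled standard error for the difference in proportions
p_pool = (exposed_conv + control_conv) / (exposed_n + control_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
z = (p1 - p2) / se

# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"lift: {p1 - p2:.4f}, z = {z:.2f}, p = {p_value:.4f}")
```

With these made-up numbers the 0.3-point lift comes out borderline (z just under 2, p a little above 0.05), which is exactly why a 10% control group typically needs to run for longer, or accumulate more conversions, before a result can be called with confidence.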

This is often the only option when testing a channel like TV, where spots are bought regionally. Thorough pre-experiment analysis is key to determining the ideal setup, minimising cross-contamination and ensuring results remain fair and robust.

In general, incrementality tests are not quick to set up, nor do they yield results quickly. Unlike attribution, this is generally not an always-on solution, as it is impractical to withhold users from advertising consistently. Instead, it provides marketers with a snapshot in time of how the media tested performs, given the rest of the channel mix and the current conditions. In years to come, when conditions and the marketing mix have changed, the same results may not be observed. It is also generally impractical and ill-advised to run multiple tests in parallel, so channel-level tests to validate attribution generally need to be run sequentially, which takes time.

Despite the higher setup effort, incrementality testing is critical in our eyes, as it is the only method to definitively measure the impact on a KPI as a direct result of advertising. This is particularly pertinent with so many platforms offering effective data-driven bidding solutions, which could easily result in channels being over-credited through attribution models.

Why marketers need to use both

Put simply, the two are complementary and, when used together, edge our understanding closer to the true picture of performance. Attribution gives us a detailed but imperfect view of performance. Despite these imperfections, attribution is still essential for making fair performance comparisons within the same channel and objective. By remaining conscious of the limitations of attribution and planning appropriate incrementality tests, we can better understand the causal relationship between a particular channel and conversions and revenue. This provides a measure of whether attribution under- or over-credits a particular channel and by how much. From this, adjusted CPA targets for particular channels can be inferred, giving marketers the confidence to adjust the distribution of budget across channels, knowing that overall sales numbers will benefit from the change. This allows marketers to account for the imperfections in attribution and actively work to leverage channels based on their true performance, without losing the detailed view of performance and the ability to make granular optimisations. Using both in a constant test-and-learn fashion can support both marketing effectiveness and growth objectives successfully.

How we can help

Are we spending the right amount of money in the right places?

Is there wastage we could cut out?

Would we grow faster if we invested more in this particular channel or objective?

If you find yourself asking the same types of questions, we'd love to help answer them with you.

Through our significant practitioner experience, Loop Horizon have supported clients in successfully applying the techniques mentioned in this article, alongside a broader range of data-driven marketing strategy and delivery support.

We work with our clients collaboratively to understand their specific needs and the environment they are operating in, to develop the best approach to enable them to find the answers they're looking for. Not only do we design the appropriate strategy but we deliver the end-to-end process through activation, embedding and any required updates to data and technology to enable marketers with the visibility of accurate reporting and optimisation tools that they need.

If you find yourself resonating with these challenges in your own business, do get in touch to find out more about how we can support you.