“Great, so the course helped,” the other replied.
Everyone loves a good stats joke, right?
The principle that correlation does not imply causation, as illustrated by the joke above, is fundamental to statistics, and it’s important to keep in mind when evaluating online media. It’s ironic, then, that it’s usually completely ignored.
Standard online media performance measurement assumes that if someone converts in some way on a brand’s website, then the last piece of online advertising served to them must have been what drove the conversion.
Fixed attribution models, while they do look at more than just the last piece of activity served, tell the same lie in a slightly different way: they assume the effect of any channel/placement/keyword is determined solely by its position in a customer’s purchase journey.
Moving credit from the last piece of activity to the first piece, or spreading it throughout the journey based on a predefined, fixed model, does nothing to help answer the underlying causality issue.
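The fixed models described above can be made concrete with a small sketch. The function below is a hypothetical illustration (the rule names and the example journey are assumptions, not any vendor’s implementation) showing how last-click, first-click, and linear models each carve up credit by position alone:

```python
from collections import defaultdict

def fixed_attribution(journey, model="last"):
    """Assign one conversion's credit across a journey's touchpoints
    under a fixed positional rule: 'last' click, 'first' click,
    or 'linear' (spread evenly)."""
    credit = defaultdict(float)
    if model == "last":
        credit[journey[-1]] += 1.0
    elif model == "first":
        credit[journey[0]] += 1.0
    elif model == "linear":
        for touch in journey:
            credit[touch] += 1.0 / len(journey)
    return dict(credit)

journey = ["display", "search", "email", "search"]
print(fixed_attribution(journey, "last"))    # all credit to the final touch
print(fixed_attribution(journey, "linear"))  # credit spread evenly across touches
```

Whichever rule is chosen, the output depends only on where a touchpoint sits in the path, never on whether that touchpoint actually influenced the outcome.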
So, how can we look beyond what’s served and understand what causes conversions?
The problem with all fixed models of attribution is that they only look at a very small proportion of a brand’s online advertising, namely, those ads that were served to users who ended up converting.
Sad as it may seem to marketers, the vast majority of online media activity (and indeed all media activity) is served to users who don’t end up becoming customers of their brand. Including these users in the analysis is the key to beginning to answer the causality question.
At MPG Media Contacts, we can do this using Artemis, our proprietary attribution modelling tool, which holds data on all online media activity right down to the cookie level. This provides a huge wealth of behavioural and media data. Obviously, having all of that data is fantastic, but what really sets the solution apart is what it’s able to do with it.
Using a bespoke algorithm we are able to compare the journeys of users who convert with those of people who don’t. This means we are able to identify and assign credit to the sites/placements/keywords that are more likely to appear in user journeys that end in a conversion than they are to appear in journeys that don’t.
Just as important in the process is the ability to identify the pieces of activity that are just as likely to appear whether a user converts or not, and to down-weight the credit they receive accordingly. By looking at the data in this way, you can get much closer to understanding which activity is working hardest to help cause conversions, rather than just appearing to.
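The comparison described above can be sketched as a simple likelihood-ratio score. This is not Artemis’s proprietary algorithm — the function, the smoothing choice, and the example data are all assumptions for illustration — but it shows the core idea: channels that appear far more often in converting journeys than in non-converting ones score high, while channels equally likely in both score near 1.0 and earn little incremental credit:

```python
from collections import Counter

def journey_credit(converting, non_converting, smoothing=1.0):
    """Score each channel by how much more often it appears in
    converting journeys than in non-converting ones.
    Laplace smoothing avoids division by zero for rare channels."""
    # Count each channel once per journey it appears in
    conv_counts = Counter(ch for j in converting for ch in set(j))
    non_counts = Counter(ch for j in non_converting for ch in set(j))
    scores = {}
    for ch in set(conv_counts) | set(non_counts):
        p_conv = (conv_counts[ch] + smoothing) / (len(converting) + 2 * smoothing)
        p_non = (non_counts[ch] + smoothing) / (len(non_converting) + 2 * smoothing)
        scores[ch] = p_conv / p_non  # ~1.0 means no difference either way
    return scores

# Hypothetical journeys: search shows up mostly when users convert
converting = [["search", "display"], ["search"]]
non_converting = [["display"], ["display", "email"]]
print(journey_credit(converting, non_converting))
```

Note that this sketch only captures the presence/absence comparison; a production model would also control for audience and exposure differences between the two groups.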
Every time Artemis produces a new attribution report, a bespoke and unique model is built based on the specific data of the client and date period in question. The comparison of converting and non-converting journeys is used as the base for the algorithm, which also takes into account myriad other factors (recency of interaction, position in the journey path, search position, etc.) to build the attribution model. Thanks to the algorithm, every model is able to account for the unique nuances of each client’s business and media activities and, as each of these changes over time, to adjust each newly built model accordingly.
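To illustrate the kinds of per-touchpoint factors mentioned above, here is a minimal sketch. The specific factors and their values (a seven-day recency half-life, a boost for first and last positions) are hypothetical examples of how such signals might combine, not the actual weights any real model uses:

```python
def touch_weight(days_ago, position, path_length, half_life=7.0):
    """Weight a single touchpoint by recency and journey position.
    Recency: exponential decay, halving every `half_life` days.
    Position: hypothetical U-shaped boost for first and last touches."""
    recency = 0.5 ** (days_ago / half_life)
    positional = 1.5 if position in (0, path_length - 1) else 1.0
    return recency * positional

# A touch from today at the start of a 3-step path outweighs
# a week-old touch from the middle of the same path
print(touch_weight(days_ago=0, position=0, path_length=3))
print(touch_weight(days_ago=7, position=1, path_length=3))
```

Combining factor weights like these with the converting/non-converting comparison is one plausible way a model could be rebuilt per client and per date range as the underlying data shifts.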
Using this modelling technique, you can go beyond looking at just the activity that is served and identify the activity that works hard to introduce brands and products to people and change their minds.
Using this model means we can answer the question of causality for our clients in a way that fixed attribution models are not able to.
Find out more about attribution at our event, “The Attribution Dilemma”, at our offices on 8th November, where speakers from leading businesses such as Google and Adobe will present, alongside a panel session chaired by ExchangeWire’s own Paul Silver.