A/B testing has been around for a few years now and is an optimisation strategy that seems to sit under the radar, quietly and studiously getting on with the task at hand, without creating any great fuss. Does the fact it's not a glamorous head turner mean it's not getting as much airtime as it deserves? Oren Cohen, head of mobile and personalisation, Optimizely, thinks that could be the case. Here, Cohen speaks with ExchangeWire about the journey of A/B testing and how there's a lot more to it than meets the eye.
ExchangeWire: When we talk about A/B testing, we think about testing one image against another to achieve optimal performance – are we doing A/B testing a disservice by referring to it in such simple parameters?
Oren Cohen: There is a lot more to it than that. A/B testing is a method for making decisions. The common denominator is the level of uncertainty. Decision making is difficult because it requires choosing one alternative over various others, and no one knows which alternative will be the best; A/B testing is a way to gather data on how effective each alternative is and make the right decision, rather than guess. Because it’s so general, it has the most impact when applied to the situations of uncertainty that are most critical to the organisation. Testing one page on a website that few people visit, or one image versus another, will not make a huge business impact. However, if it’s a new product line to sell men’s as well as women’s clothing, it becomes much more critical. The most successful companies we work with find ways to use A/B testing to validate decisions that are truly strategic to the business. These could be decisions such as entering new markets, launching new products, or offering new services. The main advantage of adopting A/B testing as a decision-making system is having an objective and quantified measure of how effective each alternative is.
The example of comparing one image to another is good for explaining how A/B testing generally works, but it does a big disservice to its applications. Companies that don’t think past these simple applications are missing out on most of the impact that A/B testing has the potential to provide. As for A/B versus multivariate testing, the terms are used almost interchangeably; unless you are speaking to a practitioner, the difference between the two is a very technical one.
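To make that "objective and quantified measure" concrete: comparing two variants usually comes down to a significance test on their conversion rates. Below is a minimal sketch of a two-proportion z-test in Python, using made-up visitor and conversion numbers; it illustrates the general statistical idea, not Optimizely's own statistics engine.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts for variants A and B.
    Returns the z statistic and a two-sided p-value."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF (computed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: A converts 5.0%, B converts 5.5%, 20,000 visitors each
z, p = ab_test_z(1000, 20000, 1100, 20000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below the chosen significance threshold (commonly 0.05) is the quantified evidence that one alternative really outperforms the other, rather than differing by chance.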
A/B testing is a tried-and-tested optimisation method, but does this mean that it's sometimes lower down the list on a brand's optimisation strategy?
The opposite is true. The fact that A/B testing is a proven method of improving financial results for the world’s best-known brands actually makes it a higher priority. A few years ago, A/B testing was a new concept only run by innovators. Today, it’s becoming more of a baseline optimisation strategy.
How can brands maximise the value of A/B testing?
It’s a loaded question, as there are so many things brands can do. Brands can maximise the value of A/B testing by investing in the strategy that informs it, ahead of the execution. That means anyone in charge of the optimisation strategy has to be well informed about the key business objectives and metrics, so that the roadmap ultimately supports the things that matter most. Don’t measure the impact based on how many experiments you run, but on whether they address the most critical questions.
Then, to break down these objectives and find the most relevant experiments to run, a company should build an optimisation roadmap grounded in those business objectives. When they drill down into these objectives, they should invest in the analysis, both quantitative and qualitative, that uncovers the most critical problems and allows for sensible prioritisation. Investing in the optimisation strategy maximises the value a brand can get from A/B testing.
Some brands get it wrong by not running A/B tests continually. The best brands understand that user behaviour changes and that there is seasonality, so they run some experiments again a few months later to reconfirm, or reject, the initial findings.
What limitations does A/B testing have and what alternatives exist to overcome those?
A common misconception is that A/B testing is limited to online channels, but more creative applications can go beyond that. The most successful users of A/B testing find ways to use it to validate questions outside the online space. For example, if your brand is going to invest tens of thousands of pounds in above-the-line advertising, such as on the tube or on billboards, you could A/B test images and messaging on your website, measuring user engagement. Then you could apply your learnings to the offline ads.
There are, of course, shortcomings to A/B testing, too. The primary limitation, when thinking of A/B testing in the fast-moving online world, is the dependence on certain traffic levels to achieve statistical significance. There is no clear rule about how much traffic is 'enough', because the answer depends on what you want to achieve and how long you can wait.
The best advice, to get the most within your brand's limitations, is to educate your team on the tradeoffs between traffic volume, conversion rate, level of statistical significance, and time, and then manage your optimisation roadmap within the levels that are relevant for your business.
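Those tradeoffs can be made concrete with a standard sample-size calculation for a two-proportion test. The sketch below uses hypothetical traffic and conversion figures to show how baseline conversion rate, detectable lift, significance level, and statistical power combine to determine how long a test must run; it is a textbook approximation, not any vendor's specific methodology.

```python
import math
from statistics import NormalDist

def required_sample_size(base_rate, lift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative lift in conversion
    rate, using the standard two-proportion sample-size approximation."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

def days_to_significance(daily_visitors, base_rate, lift, **kw):
    """How long a 50/50 split test must run given the site's daily traffic."""
    n_per_variant = required_sample_size(base_rate, lift, **kw)
    return math.ceil(2 * n_per_variant / daily_visitors)

# Hypothetical site: 5% baseline conversion, hoping to detect a 10% relative lift
print(required_sample_size(0.05, 0.10))        # visitors needed per variant
print(days_to_significance(4000, 0.05, 0.10))  # days at 4,000 visitors/day
```

The formula makes the tradeoffs visible: halving the detectable lift roughly quadruples the required sample, while more traffic or a looser significance level shortens the wait. That is exactly the negotiation between traffic, confidence, and time that each team has to manage.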
With the proliferation of new media and personalisation, does A/B testing hold the same power that it used to, when broadcast messaging allowed statistically significant scalability?
Yes, it does. Personalisation is a powerful tool to tailor content and messaging to segments and individual users.
However, so many of the elements on a page are broadcasts, which do not get personalised, even on the most advanced sites in the world, such as Amazon. This means that huge sections of your website are still broadcast and the value of A/B testing these sections does not change at all.
For example, you might be personalising the filters applied to a specific category page and the products you highlight for a specific user, but the overall layout of the page, the number of filtering options, and the main navigation options usually remain the same. A/B testing these elements, and the choices you provide all your users, can still result in huge increases in user engagement and sales. Scalability still exists. If anything, personalisation is opening up new opportunities to A/B test; but, within that, people need to understand the tradeoffs of traffic levels, how big their audiences are, and how fast they can make these decisions.
How can A/B testing maintain relevance as an optimisation tool?
A/B testing is a critical tool for making business-critical decisions based on data in situations of uncertainty.
I firmly believe A/B testing is going to remain a critical decision making tool in the fast-moving online world, which presents constant decision junctions of uncertainty. I don’t see it becoming less relevant at all.
It was a new concept only a few years ago and now more and more companies are using it. You can collect data relatively quickly, and A/B testing will progress as companies use it to address problems to which it previously wouldn’t have been applied.