In this Q&A with ExchangeWire, Andy Evans, CMO, Sovrn, suggests measures against ad fraud that go beyond ads.txt implementation alone and, among other things, discusses the impact of publisher whitelists on mid- to long-tail publishers.
ExchangeWire: Why should brands consider advertising with publishers that aren’t in the comScore 500?
Andy Evans: Brands understandably default to the vast reach and established reputation of big-name publishers, but in doing so they miss valuable opportunities in the mid and long tail of the digital advertising landscape. The internet is home to a multitude of independent, niche publishers and bloggers, whose original, specialist content attracts loyal, engaged audiences who are highly receptive to relevant advertising.
Independent content creators are knowledgeable and passionate about their specialist subjects and, while they may not attract the mass audiences of comScore 500 publishers, they draw like-minded people who genuinely want to learn or to share their own experiences. Brands may not get as many impressions or clicks on an individual niche publisher's website as they could on a larger one, but in aggregate, through the right partner and with the right targeting, buyers will find an audience just as likely to interact with their messaging. Widening your audience can only mean reaching more new customers, rather than relying on the same top 500.
It has been suggested that the industry should move away from the current viewability metrics. What methods would you like to see the industry adopt?
I’d like to see the industry adopt a model of currency that is based on real user engagement, such as CPH (cost per hour) or CPS (cost per second). Currently, most engagement metrics rely on dwell time, which is the time between the ad loading and the user closing the tab. But this doesn’t tell us very much about how the user engages with the ad or page. Just because an ad loads and is in view doesn’t necessarily mean a user is engaging with it. The user could leave the tab open while they go to make a coffee, or right-click to open the page in a new tab to read later and then forget all about it.
Rather than relying on dwell time, the industry could adopt viewable engagement time (VET), which records engagement events such as mouse movements, clicks, swipes, and keyboard activity to measure how long users are actively engaged with the page while the ad is in view. By determining if the user is actually present, and whether they are actively engaged with page content, advertisers gain a more accurate understanding of the value of each ad placement.
Ultimately, the industry will move to measuring the true effectiveness of advertising and its impact on business outcomes, but this is quite a leap from where we are today. To get closer to this point, the industry can combine VET metrics with cost-per-second or cost-per-hour models, which are already in use for direct buys and will soon be available programmatically.
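The VET idea described above can be sketched in code. The following is a minimal illustration, not a standard implementation: the five-second activity timeout and the event model are my own assumptions, chosen only to show how engagement windows might be merged and clipped to the time an ad is in view.

```python
def viewable_engagement_time(event_times, view_start, view_end, timeout=5.0):
    """Sum the seconds the user was presumed active while the ad was in view.

    Each engagement event (mouse move, click, swipe, key press) opens a
    window of `timeout` seconds of presumed activity; overlapping windows
    are merged, then clipped to the [view_start, view_end] interval.
    All times are seconds since page load.
    """
    def overlap(a, b):
        # length of the intersection of [a, b] with the viewability interval
        return max(0.0, min(b, view_end) - max(a, view_start))

    intervals = sorted((t, t + timeout) for t in event_times)
    engaged, current = 0.0, None
    for start, end in intervals:
        if current is None:
            current = [start, end]
        elif start <= current[1]:            # windows overlap: merge them
            current[1] = max(current[1], end)
        else:                                # gap: close out the previous window
            engaged += overlap(*current)
            current = [start, end]
    if current is not None:
        engaged += overlap(*current)
    return engaged
```

For example, events at 0s, 2s, and 30s during a 60-second viewable interval yield 12 seconds of VET (a merged 0–7s window plus a 30–35s window), whereas a dwell-time metric would report the full 60 seconds.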
Publishers have been advised to implement ads.txt on all sites carrying ads to reduce ad fraud. Do you think this is enough to reduce ad fraud?
Ads.txt is a major step forward in reducing ad fraud, giving publishers more transparency into who buys their inventory and advertisers more confidence that the impressions they bid on come from a legitimate source. It will limit the practice of domain spoofing, as well as reducing inventory arbitrage. We encourage all our publishers to adopt it, especially as buyers now actively trade against it.
But ad fraud is a complex and continually evolving issue, and ads.txt on its own isn’t enough to tackle it. The industry must continue to support industry-wide safety initiatives, such as JICWEBS certification, which requires tech companies to demonstrate their products can deal with multiple sources of fraud, and which has recently been bolstered by the launch of the IAB Gold Standard. There are also certifications for buyers, sellers, and third parties from the Trustworthy Accountability Group (TAG) that have stringent criteria. In addition to these initiatives, the industry can use advanced tools from the likes of Integral Ad Science, MOAT, Forensiq, and Pixalate to identify suspicious activity and protect against bot traffic.
Ads.txt is a vital development, but it’s only one of the steps we need to take to make the web a safer place to advertise.
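For reference, ads.txt is simply a plain-text file served at a site's root (e.g. example.com/ads.txt) listing the companies authorised to sell that site's inventory, one comma-separated record per line. The following is a minimal parsing sketch based on the IAB Tech Lab record layout; the sample field names and entries in the test are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdsTxtRecord:
    ad_system_domain: str                    # domain of the exchange or SSP
    seller_account_id: str                   # publisher's account ID on that system
    relationship: str                        # "DIRECT" or "RESELLER"
    cert_authority_id: Optional[str] = None  # optional certification authority ID

def parse_ads_txt(text):
    """Parse ads.txt content into records.

    Skips comments, blank lines, and variable declarations (lines
    containing '=', such as contact details); raises ValueError on
    records with a missing field or an invalid relationship.
    """
    records = []
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # strip inline comments
        if not line or '=' in line:            # skip blanks and variables
            continue
        fields = [f.strip() for f in line.split(',')]
        if len(fields) < 3:
            raise ValueError(f"malformed record: {line!r}")
        relationship = fields[2].upper()
        if relationship not in ("DIRECT", "RESELLER"):
            raise ValueError(f"invalid relationship in: {line!r}")
        records.append(AdsTxtRecord(fields[0].lower(), fields[1], relationship,
                                    fields[3] if len(fields) > 3 else None))
    return records
```

A buyer-side check of this kind lets a bidder confirm, before bidding, that the seller account on a bid request actually appears in the publisher's own ads.txt file.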
UK publishers are reportedly losing an average of £500,000 per year due to ad blocking. What more can be done to tackle this issue?
This is an issue that is not going away. Users who install ad blockers often do so to prevent the deluge of ads that appear on poor-quality sites and sites that offer services like illegal streaming. However, when these same users then visit quality publishers that rely on advertising revenue to thrive, those publishers' ads are blocked too. This is ultimately detrimental to the industry as a whole.
Buyers taking a hardline approach to viewability, ad overload, and content quality could help to improve the situation by reducing the number of ads that unwittingly end up funding the bad players. This might, in turn, have a knock-on effect on the number of ad blockers installed going forward.
The tool we’ve seen most commonly used to address ad blocking is messaging, where blocked web users are asked to whitelist a domain. This could be optional or compulsory to gain access to content. Few ad-block users are willing to whitelist a domain, however, and even when they do there is often very little user data the buy-side can use to determine the value of ad placements, limiting ad revenues.
Subscriptions or micropayments are often bundled with messaging, but it is so far unclear whether users are prepared to pay for text-based content in the same way they are for music and video, as with Spotify and Netflix. While users do appear increasingly willing to pay for specialist content and trusted news sources, the independent web as a whole may be too fragmented for subscription to provide a universal solution.
Two more techniques that can be used to combat ad blocking are ad-recovery, where publishers’ ad stacks are hidden so ad content is not filtered out, and ad insertion, which is a similar principle, but uses vendor-sourced demand. Both of these techniques generate revenue for publishers, but again this is limited by a lack of tracking data. In addition, ads are served to users who have specifically said they don’t want to see them, which can never be positive for user experience.
Finally, publishers could look to native or affiliate advertising, which is either integrated into content so ad blockers don’t filter it out, or meets the acceptable ads criteria. However, the performance of these types of advertising is currently inconsistent.
As things stand today, there is no universal answer, so publishers are likely to settle on one or a combination of the above.
There have been various expressions of concern about the issues an internet free from net neutrality would cause. Why is this topic so controversial, and what does it mean for digital marketers?
On 14 December, 2017, the US Federal Communications Commission (FCC) voted to get rid of net neutrality. This is a hugely important subject. The internet is an open platform shared by all, and net neutrality is a vital attribute that helps keep it that way. It allows small independent content creators to compete on a level playing field with larger platforms, and it provides a forum to share ideas, express opinions, and drive innovation.
In the digital world we live in, an end to net neutrality could ultimately be an end to free speech. Independent websites with specialist content might disappear, and the web could be left to the few who are able to pay for control and preference. Digital marketers may no longer be able to reach the highly engaged audiences of niche websites and would instead be obliged to invest their ad budgets targeting the mass audiences of larger platforms.
Net neutrality is critical to maintain the unique online environment where anyone can conceive, create, and publish content that can be accessed by all.
Brand safety issues have pushed more brands to only work with publisher whitelists. What impact would this have on mid- to long-tail publishers?
It is understandable that advertisers are prioritising brand safety, and whitelists are one way to achieve this. However, restricting media buys to the domains on a limited list has an impact both on campaign performance and on the entire digital advertising ecosystem.
For brands, it means they are missing out on valuable ad placements alongside the original content on quality niche publisher sites, just because those sites aren’t on their whitelist. It’s a bit like the social media filter-bubble phenomenon, where users keep being shown the same type of content rather than ever being exposed to anything new or different. By relying too heavily on limited whitelists, brands could miss out on the reach and return from websites they’ve never heard of and their high-value audiences.
For mid- to long-tail publishers, an increasing reliance on limited whitelisting will inevitably reduce the ad revenue available to them, even though they are delivering high-quality, brand-safe, fraud-free content. In the long run, this could make independent websites unsustainable.
Where whitelists are used, they should be revised and updated regularly to include a broad range of sites from trusted sellers.
Brands can also look beyond the whitelist and choose instead to work with sellers who are committed to industry quality initiatives and uphold stringent brand safety standards. By doing so, they can be confident their ads are served in an appropriate environment, even where the domain isn’t on their whitelist.