
Trust in Digital Marketing: How Brands Can Eradicate Bots – A Roundtable with White Ops

It’s perhaps an understatement to say that bots are a constant source of irritation and aggravation within the ad tech industry – not only do they frustrate brands’ efforts to build meaningful interactions with their target audiences, but they do so whilst lining bad actors’ pockets with marketers’ (at times, hard-won) budgets.

Fortunately, there are companies dedicated to taking these bandits down and fighting all other forms of ad fraud. One such firm is White Ops, which protects businesses around the world by verifying more than 10 trillion interactions every week. In a private roundtable, White Ops was joined by a number of brand representatives to discuss the issues they currently face when it comes to bots (and other forms of fraud). Here’s a summary of what they discussed.


Why bots are a problem

Whilst bots typically exhibit certain behaviours or characteristics that make them easy to spot, fraudsters have developed increasingly sophisticated bots which are harder to detect. These bots mimic human behaviour – they click on things, scroll, and can even fill in forms. As a result, businesses can end up collecting completely fabricated data, recording artificial interactions, and gathering distorted performance metrics.

The consequence is always waste – brands either end up with fictitious consumer information, or with information stolen from customers who have no interest in the business (a scenario which opens up a GDPR-shaped rabbit hole). And this waste isn’t short-term – once a bot gets into a company’s tech stack, it stays there, becoming part of the firm’s retargeting efforts and creating a vicious circle of ads being served to non-existent users and money going to waste.


How are these brands managing bot mitigation?

For one brand, which is in the midst of building its own in-house agency, removing bots involves leveraging DSP technologies and capabilities to target all direct sellers. The company also reported that it creates inclusion leads across markets, as having marketing managers in different countries gives it more varied insights with which to drive more qualified traffic. Yet whilst this approach, coupled with some other safety measures, offers some protection against bots, it is effectively a black box, and thus lacks both accountability and granularity.

Another brand reported feeling well-equipped to detect ad fraud and bots on the display side, citing its partnerships with different agencies and its use of pre-bid filters and ads.txt targeting as part of its protective measures. The brand is now turning its attention to search. It feels that its anti-fraud strategies have become stronger over the past couple of years, but admitted that it may still rely too heavily on its agency partners in this area.


What are we doing wrong?

The group agreed that many are guilty of shying away from examining the legitimacy of their traffic. Whilst damaging, this behaviour is understandable – from a marketing point of view, awareness of the risk of receiving a high level of invalid traffic often isn’t enough to deter a brand if the channel that traffic is coming from is producing a high number of conversions. Marketers don’t want to risk missing out on legitimate conversions, so they’re not going to look for reasons to close down a potentially profitable channel.

In some instances, this ignorance is simple oversight – some brands remain unaware that they have any issues with bots because they believe that having solutions in place to protect their programmatic operations is enough to keep such fraud away. Whilst these solutions provide a form of safeguarding, they don’t stretch to search and social, meaning that the bots that surface within these channels go undetected. White Ops confirmed the reality of this issue, citing its partnership with an auto brand as an example. The brand had come to White Ops after seeing its conversion rates drop and being unable to resolve the issue itself. After applying its solution, White Ops discovered that 17% of the leads the brand was generating were fraudulent, with 38% of this invalid traffic originating from a single ad platform.

With both scenarios considered, the group agreed that brands should review and audit their channels more regularly and more rigorously. Whilst this will inevitably create more work (and perhaps some difficult conversations with stakeholders), it will prove far better for the long-term health of the business and the wider ecosystem alike.


What do we do about unclassified traffic?

Some of the brands expressed concern about the level of traffic that remains unclassified after undergoing verification processes – if a significant proportion of interactions are left unidentified, then how can marketers know that their ad spend is going towards authentic audiences?

Whilst it remains virtually impossible to definitively determine how much unclassified traffic comes from bots and how much from authentic interactions, anti-fraud firms have processes in place to produce the most accurate classifications possible. For White Ops, this involves applying algorithms designed to examine the technical evidence behind a particular interaction – including IP address, browser or device information, and click patterns, amongst many other signals – and using the findings to categorically identify the interaction as either bot or human. This gives marketers the ability to rule out traffic attributed to bots with greater confidence that they are disqualifying truly inauthentic interactions.
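
To make the idea concrete, the sketch below shows what a simple, rule-based version of this kind of signal analysis might look like. It is an illustration only – White Ops’ actual algorithms are proprietary and far more sophisticated – and every name, threshold, and IP prefix in it is an assumption made for the example.

```typescript
// Illustrative only: a toy rule-based classifier over the kinds of signals
// mentioned above (IP address, browser information, click patterns).
// All names, thresholds, and IP prefixes here are hypothetical.

interface Interaction {
  ip: string;
  userAgent: string;
  msBetweenPageLoadAndClick: number;
}

type Verdict = "bot" | "human";

// Hypothetical datacentre IP prefixes (RFC 5737 documentation ranges)
const DATACENTRE_PREFIXES = ["203.0.113.", "198.51.100."];

function classify(i: Interaction): Verdict {
  // Signal 1: consumer traffic rarely originates from datacentre IP ranges
  if (DATACENTRE_PREFIXES.some((prefix) => i.ip.startsWith(prefix))) {
    return "bot";
  }
  // Signal 2: headless-browser fingerprints in the user agent string
  if (/HeadlessChrome|PhantomJS/i.test(i.userAgent)) {
    return "bot";
  }
  // Signal 3: a click registered implausibly soon after the page loaded
  if (i.msBetweenPageLoadAndClick < 100) {
    return "bot";
  }
  return "human";
}
```

In practice, no single signal is conclusive, which is why verification vendors weigh many signals together rather than relying on hard rules like these.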


Should we be so trusting of the big players?

Traditionally, bot fraud mitigation technology has been associated with programmatic display, but omnichannel has surfaced as an important battleground for the issue. For the group, this brought up the question of whether advertisers are too trusting of tech leaders such as Facebook and Google. Whilst we tend to attribute programmatic fraud to bad actors pumping it into the system – which is still often the case – the role of big industry players in facilitating and enabling fraud may be overlooked.

Some big tech firms that operate under a “walled garden” structure offer a revenue sharing model which rewards users for generating traffic. One of the clearest examples of this is YouTube’s partner programme, which offers content creators a cut of advertising revenue for every ad viewed on their channel. Whilst these revenue sharing models can prove an effective way of garnering legitimate impressions, they too often result in the creation of junk channels – as there’s an incentive for channel owners to acquire more traffic, some will inevitably turn to shady sources to do so.

The group agreed that, as these platforms aim to sell as many impressions as possible, there is ultimately a clear conflict of interest which media buyers should be more concerned about. The same applies to leaders in search, who, despite having systems in place to detect fraud, may be swayed by their financial priorities. Since finding and eliminating fraudulent traffic sources effectively undermines these industry leaders’ commercial goals, the group concluded that turning to an independent third party that isn’t involved in the buying or selling of media would likely provide more trustworthy and accurate verification.


How is the industry currently fighting bots?

There are three main methods of battling bots, all of which involve giving companies a piece of software to place on their website. This software records data about the engagements that take place on the site, which the brands’ anti-fraud partners can then analyse to determine the source of each engagement. The first method uses readily available data to decide instantaneously whether a source is a bot, and removes it accordingly. The second takes a broader approach, analysing aggregated traffic to find unnatural behaviours – such as 100% of clicks on an ad landing in exactly the same position – which indicate inauthentic interactions created by bots. The third technique entails looking out for groups of visitors who are specifically targeting or visiting sites owned or monetised by the same group.

It’s important to note, however, that the last of these techniques is vulnerable to infrastructural changes surrounding identity (such as the deprecation of third-party cookies), as these changes have restricted firms’ ability to analyse audiences in order to detect overlap.
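
The second method lends itself to a simple illustration. The sketch below flags any channel where nearly all recorded clicks land at exactly the same coordinates – the kind of unnatural pattern described above. The data shape, function name, and 95% threshold are assumptions made for the example, not any vendor’s real logic.

```typescript
// Illustrative only: flag channels where an implausible share of clicks
// land at identical coordinates. The 95% threshold is an assumption.

interface Click {
  channel: string; // traffic source the click was attributed to
  x: number;       // click coordinates within the ad
  y: number;
}

function suspiciousChannels(clicks: Click[], threshold = 0.95): string[] {
  // Count clicks per channel, and per exact position within each channel
  const positionCounts = new Map<string, Map<string, number>>();
  const totals = new Map<string, number>();

  for (const click of clicks) {
    const pos = `${click.x},${click.y}`;
    const counts = positionCounts.get(click.channel) ?? new Map<string, number>();
    counts.set(pos, (counts.get(pos) ?? 0) + 1);
    positionCounts.set(click.channel, counts);
    totals.set(click.channel, (totals.get(click.channel) ?? 0) + 1);
  }

  // A channel is suspicious if one exact position accounts for almost all clicks
  const flagged: string[] = [];
  for (const [channel, counts] of positionCounts) {
    const mostCommon = Math.max(...counts.values());
    if (mostCommon / totals.get(channel)! >= threshold) {
      flagged.push(channel);
    }
  }
  return flagged;
}
```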

White Ops’ technique is broadly a combination of the first two methods. The firm gives its ecommerce partners a JavaScript tag to place on their website, which gathers the information used to determine whether engagements came from bots or humans. White Ops can also provide information about the source of an interaction, making it easier for partners to identify their most and least valuable channels and to shut out bots without indiscriminately turning off entire traffic sources. With these precise insights, brands can make informed decisions about where to direct their ad spend, resulting in more efficient and effective campaigns. White Ops also offers to quantify the return on investment that firms stand to achieve by using the solution, factoring in the cost of the tech tax and companies’ respective budgets to provide an estimate of the amount they could save.
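
For a rough sense of how such a tag works mechanically – this is a generic sketch, not White Ops’ actual tag, and the endpoint URL is invented – a client-side snippet might collect basic signals and beacon them to a verification service:

```typescript
// Generic illustration of a client-side collection tag – not White Ops' code.
// The endpoint URL is hypothetical; the browser APIs used are standard.

const signals = {
  userAgent: navigator.userAgent,
  language: navigator.language,
  screen: { width: window.screen.width, height: window.screen.height },
  // Some automation frameworks expose themselves via this standard flag
  webdriver: navigator.webdriver === true,
  referrer: document.referrer,
  // Record the traffic source so detections can be tied back to a channel
  utmSource: new URLSearchParams(window.location.search).get("utm_source"),
  timestamp: Date.now(),
};

// sendBeacon delivers the payload even if the user navigates away mid-request
navigator.sendBeacon(
  "https://verification.example.com/collect", // hypothetical endpoint
  JSON.stringify(signals)
);
```

Because each payload carries its traffic source, the verification partner can report bot rates per channel – exactly the granularity the roundtable brands said black-box approaches lack.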


What needs to be done?

There was consensus that there needs to be more education, both industry-wide and within individual businesses, on the diligence and trust techniques used by brands’ respective partners. Keeping teams informed about the solutions they use is particularly important given how rapidly those solutions have to adapt to fraudsters’ evolving tactics. It will be a never-ending conversation, but an invaluable one.

Brands are often taxed on traffic, and traffic is commonly a significant target for brands’ ecommerce teams. This itself fuels the “fear of finding out”, as marketers can face resistance or scrutiny from business stakeholders who may react negatively to their findings on fraud. Therefore, having effective solutions in place that the whole team understands will fortify them against criticism or disputes levied by stakeholders and other parties who have less of an understanding of the issue.

Another aspect of improving education is conveying the difference between ad fraud and marketing fraud. Whilst ad fraud compromises the validity of each individual impression, marketing fraud compromises the entire marketing lifecycle, from search budget to acquisition efforts and beyond. Because this distinction is not commonly understood, many companies are under the impression that the solutions they use to guard the programmatic side of their business are protecting their entire marketing operation, when this is not the case. With an understanding of the difference between ad and marketing fraud, firms will be equipped to find the solutions they need to be protected against both.

The group also considered the benefits of assessing the quality of traffic as part of the procurement process, a step which some businesses have already incorporated in varying ways. Whilst adopting such a step would obviously create a cost for the business, that cost will likely pale in comparison to the money, effort, and resources that could otherwise be wasted on attempting to convert non-existent consumers. Therefore, it seems logical (and worthwhile) for a business to invest in establishing a unit that will save millions from being lost to fraudsters.

Finally, action must be taken to build trust, the lack of which is currently deemed a notable impediment to successfully tackling bot fraud. The group recognised that investing in independent verification could be the best way of overcoming distrust in traffic analysis, with some intimating that they would be interested in comparing the findings of independent parties against those of their existing vendors.