
Combating Ad Fraud & Hate Speech: Q&A with Jake Dubbins, CAN


In this exclusive interview with ExchangeWire, Jake Dubbins, co-chair of the Conscious Advertising Network (CAN), discusses what steps the advertising and marketing technology industries can take to combat ad fraud, foster diversity, and help quell the rising tide of hate speech.

How is CAN approaching the prevalent issue of ad fraud? Can meaningful and permanent solutions be found in the coming decade, or will it remain a fraud versus anti-fraud arms race?

When CAN first started, we knew that we did not have the capability to solve all these big issues on our own. A great deal of work had already been done in a number of our areas of focus, so we saw it as our job to signpost best practice and to fill in the gaps. We worked with JICWEBS to write our Ad Fraud manifesto, as they had been working on these issues for a long time.

One of the key problems we see is a lack of knowledge at the top of advertisers' businesses. Many C-suite leaders do not understand the scale of the problem; many have never even heard of JICWEBS. How can you enforce solutions in your supply chain if you don't have a handle on the problem or know who can help solve it?

The other big thing is the framing of the issue. In the advertising and ad tech bubble, most of the conversation at the moment is about the ICO findings into ad tech, but the gaze remains inward. Too little focus is put on the real-world implications of ad fraud. What does it fund? Estimates of the amount of money lost to ad fraud in 2019 range from USD$5.8bn (£4.4bn) (White Ops and the ANA) to USD$23bn (£17.6bn) (Cheq). Wherever the truth lies, these are big numbers. This is big money lost by advertisers and big money funding things like organised crime. This is not just a percentage on a spreadsheet, presented on a Tuesday and forgotten about on a Wednesday.

In terms of permanent solutions, I really do not know the answer to that, as the types of fraud change and become more sophisticated all the time. There are commercial companies entering the market to drive transparency and follow where the money goes. Having visibility of the supply chain is obviously a big improvement, but to a degree there will always be bad actors trying to game the system.

With self-certification schemes falling under scrutiny recently, is this merely a perception issue, or does more independent auditing and enforcement need to be brought in to combat ad fraud?

Absolutely, more independent auditing and enforcement needs to be brought in. We have had years of self-certification, and there are still huge problems in the system. At CAN we talk a lot about the fact that the ethics have not kept pace with the technology of modern advertising. Ad tech is a largely unregulated marketplace, developing at incredible scale with huge money behind it. Just because we can do something doesn't mean that we should. Ad tech has many unintended consequences, industrial-scale ad fraud being one of them. Most of the industry bodies (ISBA and the IPA, for example) believe there should be independent oversight of the tech platforms.

A balance needs to be struck between having robust third-party oversight of the industry and the companies that operate within it, and ensuring we do not create a huge barrier to entry. I don't think many people want a further concentration of power among the few big tech players who have the money and the man/woman power to manage compliance. There needs to be space for new players to enter the market. For small agencies and ad tech players, the market is already difficult to enter. For example, to be 'Certified Against Fraud' by TAG and JICWEBS, small businesses have to pay £18,750 if they are not members of a trade body (which also costs money to join). That is a pretty big barrier to entry for most small agencies and really needs looking at. Is the industry looking to drive best practice for everyone, or is it locking out smaller, ethical agencies trying to do the right thing?

What more can be done to foster diversity in the ad tech and martech industry, particularly at the c-suite level?

Advertising and ad tech need to celebrate all forms of diversity, including all genders, multicultural backgrounds, ages, sexual orientations, people with disabilities, neurodiverse traits, class and all socio-economic groups, faiths, ideologies, and more. We are all intersectional: we are never just one demographic. Identities combine in many ways.

To live and breathe this ethos, we have to understand that inclusion needs to be the red thread running through the entire end-to-end process: research, strategy, teams, technology, developers, media placement, and measurement.

At board level, all leaders should be accountable for driving diversity throughout the DNA of their brand and business. On a practical level, this involves setting KPIs which are regularly reviewed against business objectives, and understanding the biases and barriers which get in the way of recruitment, career progression, and retention. The focus shouldn't be on box-ticking by demographic, but on cognitive diversity.

The C-suite often do not know where to start, so they need to bring in help. Getting accredited under Creative Equals' Media Equality standard is a good start. Have all teams trained in 'inclusive recruitment', making sure they understand that their biases are barriers to hiring for difference. Hire from alternative platforms, like The Dots (61% women, 33% BAME, 15% LGBTQIA+), and use blind-hiring methods.

Ask for help. The problem will not fix itself just by speaking about it on endless panels.

What more should the programmatic industry be doing to prevent the funding of hate crime and child harm?

Take it seriously, for a start. We need to recognise that hate speech has real-world implications. Hate crimes have doubled in England and Wales in the last five years. One of the main reasons I set up CAN with my co-chair, Harriet Kingaby, was because my neighbour in East London was the victim of a hate crime in 2017. A brand-new dad, he was badly beaten up, along with the landlord, in a local pub. What had he done? He had been born in Turkey. It was a racist attack.

Hatred online is an economic model. In CAN's recent presentation at the United Nations' Forum on Business and Human Rights in Geneva, we demonstrated how many groups are making money from online hate by producing content they can monetise through the platforms.

What more can we do? Invest time and money in understanding the issues. Over the last year we have found big brands inadvertently funding antisemitic, Islamophobic, homophobic, and white supremacist content across the internet.

When we wrote the Hate Speech manifesto, we worked with the UN's Office of the High Commissioner for Human Rights and Stop Funding Hate to really understand the terminology of dehumanising language. We also took advice from Article 19, a human rights charity defending freedom of expression and information around the world. At CAN we fundamentally believe in free speech within the law of the land, but that speech does not have an automatic right to reach, and it certainly does not have a right to earn vast sums of money.

In terms of child harm, we have been incredibly slow to react, as with most of these issues. The platforms only started dealing with self-harm content, again monetised, after the suicide of Molly Russell. I cannot work out why this was not dealt with when YouTube started finding monetised channels run by paedophiles on its platform. At that point, why did the big tech platforms not start identifying other problematic areas? Self-harm? Suicide? Bullying? Grooming? Comments on videos of young children undressing? Much more needs to be done, by consulting with civil society, to understand the real-world issues that are turbocharged online. Action can then be taken much more quickly, keeping brand and human safety paramount.

With an increasingly fractured political landscape in both the UK and the US, for instance, leading to more divisive rhetoric in social media and political advertising, how can brands advertising on social channels avoid funding such content? Is more action to limit political advertising on social channels needed to curb hate speech?

Brands can avoid funding misinformation and divisive content by signing up to initiatives like CAN and implementing the principles of the manifestos across the whole supply chain. There is also the WFA's Global Alliance for Responsible Media. At this stage it has kicked off a working group to develop and adopt common definitions, to ensure that the advertising industry is categorising harmful content in the same way. The 11 key definitions, covering areas such as explicit content, drugs, spam, and terrorism, will give platforms, agencies, and advertisers a shared understanding of what harmful content is and how to protect vulnerable audiences, such as children. It is vital that senior leaders understand what constitutes hate speech, where the line sits versus free speech, and what the key areas of misinformation are.

As I said earlier, I think most of the industry is calling for regulation of political advertising. Another volunteer group, the Coalition for Reform in Political Advertising, has been doing great work in this space.

Lord Puttnam is chairing a new special inquiry committee in the House of Lords to investigate the impact of digital technologies on democracy. As Keith Weed said in his evidence to the committee: "If commercial advertising needs to be legal, decent, honest and truthful, as I think the public would accept, I would argue that political parties and the Electoral Commission should look to create an environment for political advertising to be held to similar standards."

It is incredible that in 2020 you can say whatever you like in political adverts, put millions of pounds behind them, and spray them all over Facebook with microtargeting tools, unchecked. Facebook can be as transparent as it likes, but we need regulation and legislation as soon as possible.

One of the biggest threats we at CAN see this year is deliberate climate denial, lies, and misinformation in the lead-up to COP26 in Glasgow.

Currently, Google does not have a stance on climate change denial videos, though it does have a policy around videos that contain obvious misinformation, such as 'flat earth' content. Climate denial videos are being monetised right now. This is a major problem.

Not having a policy on climate misinformation makes it entirely possible that the system will be abused again and that can threaten global commitments on climate.

There is a big group of unpaid volunteers shining a light on all this. These are massive structural problems, involving big money, funding activities in the real world that undermine our societies and communities. It is really encouraging to see that regulation is being talked about in the House of Lords and that the WFA’s GARM is taking concrete steps to solve these issues. It is time that the industry cleaned up its act so we can all get our evenings and weekends back.