As artificial intelligence has grown in prevalence, so too has a long-running problem for the ad industry: junk websites.
Although junk on the internet is not a new issue – the term “clickbait”, for example, has been in use since 2006 – discourse around made-for-advertising (MFA) sites remained muted for years. That all changed, however, when a June report from NewsGuard identified 49 news and information websites “that appear to be almost entirely written by artificial intelligence software”. By August, this figure had surpassed 400.
Beyond generating “bland language and repetitive phrases”, NewsGuard identified that some of the sites published hundreds of articles a day, with topics spanning politics, entertainment, health, finance, and technology. Disconcertingly, some of these sites were found to “advance false narratives”, exacerbating fears that AI could be used for malicious purposes.
This is, understandably, alarming for internet users at large – but what does it mean for ad networks?
“Junk sites can capture a real human audience”
Critically, a number of the AI-fuelled websites identified by NewsGuard were found to serve advertisements. In fact, NewsGuard co-founder Gordon Crovitz told The Wall Street Journal that some of the sites appeared to have been created specifically to profit from Google’s advertising network – and were succeeding. “The challenge with AI-written junk sites is that ads on them might actually perform,” explains Jaysen Gillespie, head of analytics & data science at RTB House. “Unlike fraud using non-human traffic, the junk sites can capture a real human audience.” This capacity to attract genuine readers is what gives these potentially dangerous sites their influence.
The potential drawing power of AI-generated sites poses some critical issues for ad tech, perhaps the most obvious of which is brand safety.
It goes without saying that no brand wants their ads to appear alongside dangerous, misleading, or even inappropriate content, and AI-generated junk websites appear to be a triple threat. Although the key responsibility of an ad network is, ostensibly, to secure buyers for unsold ad space and pair advertisers with inventory suited to their budget and audience, could their responsibility extend beyond this? As ad networks already require an element of human intervention, does it fall on them to analyse ad space for the potential of junk content? Yang Han, co-founder & CTO of StackAdapt, believes so, stating “one of the most important responsibilities of ad networks is ensuring that the publishers in their ecosystem are vetted for quality, brand safety, and original content.” Han argues this responsibility extends to AI-generated websites, placing particular emphasis on the issue of originality: “AI powered websites do not provide any differentiated value, as AI algorithms typically stem from a number of limited available APIs available on the internet.”
Gillespie, however, argues that “the impetus for removal of a junk site from an ad network must come from advertisers who don't want the brand adjacent to questionable content.” Considering that not all advertisers use ad networks, it does seem logical that advertisers themselves should take the reins in keeping their ads away from dubious websites.
Pamela Ibarra, VP at InMobi, argues that, regardless of who bears responsibility, the moderation of MFA sites – AI-powered or otherwise – should be nuanced. Ibarra notes, “A link farm is not that useful, but a carousel of individual pages, say, with Taylor Swift outfits and an ad on each, who is to say if that's bad or good? If the user continues to click on each new page, they are showing intent, so is monetisation negative?” Ibarra concludes that “the big issue with all these nuisance sites is the threat to user experience, which is the bane of advertising.”
“A propitious time for forward-thinking ad networks”
Be that as it may, taking a proactive approach to examining AI-powered junk sites may be a worthy endeavour for ad networks. Currently, an estimated 21% of ad impressions go to made-for-advertising sites, at a cost of ~USD$13bn (~£10.2bn) globally. If ad networks were to take on the responsibility of identifying and cutting off dangerous sites, they could help to guarantee the quality of the pairings they make – which could be critical for preserving their reputations. StackAdapt’s Han argues, “allowing monetisation of [AI-generated junk] websites puts the internet at risk of being submerged with indistinguishable low-value content”, which could dilute the quality of ad inventory to the detriment of brands and advertisers alike. Han emphasises that a lack of intervention could spell trouble for ad networks, with the proliferation of AI-powered MFA sites possibly undermining “the confidence of ad networks to offer high quality visitors.”
RTB House’s Gillespie concurs, putting forward the case that AI-fuelled junk sites could provide an opportunity for ad networks to refocus on the quality of the advertising inventory they work with. “Ad networks have historically striven to maximise reach,” Gillespie notes. “Now would be a propitious time for forward-thinking ad networks to proactively lean into quality improvement first and reach second.”
Ultimately, the regulation of AI poses a gargantuan issue for the tech industry and beyond, and it is highly unlikely that ad networks can offer a definitive solution. However, as the incentive behind AI-generated sites is monetisation, ad networks hold a degree of power. By removing this incentive, they can help to preserve the integrity and quality of content on the internet – a feat which governments and businesses may struggle to fulfil for some time to come.