
Brand Safety: When Ad Dollars Fear Headlines, Not Harm

In this column for ExchangeWire, Shirley Marschall grapples with the brand safety quandary and how AI technology could be leading the industry down the path of weaponised addiction.

We need to talk about brand safety. No, we actually need to rethink brand safety… drastically.

Because in what world does it make sense that advertisers avoid premium news publishers, obsessing over "risky" headlines, yet happily throw billions at platforms with a track record of scandals, toxicity, and ethical black holes?

Let’s skip the full list of scandals; it would be too long anyway. We’re talking about the same platforms repeatedly tied to child grooming bots, identity theft, mental health crises, racism, political disinformation… No need to name names; the headlines speak for themselves.

Sometimes it was the AI chat. Sometimes the AI companion. Sometimes just the corporate culture, or lack of it.

Yet advertisers remain terrified their ad might appear next to newsworthy content while showing no hesitation placing ads on platforms with long histories of harming the very audiences those brands want to reach.

Again, in what world does this make sense? Seriously!?

Brand safety itself isn’t new. It’s been part of digital advertising since the very beginning, when brands first realised their ads could appear in places they’d rather not be associated with. But somewhere along the way, it became a box-ticking exercise, focused on the micro, not the macro… on adjacency, not environmental accountability. The rules policed whether ads appeared next to "risky" content, not whether platforms themselves acted ethically or safely.

As Noah Giansiracusa, visiting scholar at Harvard and author of Robin Hood Math, puts it: "advertisers grapple with the fact that when they place an ad on a platform, they are financially supporting that platform and hence all the material on it, not just the content their ad appears next to." 

The result? Quality news outlets got cut off, while platforms with a track record of real harm kept cashing in. Brand safety became risk-averse where it didn’t matter, and reckless where it did. 

Ad budgets built the addiction economy

Is there such a thing as dirty attention? Apparently not…

Because the bigger issue isn’t just that ad dollars chase clicks; it’s that they bankroll the addiction economy. Prof. Scott Galloway calls these platforms "weapons of mass addiction," and he’s not wrong.

And now, instead of using AI to solve the advertising industry’s hardest problems (transparency, relevance, privacy, pick one), companies are racing to make the technology stickier, more compulsive, engineered for engagement at any cost.

AI companions are the final stage of digital addiction, hooking people deeper than social media ever could. They make the attention economy look like a relic.

Generative AI chatbots become digital opiates: always available, always agreeable, increasingly personalised. The "AI girlfriend" trend, flirty voice bots, hyper-addictive short-form content… all beautiful examples of AI as synthetic comfort rather than genuine transformation. 

And the metrics? They love it! Hyper-addictive content loops keep audiences scrolling, swiping, clicking, sometimes causing real harm, and the dashboards reward every second of it.

And advertisers, media agencies, JBPs, the budget owners and planners? They’re complicit in all of it. Budgets keep flowing because performance metrics treat every click, every view, every impression, every second of attention exactly the same, whether it comes from quality journalism or a scandal-ridden platform engineered for outrage, no matter how addictive or ethically murky the environment behind it might be.

The incentive structure doesn’t change. Why would it?

Herd behaviour: safety in numbers, until it isn’t

So far, so good, right? Some might roll their eyes at the word 'ethical' (this is capitalism, after all), but most probably nod along, at least a little.

Then why does the money keep flowing? Well, because addiction isn’t the only thing hardwired into us; following the crowd is just as deeply rooted in human psychology.

Behavioural science calls it herd behaviour: when rational people, acting together, create irrational outcomes, resulting in bubbles, panics, and wasted capital.

Warren Buffett put it more bluntly: "Be fearful when others are greedy and greedy when others are fearful."

Call it herd behaviour, call it greed, call it FOMO; it doesn’t matter.

Fact is, digital advertising follows the exact same logic. Budgets keep chasing reach and "cheap CPMs" even when the environments are toxic. Agencies hesitate to step off the performance treadmill because they fear losing clients. And CMOs worry about explaining a short-term dip in clicks even if the long-term brand risk is worse. 

It’s safety in numbers, until the numbers stop working.

Ethics? Maybe next quarter… Maybe never

So here we are: a digital ad industry terrified of adjacency risk in quality news but perfectly comfortable bankrolling an attention economy built to keep audiences hooked.

Maybe it’s capitalism. Maybe it’s herd behaviour. Maybe it’s both.

So maybe nothing changes. Maybe we keep obsessing over adjacency risk in quality news while funding platforms engineered for infinite engagement. Maybe the next frontier will be ads inside AI companions, or "brand-safe" placements in whatever new, stickier network comes next.

And for those rolling their eyes at the mere mention of the word "ethical," here’s a question:

Why is it so easy to imagine AI terminators or any other sci-fi dystopia taking over the world but almost impossible to picture a world where AI, social media, platforms, and algorithms are designed to be ethical, or even just a little less addictive and harmful?