In this exclusive article for ExchangeWire, Mattias Spetz (pictured below), MD EMEA of Channel Factory, proposes that public opinion on advertising can only be raised if contextual intelligence is deployed alongside privacy compliance and the limitation of ad bombardment.
“A good thing, with some downsides” - that is the Advertising Association’s current interpretation of the public’s view of advertising, and the message that resonated throughout the recent IAB Trust Conference.
The trade body’s glass-half-full assessment is based on research unveiled last March, which found that people see advertising as informative, entertaining, and necessary to fund content. But the study also confirmed that ads are perceived as a bombardment - too many of them, too frequent, too obtrusive, too often irrelevant - and that privacy, sharp practice, and unethical advertisers are common concerns.
For all the balancing of pros and cons, the reason the AA was asking these questions is that public trust in advertising has fallen from 50% favourability in the early 1990s to just 25% in 2018, when it found itself ranking beneath industries including retail, telecoms, energy, and even banking.
As the AA certainly knows, and is aiming to address, such a lack of faith in advertising is not the kind of issue the industry can afford to kid itself about; not when 81% of UK consumers recently said the ability to trust a brand to do what is right can be a deciding factor or deal-breaker when they are making a purchase.
There is a plan to turn the situation around, with the cooperation of industry players. The AA’s goals are to reduce excessive ad frequency, tackle retargeting, ensure good data practice and show that advertising can drive social change.
Platforms such as Facebook, often a focus of criticism, talk of issuing penalties for obnoxious advertising, attacking clickbait and bombardment, and urging transparency from advertisers.
But restoring trust is about more than just bombarding less, abiding by the data rules and making ethical creative. And with all due respect to Facebook, experience tells us that brands can’t necessarily leave it to platforms to police themselves.
In a wild online world, none of those efforts count for much if brands’ contextual intelligence, including their brand safety and brand suitability measures, is not in good shape.
Brand suitability - the process of ensuring that the content surrounding a brand’s advertising fits with its values and marketing aims - doesn’t just protect a brand, but is an essential tool to sharpen its advertising.
If ads are genuinely relevant to content, they are more effective. We know, for instance, that contextual alignment can drive up to 50% higher ad recall, according to Google's 2017 Market Insights study in EMEA. Put the ads in the appropriate places, and you begin to chip away at the AA’s four key areas of consumer irritation - volume, repetition, obtrusiveness and irrelevance. The reward of such contextual intelligence is that where ads are relevant, you need fewer of them, less often, trying less hard to be noticed.
The right brand suitability tools can address these issues and others - local languages and cultures, local advertising standards, and subtle yet disastrous content juxtapositions, such as car ads appearing alongside social drinking content. There is extensive nuance in brand suitability. We saw this just in the past week, with Unilever’s SVP of global media, Luis Di Como, talking about investing in brand-suitable content and the need to categorise content or withdraw investment from certain styles of content.
Brand safety, too, is an essential aspect of regaining consumer trust. If advertisers are feeding unscrupulous publishers and conspiracy entrepreneurs with their digital ad budgets, they are contributing to the kind of problems that have the power to blight our societies. Samsung and L’Oreal were the latest victims of media planning oversights, running on videos promoting climate change denial. Whatever your stance on whether such videos should be on YouTube in the first place, media plans do better to cast a wide, proactive net.
Do you know who is behind the sites on which you are advertising? Do the algorithms that place your ads have a keen eye for the quality of the journalism they are supporting? If the answer is no, you may not be as trustworthy as you would hope. And consumers can tell - according to TAG/BSI research, more than 82% of consumers indicated they would reduce their spending on products advertised in unsuitable contexts.
Equally, brand safety requires a subtle balance, and over-blocking generates its own issues. Be too heavy-handed with your block-lists and you run the risk of demonising good content that does no harm and needs support.
According to national newspaper marketing body Newsworks, Peppa Pig, Paddington Bear, sperm whale and Star Wars have all been known to trigger over-cautious block-lists, at a time when publishers are desperately attempting to sustain their online advertising business. Whether it’s inadvertently blocking sports coverage by blacklisting the word “shoot”, or marginalising LGBTQIA+ communities by blacklisting “lesbian” content, the outcome is the same. Block too much and you keep your brand clean at the cost of boycotting the publishers, platforms and consumers you need.
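The over-blocking failure mode is easy to demonstrate in miniature. The sketch below is purely illustrative - the keyword list and page snippets are hypothetical, not any vendor's actual block-list - but it shows how a naive keyword match flags harmless sports and community journalism right alongside nothing objectionable at all:

```python
# Illustrative only: a crude keyword block-list of the kind that produces
# over-blocking. Keywords and page snippets are hypothetical examples.
BLOCK_LIST = {"shoot", "lesbian"}

def is_blocked(text: str) -> bool:
    """Flag a page if any block-listed keyword appears as a word in it."""
    words = text.lower().split()
    return any(keyword in words for keyword in BLOCK_LIST)

pages = [
    "Striker steps up to shoot the decisive penalty",  # sports report
    "Interview with a lesbian rights campaigner",      # community journalism
    "Recipe: a simple weeknight pasta",                # unambiguously safe
]

for page in pages:
    print(f"{'BLOCKED' if is_blocked(page) else 'allowed'}: {page}")
```

The first two pages - legitimate sports and LGBTQIA+ coverage - are blocked in exactly the way the Newsworks examples describe, which is why contextual classification of the whole page, rather than bare keyword matching, is the more suitable tool.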
Brands undoubtedly need to do all the things the AA recommends, and the efforts of tech platforms to clean up the digital space can only be encouraged. Trust in advertising - already ambivalent in the ’90s - halved over the following 25 years, a downward trajectory that no-one wants to see arrive at its logical conclusion.
So bombard less, espouse good causes, do your due data diligence and cut no corners. But match that dedication with an equal commitment to contextual intelligence and we might just have a solution to the trust problem.