Who Takes Legal Liability?

Last week (18 May), UK law firm Kemp Little held its annual conference, 'The Fourth Industrial Revolution… or Evolution'. In this piece, we explore two of the themes discussed: how Artificial Intelligence (AI) makes knowledge dynamically scalable, and who takes legal liability for programmatic advertising.

Dynamic scalability of knowledge

The concept of ‘dynamic scalability of knowledge’ can be explained by looking at what happens when a human-driven car and a driverless car collide. Following the accident, both parties will learn from the experience; the difference is that the driverless car also teaches every other driverless car what it has learnt.
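
A toy sketch of that idea in Python: one agent writes what it has learnt to a shared model, and every other agent reads from the same model. The classes and the ‘lesson’ payload are invented purely for illustration.

```python
# Illustrative sketch of 'dynamic scalability of knowledge': one agent's
# lesson is written to a shared model that every other agent reads from.
# All names and the lesson payload are hypothetical.

class SharedModel:
    def __init__(self):
        self.lessons: list[str] = []

    def learn(self, lesson: str) -> None:
        self.lessons.append(lesson)  # one update, visible to the whole fleet


class DriverlessCar:
    def __init__(self, fleet_model: SharedModel):
        self.fleet_model = fleet_model

    def experience_collision(self, lesson: str) -> None:
        # The car that had the accident learns...
        self.fleet_model.learn(lesson)

    def known_lessons(self) -> list[str]:
        # ...and every car in the fleet immediately knows the same lesson.
        return self.fleet_model.lessons


model = SharedModel()
car_a, car_b = DriverlessCar(model), DriverlessCar(model)
car_a.experience_collision("brake earlier on wet cobblestones")
print(car_b.known_lessons())  # car_b never crashed, yet it knows the lesson
```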

The same thing is happening in media planning and buying. When a bidder makes a decision and a feedback loop returns performance data, the bidder learns and can apply that learning to every decision it makes in the future – at scale, across millions of opportunities.
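
Here is a minimal sketch of that feedback loop: a bidder that updates its estimated conversion rate from observed outcomes and prices future bids accordingly. The update rule, class name, and every number are assumptions for illustration, not any vendor's actual algorithm.

```python
# A minimal sketch of a bidder feedback loop: each observed outcome
# updates the conversion-rate estimate, and the estimate prices every
# subsequent bid. All figures are illustrative assumptions.

class FeedbackBidder:
    def __init__(self, value_per_conversion: float, learning_rate: float = 0.05):
        self.value_per_conversion = value_per_conversion  # value of one conversion
        self.learning_rate = learning_rate                # weight of each new observation
        self.estimated_rate = 0.01                        # prior conversion-rate estimate

    def bid(self) -> float:
        # Bid the expected value of the impression given current knowledge.
        return self.estimated_rate * self.value_per_conversion

    def record_outcome(self, converted: bool) -> None:
        # Feedback loop: fold the observed outcome into the estimate via an
        # exponential moving average, so future decisions benefit from it.
        observation = 1.0 if converted else 0.0
        self.estimated_rate += self.learning_rate * (observation - self.estimated_rate)


bidder = FeedbackBidder(value_per_conversion=2.50)
for outcome in [False, False, True, False, True]:  # simulated campaign feedback
    bidder.record_outcome(outcome)
print(f"Updated bid after feedback: £{bidder.bid():.4f}")
```

The design point is the loop itself: each outcome changes the estimate, and the estimate shapes every bid that follows.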

Legal liability for programmatic advertising

The Association of National Advertisers (ANA) and the Incorporated Society of British Advertisers (ISBA) have been fighting for advertisers’ right not to pay for fraudulent impressions and clicks.

This is because the advertiser is not deemed responsible for having exposed itself to the fraudsters. So, who is responsible?

There are several tech vendors in the frame: the demand-side platform (DSP) that placed the bid in the impression auction; the ad exchange that accepted the impression and created the auction; and the supply-side platform (SSP) that accepted the impression from the publisher.

Or is it the publisher who should accept liability? Or the person who set the bid parameters in the DSP? Or the entity that person works for – the media agency, for example?

Each of these candidates presents a compelling claim that the responsibility is not theirs, but someone else’s. Furthermore, we often conclude that responsibility should be divided throughout the chain, with every player doing their bit to help ‘clean up the ecosystem’.
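
What would ‘divided throughout the chain’ look like in practice? A purely hypothetical apportionment, with weights and amounts invented for the example – real apportionment would be a contractual matter:

```python
# Hypothetical split of the cost of fraudulent spend across the supply
# chain according to agreed weights. Weights and amounts are invented.

fraudulent_spend = 10_000.00  # £ spent on impressions later judged fraudulent

responsibility_weights = {
    "media agency": 1,
    "DSP": 2,
    "ad exchange": 2,
    "SSP": 2,
    "publisher": 3,
}

total_weight = sum(responsibility_weights.values())
for party, weight in responsibility_weights.items():
    share = fraudulent_spend * weight / total_weight
    print(f"{party}: £{share:,.2f}")
```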

Vendor-versus-vendor responsibility is, quite frankly, not that interesting. What is fascinating is the division of blame between humans and AI. There is an argument that you can’t simply blame the AI every time: yes, the AI is making the decision, but humans instructed it.

Put simply, the promises vendors make about ‘machine learning’ have stretched beyond reality; these systems still act on human instruction.

If AI is going to become more commonplace in service delivery generally – and keep in mind that ad tech is further ahead here than many industries – then it is unreasonable, and unscalable, for each AI company to bear the cost of every bad decision itself.

Enter: the insurance company. Will we start to see insurers offering protection for agencies, brands, and vendors to cover the costs and liability should an AI make a bad decision? How will underwriters calculate a ‘tolerable range of accuracy’? How will this affect contract negotiations? How will policies adapt as the AI learns and becomes less likely to make mistakes? And a final question: could these insurance policies be underwritten and delivered by an AI system? If so, who would insure the AI creating the policies?
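
To make the underwriting question concrete, here is a back-of-the-envelope sketch. The pricing rule (expected loss plus a risk loading) and every figure are assumptions for illustration; real actuarial pricing would be far more involved.

```python
# Back-of-the-envelope sketch of pricing cover for AI decision errors:
# premium = expected loss * (1 + risk loading). All figures are invented.

def annual_premium(error_rate: float, decisions_per_year: int,
                   loss_per_error: float, loading: float = 0.25) -> float:
    expected_loss = error_rate * decisions_per_year * loss_per_error
    return expected_loss * (1 + loading)

# As the AI learns and its error rate falls, the policy could be repriced.
for year, error_rate in enumerate([0.0010, 0.0006, 0.0003], start=1):
    premium = annual_premium(error_rate, decisions_per_year=5_000_000,
                             loss_per_error=0.40)
    print(f"Year {year}: error rate {error_rate:.4%} -> premium £{premium:,.2f}")
```

Note how the premium falls as the error rate does – one plausible way a policy could adapt as the AI learns.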