Since ChatGPT launched last November, interest in artificial intelligence has skyrocketed, with discourse around its potential impact on everything from healthcare to dating. Wherever artificial intelligence is being discussed, however, one overarching theme persists: privacy.
AI systems need copious amounts of data to operate effectively, and given Big Tech’s chequered history of scraping, exploiting, and mishandling people’s data, it’s no wonder there’s trepidation over the privacy implications of AI. How tech giants are able to handle users’ data is changing, however, and perhaps the privacy wave sweeping the tech industry can reassure worried netizens that AI’s use will be regulated. In an op-ed for The New York Times, FTC chief Lina Khan claimed the watchdog is “well equipped with legal jurisdiction” to control the use of AI and, earlier this month, the UK’s ICO warned AI developers to address privacy concerns before taking their products to market. The assertion of Stephen Almond, the ICO’s executive director of regulatory risk, that “there can be no excuse for ignoring risks to people’s rights” may signal that AI will be held to a higher privacy standard than any Web2 technology before it.
Despite the possible protection tighter regulations could provide, it appears AI tools are as vulnerable to data breaches as any other technology; just this week, it was revealed that over 10,000 compromised ChatGPT accounts were found for sale on the Dark Web, raising serious privacy questions for parent company OpenAI. Reports that tech giants including Apple and Samsung have restricted their employees’ use of generative AI over data-handling concerns may also demonstrate the technology’s potential threat to privacy.
To better understand the privacy implications of artificial intelligence, we turned to industry experts.
We need to set the ground rules for responsible AI
AI and privacy have a love-hate relationship – but mostly hate. Personal data is the fuel that powers the vast majority of customer-facing AI models, and large language models (LLMs) like ChatGPT present the same privacy quandaries as ubiquitous search and social media algorithms. Algorithmic discrimination, psychological manipulation, misinformation and social polarisation are making us dumber, angrier and sadder.
Decoupling the convenience of AI from its sinister social impacts is difficult, but doable. Privacy-preserving AI like federated learning, transparency over ML models (explainability), and enforceable red lines on data collection (e.g. sensitive data) can mitigate negative impacts. We need to set the ground rules for AI to develop safely and encourage innovation that has a net positive impact on society. If we leave it to the engineers, we may soon be living in the Matrix. If we leave it to the lawyers, we’d still be riding horses. We all need to pitch in.
Mattia Fosci, founder & CEO, Anonymised
Invasive profiling and micro-targeting a risk
Developments in AI will significantly impact privacy, particularly in media and marketing, where AI-powered systems already excel at collecting, analysing, and interpreting vast amounts of user data. This raises concerns about safeguarding personal information beyond first-party data and digital footprints. With increasingly sophisticated AI algorithms, invasive profiling and micro-targeting become a risk, leading to an erosion of trust.
Striking a balance between leveraging AI for advertising and protecting privacy is crucial, especially as synthetic content is predicted to dominate by 2026, fostering misinformation. Transparent data governance, robust consent mechanisms, and ethical AI practices are essential for CMOs and senior leaders to maintain consumer trust. Collaboration among governments, tech platforms, publishers, brands, and agencies is necessary to harness AI's benefits while safeguarding society from harm.
Aurelia Noel, head of innovation and transformation, dentsu X
Privacy will prevail, with or without AI
Technologies that rely on AI are designed to learn through the analysis of vast amounts of data, much of which is personal data. As a result, the recent emergence of AI tools is raising concerns about data protection. However, privacy has already fundamentally changed the overall tech industry, as evidenced by the ethical and regulatory challenges it has created in the advertising ecosystem.
Users’ rejection of data sharing and the resulting tightening of regulations are forcing companies to put consumer privacy at the core of their model. Whether they leverage AI or not, only companies that do so will thrive, as the privacy wave is unstoppable.
Geoffroy Martin, CEO, Ogury
Balancing progress with privacy is crucial for AI
While AI has the potential to enhance security in the future, the current wave of accessible AI is posing significant privacy challenges. ChatGPT itself comes with a cautionary note, advising users not to disclose sensitive information. In fact, major companies like Amazon, JP Morgan, and Goldman Sachs have taken measures to either ban or restrict the use of ChatGPT among their employees, highlighting the gravity of privacy risks.
ChatGPT is just one example; another is the case of Clearview AI in the United States, which was banned from selling its facial recognition database to private entities. The potential ramifications of such a sale are profound, as it would enable mass surveillance through facial recognition on an unprecedented scale. A world in which private companies monitor private citizens and decide for themselves what to do with that data is not one I want to live in. It’s all a little too Black Mirror for me!
As we embrace the possibilities offered by AI, it is so important that we address the privacy implications head-on. Balancing technological progress with the preservation of individual privacy is paramount.
Niamh Linehan, partnerships & content director, Women of Web3
The real concern with AI is who controls it
AI has unleashed an unprecedented technological power to collect, process, and analyse vast amounts of personal data scraped off the internet, which can be used for purposes both positive and negative: from medical research that saves lives to mass surveillance and oppressive, invasive state control.
The real concern with AI is who controls it, as well as the legal basis upon which the AI tool processes people’s data, the mechanism allowing individuals to control how that data is used, and people’s right to have their data deleted or corrected. This is about our individuality. In truth, the battleground is over whether AI controls us, or whether we control it.
Flavia Kenyon, Barrister, 36 Commercial
AI is an inherent threat to privacy
AI will benefit many things, but certainly not privacy. It’s hard to imagine a technology built on an insatiable appetite for data not being a significant privacy risk, and if the revenue model ends up being advertising, AI’s threat to privacy will be all the greater. Considering the history of the companies who have the resources and business imperative to advance consumer artificial intelligence, advertising seems like an inevitable monetisation plan. As AI is so new, however, we do have a window of opportunity to prevent AI from taking the same path the internet and search did.
Soren H. Dinesen, CEO, Digiseg
Transparency will mitigate any threat from AI
AI is simply looking at historic data, identifying patterns and making predictions. As long as it’s used on privacy-compliant data and in the spirit of GDPR, there should be nothing to worry about. Unfortunately, there will be both scaremongering and bad actors who get ahead of the legislative curve, so anyone using AI needs to be able to explain what data is feeding their models and how it’s being used.
Jonny Whitehead, strategy director, Skyrise
AI will enable targeted advertising with minimised privacy risk
AI will have a profound impact on privacy, and for the better. With AI enabling us to unlock actionable patterns and connections previously unknown, the need to use personally identifiable information (PII) will become minimised. Let’s face it, if artificial intelligence can expertly predict who will get pancreatic cancer, then predicting who needs mayonnaise, insurance and a new car is not exactly a challenge for it, opening up the potential to serve targeted ads which don’t rely on people’s PII.
Drew Stein, co-founder & CEO, Audigent