How are Regulations Shaping Ad Tech in 2026?
28th Apr 2026 in News

We look at how regulations are shaping ad tech at the moment. What are the latest significant changes, and what’s on the horizon for the rest of 2026?
As 2026 unfolds, regulations are one of the driving forces of change.
It looks like the tide could finally be turning for social media giants, with regulators all over the globe taking a firmer hand to protect children from the dangers of their platforms.
Discussions of regulating AI have been keeping legislators busy, but real regulations are still lacking. The sector remains largely unregulated, leaving countless questions surrounding the technology unanswered.
How much progress could we see this year?
Social media
Talk of regulation has hit the mainstream when it comes to social media. What started off as one country taking a leap to ban social platforms for children has turned into a worldwide movement. Australia introduced a full ban for those aged under 16 at the end of 2025; now every week we see another country announce they are exploring similar measures.
Elsewhere in APAC, Indonesia and Malaysia have already introduced bans for under 16s.
In November last year, the European Commission (EC) backed a non-legislative report supporting a social media ban for under 16s. The report was heavily supported: it received 483 votes in favour, with only 92 against, and 86 abstentions. However, no binding legislation has been put into place by the EU, resulting in national governments within the region now outpacing the bloc.
Spain announced its plans to ban social media for under 16s in February, with Prime Minister Pedro Sánchez pledging to protect children against the “digital Wild West”.
Other European countries are considering lower age limits. Greece has announced a ban for under 15s which will come into force at the start of next year. French lawmakers are also in the process of introducing a ban for under 15s. Similar measures are also being prepared in Slovenia.
Meanwhile, Germany is pushing to ban under 14s.
In the UK, a government consultation into the matter is ongoing. It examines a range of measures which include limiting or removing certain features which drive compulsive use. Bans and digital curfews are being trialled on hundreds of teenagers. A few weeks ago, Prime Minister Keir Starmer said “We’ll go through the consultation, but I think I’ll be absolutely clear, things will not stay as they are. This is going to change.”
Today (28th April, 2026), the UK government confirmed its commitment to introduce restrictions.
Following the recent landmark ruling which found Meta liable for the addictive design of their platforms, this may be just the beginning. Could we begin to see legislators move to place more stringent restrictions on social media giants and what types of environments they are allowed to create for all users?
AI
Regulating AI has been a huge challenge for legislators across the globe. From a legal perspective, its rapid rise has been impossible to keep up with.
Regulating AI models
The world’s first-ever legal framework on AI came from the EU: the AI Act aims to foster trustworthy AI within the region. The legislation came fully into force in 2024, with its implementation ongoing. Most rules will become applicable from August 2026.
The AI Act introduced a risk-based framework for AI systems, with obligations scaled to the level of risk. It outright bans manipulative AI, social scoring, and some forms of biometric surveillance, among other practices. Systems deemed high-risk face strict obligations, such as documentation and registration requirements. Chatbots fall under the limited-risk categorisation, facing transparency rules.
When it comes to regulating AI models, many leaders have shied away from introducing legislation, claiming that heavy regulation of the sector could stifle innovation. When former Conservative UK Prime Minister Rishi Sunak was in power, he pushed for a pro-innovation approach to regulating the sector, arguing against rushing regulation for such a nascent technology. Although some might argue this makes sense, this is how legislators fall behind.
While Starmer previously appeared in favour of a more relaxed approach to regulation, he is now planning closer relations with Brussels to align the UK with most of the EU’s regulations.
Following the recent Grok scandal, in which X users used the tool built into the social platform to create explicit images of specific women and children, Starmer also moved to extend online safety rules to AI chatbots.
Unfortunately in the US, President Trump has been pushing the opposite approach. In December, he signed an order aimed at blocking states from enforcing their own AI rules, while he continues to urge narrow regulation.
Some states have gone against his wishes, however. In California, for example, Democratic governor Gavin Newsom signed an executive order giving companies four months to develop AI policies which prioritise public safety.
The ongoing AI-copyright debate
Within the realm of regulating AI, training models on copyrighted material has been a huge point of contention. So far, AI companies have been able to get away with murder. Many publishers, authors, and creatives have taken case upon case to the courts, but with legislation lacking, they’re in a tough position.
The case which really put the issue on the global map was the New York Times’ lawsuit against OpenAI and Microsoft. It was the first time a large-scale publisher sought out billions of dollars in damages for alleged copyright infringement. The case, which was filed back in December 2023, is still ongoing.
The New York Times’ lawsuit focused on whether AI training itself is lawful. As copyright battles play out in the courtroom, the operative question has been: is it fair use?
Whatever your view may be, there is still no definitive legislation on the matter to back up either side. We have yet to see any country establish a clear prohibition on using copyrighted material to train AI models.
The UK government has been trying to tackle the situation, though their attempts have ended up sparking outrage from the creative community, with figures such as Elton John being highly critical. In March, the government announced that it would be delaying any decision on the matter. Ministers are gathering further evidence, having extended their discussions about regulatory options. As a result, legislation on the matter could be postponed until next year.
In the EU, the European Commission points to the General-Purpose AI Code of Practice for guidance on copyright matters; however, compliance is voluntary.
Where social media, AI, and rights management converge
The intersection of social media and AI brings about even more regulatory problems which are yet to be addressed.
According to Mikhail Hanney, managing director UK of agency Pulse Advertising, rights management – which he refers to as “the thread nobody’s fully pulled yet” – is an issue causing considerable confusion.
“Being in the room when TikTok announced their AI avatars made it viscerally clear: when platforms can generate hyper-realistic content themselves, the long-term value of human-led storytelling is genuinely up for debate. UK regulators are still catching up,” he commented.
Mikhail expands on the complications surrounding the rights of AI avatars: “If an AI avatar mirrors a creator's likeness, who gets compensated? If a brand owns organic usage rights, how far does that actually go?”
Countless questions arise, and we don’t have any answers yet.
Cracking down on Big Tech
Europe has been leading the way when it comes to cracking down on Big Tech. Adopted in 2022, the Digital Markets Act (DMA) targets ‘gatekeepers’ like Apple, Amazon, Meta, and Google, aiming to make the digital sector fairer and more contestable. With the first fines issued in mid-2025, enforcement is now in full swing.
Among the biggest cases developing this year are the Google Android interoperability case and Google’s search data sharing case. For the former, the EC is now seeking feedback on measures to ensure Google’s interoperability; the tech giant has until 13th May to submit its views on the draft measures. For the latter, the EC is currently seeking feedback from third parties (any citizens, companies, and organisations directly affected by the scope of the proceedings) on the proposed measures.
Looking ahead
Ad tech continues to be heavily shaped by regulation, from well-established to newer legislation.
The EU is intensifying its efforts to regulate Big Tech as it doubles down on rules outlined in its Digital Markets Act.
On a global scale, regulation of the social media landscape is becoming wildly different from what we knew just a year ago.
Meanwhile, AI continues to lack oversight. While some regulation has been put in place, it’s not enough. From publishers looking for protections to the debate over the rights surrounding social media AI avatars, the industry currently has more questions than answers. Hopefully, the next couple of years will bring more clarity to these debates.
With AI running largely unregulated when Big Tech is in control, legislators need to keep their finger on the pulse.