Regulating Social Media: Where do we go from here? 

We examine the harmful nature of social media platforms today, which are designed for addiction and prioritise engagement at any cost. As countries around the globe move to impose restrictions on children’s access to social media, what are the most appropriate measures? 

Social media has become such a dominant force in our lives that it’s hard to imagine a world without it now. 

As time has progressed, the risks have intensified and multiplied. What were once much less threatening environments are now spaces rife with risk, from misinformation to radicalisation. 

Finally, legislators are taking action. Australia took the first leap, introducing a full ban on all social media platforms for children aged under 16. 

The ban has inspired movements all over the globe. Elsewhere in the APAC region, Indonesia and Malaysia have similar plans. In the EU, MEPs in the European Parliament agreed that children should be at least 16 years old to access social media. From Spain to Slovenia, countries across Europe are planning to introduce bans. 

In the UK, MPs recently voted against an outright ban for under 16s. The introduction of measures is still on the horizon though, with the Commons having backed giving additional powers to the secretary of state to introduce more flexible regulation.

A consultation into the matter, launched by the Labour government at the start of March, is currently ongoing. It will examine a range of measures, such as limiting or removing certain features which drive compulsive use. Alongside this, bans and digital curfews will be trialled on hundreds of teenagers across the country. 

Designed for addiction

Long gone are the days when Instagram would display the message: “You’re All Caught Up!” That was it, you’d come to the end of updates on your feed. 

Things are dramatically different now: social media algorithms are engineered to be addictive by design, keeping users hooked and chasing a hit of dopamine with every scroll. 

One of the most harmful features introduced in recent years is the infinite scroll, which lets a user scroll forever, literally. It doesn’t take an expert to see why that is dangerous. 

Describing the feature, Arturo Béjar, a Meta whistleblower who worked within a child safety department at the company until 2021, said “the promise of these things is that there is always going to be something interesting and rewarding and there is a never-ending supply. That is the mechanic of infinite scroll.”   

This was one of the key features blamed for hooking children in the first social media addiction jury trial, which concluded last week and found Meta and YouTube liable for intentionally building addictive platforms. Snap and TikTok were also initially named as defendants, but they reached settlements with the plaintiff before the trial. The ruling marks a landmark moment: platforms have been found liable for their harmful design.   

Attention over everything 

Social media giants appear to have done everything in their power to hold onto user attention, at whatever cost. Inside the Rage Machine, a documentary recently released by the BBC, explores a series of problematic decisions made by Meta following the growth in popularity of its biggest competitor, TikTok. 

Through internal testing, it became clear that misleading, negative, and harmful content was generating the most engagement. Yet, Meta’s senior management instructed employees to present more of this type of content to users in order to raise engagement levels. The justification given was simply “because the stock price is down”.  

Matt Motyl, a whistleblower who worked as Meta’s senior staff researcher from 2019 to 2023, described his team’s opposition to the instructions: “We warned against this in a document that we delivered to the team and that was read to Zuck[erberg] in advance of launching these broadscale experiments…and that was all ignored. We warned, very sternly, this is not going to go well.” 

As competition with TikTok intensified, Meta launched Reels in 2020. Motyl also described the user safety challenges of introducing a new feature: the risk is elevated because the supporting safety infrastructure doesn’t yet exist. “It’s either completely absent or it’s very immature, so it’s hard to prepare sufficiently in advance of that launch,” he explained.  

Reels did raise engagement, but for the worse. Internal documents later revealed that comments on Reels had a higher prevalence of hostile speech than comments on posts distributed through the feed: bullying and harassment were 75% higher, hate speech 19% higher, and violence and incitement 7% higher. The documents show Meta acknowledged its struggle to prevent harm to users on Reels. 

“In order to launch something that’s going to protect people from some kind of harm in Reels or in Feed, you have to convince the team that owns Feed, or that owns Reels, to sign off on the product change that you want. But there’s this power imbalance. They have incentives to not let those products launch, because toxic stuff gets more engagement than non-toxic,” Motyl commented. 

Meta was unwilling to protect users, for the sake of higher engagement. Motyl acknowledged that “there’s a common tradeoff between protecting people from harmful content and engagement.” 

Adding AI into the mix 

Add generative AI into the mix and the situation becomes even more dire. Elon Musk’s X has produced some of the worst cases, the most alarming being the recent wave of users employing its Grok AI tool (integrated within the app) to create explicit images of non-consenting women and children. xAI, Musk’s AI company, is now facing a lawsuit from teenagers whose likenesses were harmfully recreated. 

From a wider perspective, generative AI’s arrival to social media raises many questions which are yet to be answered. Mikhail Hanney, managing director of UK agency Pulse Advertising, expands on this issue. “I was recently on a panel where we were discussing AI in social media, and the questions that stumped the room were the ones that should be industry standard by now,” he recalled. “We don't have clean answers.”

Legislators are making strides now, but they’re still far behind.  

Examining the evidence so far 

How well is the ban in Australia actually going? 

It is still early days for the ban, and extensive data has not yet been collected. Australia’s eSafety commissioner has announced the start of a two-year evaluation that will follow more than 4,000 children and families to assess its success. 

However, data released a couple of weeks ago found that one fifth of Australian teenagers are still using social media, two months after the ban took effect. Other findings published this week indicate that 70% of those aged under 16 who used social media before the ban had maintained access.

Instagram, Facebook, Snapchat, TikTok, and YouTube are now under investigation for not complying with Australia's ban. Anika Wells, the country's communications minister, claimed that the companies have not been doing enough to enforce it. “None of this is impossible. None of this is even difficult for Big Tech who are innovative billion dollar companies. What this update shows is unacceptable,” Wells stated.

For some anecdotal accounts, the Guardian spoke to teenagers in Australia about the ban. Sarai Ades, aged 14, described how easy it was to get around. “We all just deleted some of our old accounts a couple of days before the restrictions kicked in and created new ones with fake birthdays about a week after,” Ades said.

She explained that she created new accounts on TikTok and Snapchat, and that Instagram hadn’t actually flagged her old account as underage yet. She also mentioned that nobody she knows was subjected to facial recognition when they opened a new account. 

Ades now has unfiltered access to the platforms, which believe she is over 18. “I definitely have more videos coming up on my feed around geopolitical instability and more violent coverage. It was really shocking at first and I wasn’t at all prepared,” she revealed. Interestingly, she said that this has made her experience on social media more positive overall, which she attributes to being exposed to new information, including more political issues, debates, and opinions. 

Ewan Buchanan-Constable, another teenager (aged 15) who spoke to the publication, said that not all of his accounts had been flagged, so “nothing much has changed”. He couldn’t remember whether he had initially input his correct birthday. Like Ades, he felt that the ban “was easy to get around if you wanted to”. 

To ban or not to ban? 

With the long-term outcome of Australia’s ban yet to be seen, it is not clear-cut for legislators elsewhere which measures are most appropriate to introduce. 

One of the principal factors that pushed UK MPs to vote against a full blanket ban on social media for under-16s was the recommendation of organisations such as the National Society for the Prevention of Cruelty to Children (NSPCC), which advised that a full ban could drive teenagers into darker, less regulated corners of the internet. 

This line of thought is followed by Lisa Morgan, Generation Media’s managing director, who believes that an outright ban simplifies a complex issue. “Social platforms are now deeply embedded in how young people communicate, learn and develop identity,” she expanded. “Removing access entirely may not reduce risk, but could simply push under-16s towards less regulated digital spaces where safeguards, moderation and parental oversight are weaker.”  

Morgan feels that “the more effective path is evidence-based regulation combined with safer platform design. That could include genuinely age-appropriate experiences, reduced addictive mechanics, and clearer accountability from platforms.” 

She concluded her thoughts: “Ultimately, protecting young audiences online requires proportionate, data-led solutions to balance safety with how young people actually use digital media. Blanket bans may sound good, but they may prove difficult to enforce and leave children just as vulnerable.” 

Hanney mirrors this sentiment. “Banning young people doesn't make the behaviour disappear, it drives it underground. Better education and stronger platform rules will always beat legislation that breeds secrecy,” he commented. 

Holding Big Tech to account 

One thing seems clear: more regulation is needed. Ironically, it has so far been considered easier to impose restrictions on millions of children than to hold a handful of Big Tech companies accountable and demand better for all users.

Could the landmark ruling against Meta and YouTube in the social media addiction trial propel us into a new era? The ruling has caused a major splash, with some even calling it social media’s “big tobacco” moment. Following the ruling, UK Prime Minister Keir Starmer stated: “We’ll go through the consultation, but I think I’ll be absolutely clear, things will not stay as they are. This is going to change.” 

Having reached this point, introducing certain restrictive measures for children should be largely positive. But looking at the bigger picture, more pressure needs to be put on social media giants to create less harmful environments for all of us.