
Recently, the government appeared to make a significant announcement on children’s online safety. Ministers said they are committed to implementing social media “restrictions” for under-16s. But beyond that headline, there is still no clarity about what those restrictions would actually involve.

A lack of clarity

Rather than committing to a ban, the government says it is continuing to consult and consider its options – leaving parents, teachers and young people none the wiser about how children will be protected in practice.

That lack of clarity matters, because the half-measures now being floated – time limits and curfews – fundamentally misunderstand the nature of the risk.

Harm on social media is not something that builds gradually over hours of use. For many children, it is delivered immediately. When extreme or distressing content appears within minutes – sometimes seconds – rationing access does not reduce danger. It simply delays it.

If ministers are serious about keeping children safe, they must act decisively and raise the age of social media access to 16.

‘Big Tech’s Little Victims’

Earlier this year, the National Education Union ran the Big Tech’s Little Victims Algorithm Experiment, which examined what children experience when they first sign up to social media at the legal age of 13.

Researchers created four fictional profiles of British 13-year-olds across TikTok, Snapchat, YouTube and Instagram, and used them just as a child would.

What happened next was alarming. Within minutes – sometimes seconds – the accounts were served content featuring guns, graphic violence, sexualised material, self-harm and misogynistic narratives. On average, for every minute the profiles spent scrolling, they were shown one piece of concerning content. In some cases, the very first video was distressing.

A fatal flaw in the government’s approach

This evidence exposes the fatal flaw in the government’s approach. When harm is delivered almost instantly, time limits do not reduce risk – they simply ration it. A curfew does nothing if a child is shown extreme content the moment they log on.

These algorithmic systems are not broken. They are doing exactly what they were designed to do – maximise engagement at any cost.

Even worse, time-based restrictions ignore the social reality of how young people interact online. Teenagers do not experience social media in isolation. Harmful content spreads quickly via peer groups, group chats and re-posted videos.

Teachers see the consequences of this every day: rising misogyny in classrooms; increased anxiety, isolation and emotional distress; children arriving at school exhausted from late-night scrolling, distracted by content they cannot unsee. Parents are left to manage the fallout while platforms continue to deny responsibility.

What is the government waiting for?

The tech sector has had repeated warnings, mounting evidence and ample opportunity to change course. It has failed. Platforms know the ages of their users: in the Algorithm Experiment, the profiles were created with dates of birth showing the children had only just turned 13. Yet dangerous content was still pushed relentlessly. Platforms know the risks their algorithms create, but they continue to prioritise profit over protection.

That failure demands swift government intervention. So what is the government waiting for?

Raising the age of social media access to 16 would be a clear, proportionate step grounded in evidence and public concern. Polling shows strong support among parents and teachers alike.

Despite the familiar rhetoric about government overreach, this would not be unprecedented. The government has already taken decisive action to restrict access to pornography for young people, recognising that delaying access is one of the most effective ways to reduce harm across a lifetime.

In April, the government U-turned to support banning smartphones in schools, but this will not protect children from harmful content they see on social media at home.

At 13, children’s minds are still developing. Allowing unregulated, engagement-driven algorithms free access to them is indefensible. Every day of government inaction leaves more children exposed to avoidable harm.

Government consultation

The government’s consultation on children’s use of social media is open until 26 May, and it is seeking views from parents, young people, educators, health professionals and anyone concerned about children’s wellbeing. People should take part – because what happens next matters. Views can be submitted to the consultation online.

But consultations cannot substitute for leadership. Announcing that “restrictions” are coming, without saying what they are or when they will take effect, is not decisive action. And floating half-measures like time limits or curfews may create the impression of progress, while leaving children exposed to harm the moment they log on.

Time limits may sound reassuring. Vague promises may grab headlines. But they will not keep children safe. If the government truly wants to protect children online, it must stop hiding behind half-measures – and raise the age of social media access to 16.


By Daniel Kebede

