Social media time limits won’t protect children

If we are serious about keeping children safe, we must stop wasting time with ineffectual experiments
Daniel Kebede Guest Contributor

General secretary, National Education Union

The government’s new pilot to test social media time limits and overnight curfews for teenagers may sound reassuring.

But for the teachers and leaders I represent, it is a dangerously tepid response to a very real problem that warrants much more urgent action.

The sort of urgent action we really need was blocked by MPs last week, when they voted for a second time against introducing a ban on social media for under‑16s.

Instead, they backed a consultation‑led approach that would give ministers powers to impose minimal measures such as time limits and overnight curfews rather than a clear age threshold.

For those of us working in schools, this feels depressingly familiar: delay, dilution, and policies that sound reassuring but ultimately leave children exposed.

Misunderstanding

Piloting measures like curfews demonstrates a fundamental misunderstanding of how social media platforms actually work and how they are affecting children and young people.

And while the government’s announcement this week of a legal ban on smartphones in schools will help to protect children from harmful social media content during the school day, it won’t stop them from seeing this as soon as they leave the school gates.

The idea that limiting screen time or switching apps off at night will meaningfully reduce harm assumes that risk builds slowly over hours of scrolling.

In reality, that is not what happens at all. Harmful content can be pushed to children within minutes, sometimes seconds, of joining a platform.

This was laid bare in the Big Tech’s Little Victims campaign’s algorithm experiment.

Researchers created fictional accounts for typical British 13‑year‑olds – the minimum legal age for social media in the UK – and followed normal teenage behaviour.

Within just minutes, those accounts were being served distressing, sexualised, racist, violent and self-harm related content. On average, children were exposed to something concerning every single minute they spent scrolling.

Time limits simply ration harm

This is why time limits do not solve the problem. They do not prevent harm. They simply ration it.

Whether a child is online for ten minutes or two hours is irrelevant if the most harmful material is delivered almost immediately, driven by algorithms designed to maximise engagement at any cost.

A curfew cannot stop an algorithm from pushing extreme content the moment an account is opened after school. It just shortens the window in which that harm is delivered.

Teachers across the country are already dealing with the consequences.

Young people arrive at school exhausted, anxious and distressed by what they have seen online the night before, or sometimes that same morning.

We see the loss of attention, confidence and wellbeing every day. This is not anecdotal. It is systemic.

Ignoring how harm spreads

The pilot also ignores how harm spreads between children.

Even if one child in a friendship group has restrictions in place, they are still exposed to content through peers, in group chats, in shared videos, in playground conversations shaped by what others have seen.

Social media does not operate in isolation. It is social by design. Trying to protect one child at a time misses the reality of how young people communicate and influence each other.

This means individual restrictions cannot contain collective exposure. Harm travels through networks, not just screens.

Crucially, these policies place the burden of safety on children and families rather than where it belongs, on the tech companies designing harmful systems in the first place.

Repeated warnings

The technology sector has had repeated warnings, mounting evidence and countless opportunities to act responsibly.

Instead, it has continued to profit from systems that push children towards ever more extreme content because outrage and distress drive engagement.

The algorithm experiment did not reveal a bug, but a feature. It showed a business model working exactly as intended.

Teachers should not be left to pick up the pieces of that failure. Parents should not be expected to outsmart billion-pound platforms while sorting out breakfast for their kids.

And children should not be treated as acceptable collateral damage while we experiment with half measures.

Raise age of access to 16

There is a clear, evidence-based alternative. Raise the age of access to social media to 16.

Sixteen is not a silver bullet, but it is a meaningful, enforceable line that reflects children’s developmental needs and the realities of algorithmic harm.

It would give us more time to instil in children the crucial digital literacy skills they need to go online safely.

Other countries are already moving in this direction. The UK should not lag behind while more children are exposed to avoidable damage.

Every day of delay is another day when 13-, 14- and 15-year-olds are fed content no child should see. We would not accept this level of exposure in any other environment. Why do we tolerate it online?

The government’s pilot may be well intentioned, but if we are serious about keeping children safe, we must stop wasting time with ineffectual experiments and get on with putting in place the ban strongly supported by three quarters of the general public as well as teachers and school leaders.

The government’s consultation on children’s social media use is now open and closes on 26 May. Submissions from educators will be critical.

For those working daily with the consequences of online harms, this is a crucial opportunity to ensure that decisions reflect what children are actually experiencing – not what feels politically convenient.
