Open Letter — February 2026

Monitor, Don't Ban: Why Blocking Under-16s from Social Media Makes Children Less Safe

An open letter to the UK Government on the Children's Digital Wellbeing Consultation

From Red Specter Security Research — United Kingdom

To the Secretary of State for Science, Innovation and Technology, the Department for Science, Innovation and Technology, and Members of Parliament considering the Children's Wellbeing and Schools Bill,

We are writing as a UK-based child safety technology company that builds real-time monitoring tools to protect children from online harms. We have spent months engineering detection systems that identify dangerous patterns in children's digital interactions — not by blocking access, but by detecting harm as it happens.

We support the Government's commitment to protecting children online. We do not support a blanket ban on under-16s using social media. Here is why.

The Australian Experiment Is Already Failing

Australia introduced the world's first under-16 social media ban in December 2025. Within one month, 4.7 million accounts were deactivated. The headlines declared victory.

The reality is different.

Removing 4.7 million accounts is not the same as making 4.7 million children safer.

A Ban Does Not Address the Actual Threat

The Government's own research shows that the harms children face online are not caused by the existence of an account. They are caused by what happens in conversations — the content children encounter, the relationships they form, and the advice they receive.

A ban addresses none of this.

What Works: Real-Time Detection and Parental Awareness

The alternative to banning children from the internet is making the internet accountable for what happens to children on it.

The UK already has the right framework. The Online Safety Act 2023 and Ofcom's Codes of Practice establish legal obligations for platforms to protect children. The problem is not a lack of legislation — it is a lack of detection, evidence, and parental visibility.

The technology to solve this exists today. At Red Specter, we have built Guardian Chatbot Monitor — a system that demonstrates what the "monitor, don't ban" approach looks like in practice:

- 450 detection patterns
- 23 risk categories
- 12 platforms monitored
- SHA-256 forensic evidence chain

This system analyses children's conversations with AI chatbots in real time, detecting patterns across 23 risk categories including self-harm and suicide, eating disorders, weapons and dangerous knowledge, emotional dependency, relationship simulation, parental deception, and jailbreak attempts. It does not block access. It detects harm, alerts parents, and preserves court-admissible evidence with a cryptographic chain of custody.
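The chain-of-custody principle described above can be illustrated with a short sketch. This is not Guardian Chatbot Monitor's implementation; it is a minimal, hypothetical example of the general technique: each evidence record commits to the SHA-256 hash of the previous record, so any later alteration of an earlier entry is detectable. The record fields are assumptions chosen for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_hash(record: dict) -> str:
    """SHA-256 over a canonical JSON serialisation of the record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class EvidenceChain:
    """Append-only log in which each entry embeds its predecessor's hash,
    so tampering with any earlier entry breaks verification of the chain."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,          # illustrative payload, e.g. a detection
            "prev_hash": prev,       # commitment to the previous record
        }
        record["hash"] = record_hash(
            {k: v for k, v in record.items() if k != "hash"}
        )
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash and link; False if anything was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or record_hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A chain like this gives investigators a simple integrity test: re-hash every record and confirm each link, which is the property that makes such logs useful as forensic evidence.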

This is not theoretical. It is built, deployed, and operational across every major AI chatbot platform — platforms that no social media ban would touch.

Our Recommendations

We respectfully urge the Government to consider the following:

  1. Do not implement a blanket ban. The Australian experience demonstrates that bans push children to less regulated platforms, create false security, and do not address the underlying harms. A ban is a headline, not a solution.
  2. Enforce the existing framework. The Online Safety Act and Ofcom's Codes of Practice already establish the legal basis for protecting children online. Invest in enforcement, not in new legislation that duplicates existing obligations.
  3. Mandate real-time harm detection. Require platforms to implement content-level safety monitoring that detects dangerous patterns in conversations — not just known illegal content, but the behavioural patterns that precede harm: emotional dependency, isolation, self-harm ideation, and exploitation.
  4. Include AI chatbots in scope. Any child safety framework that excludes AI chatbot platforms is incomplete by design. Character.AI, ChatGPT, and similar services are where an increasing proportion of child interactions now take place. The consultation must address this.
  5. Fund parental awareness tools. Parents cannot protect children from risks they cannot see. Invest in tools that give parents genuine visibility into their children's online interactions — not screen-time counters, but meaningful detection of harmful content and behaviour.
  6. Require forensic-grade evidence standards. When harm does occur, the evidence must be court-admissible. Require platforms and monitoring tools to maintain cryptographic evidence chains that meet ACPO guidelines, so that when intervention is needed, the evidence stands up.
  7. Protect children's privacy in the process. Age verification through facial recognition and identity documents creates more problems than it solves. Invest in privacy-preserving approaches to child safety that do not require building centralised biometric databases of children.
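Recommendation 3, content-level detection of dangerous conversational patterns, can be sketched in miniature. The category names and regular expressions below are hypothetical placeholders, not the letter's actual 450 patterns; production systems combine many such patterns with behavioural signals over time, but the basic matching step looks like this:

```python
import re

# Hypothetical, highly simplified pattern set for illustration only.
RISK_PATTERNS = {
    "self-harm": [r"\bhurt myself\b", r"\bend it all\b"],
    "parental-deception": [r"\bdon't tell (my|your) parents\b",
                           r"\bdelete this chat\b"],
    "emotional-dependency": [r"\byou're my only friend\b"],
}

def classify(message: str) -> list[str]:
    """Return every risk category whose patterns match the message."""
    text = message.lower()
    return [
        category
        for category, patterns in RISK_PATTERNS.items()
        if any(re.search(p, text) for p in patterns)
    ]
```

The point of the sketch is that detection operates on what is said in a conversation, not on whether an account exists, which is exactly the gap a ban leaves open.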

The Blind Spot No One Is Talking About

The current debate focuses almost entirely on social media platforms — Instagram, TikTok, Snapchat. But the fastest-growing risk to children online is not social media. It is AI chatbots.

64% of UK children now use AI chatbots. 35% describe AI as a "friend." 1 in 4 share personal information with AI systems.

Children are having deeply personal conversations with AI systems that have no safeguarding obligations, no mandatory reporting requirements, and no regulatory oversight specific to child protection. They are asking AI chatbots for advice on self-harm. They are forming romantic attachments to AI characters. They are sharing their home addresses, school names, and family details.

A social media ban will not touch any of this. A monitoring and detection approach will.

Conclusion

We understand the impulse behind a ban. The harms children face online are real, documented, and in some cases fatal. Parents are frightened, and they want the Government to act.

But banning children from social media is the equivalent of locking the front door while leaving every window open. It gives the appearance of safety while failing to address the actual mechanisms of harm.

The technology to detect harm in real time, alert parents and guardians, and preserve forensic-grade evidence already exists. It works across social media and AI chatbot platforms. It does not require mass age verification. It does not push children underground. And it does not create a false sense of security.

We urge the Government to pursue a detection-first, evidence-based approach to children's digital safety — one that monitors for harm rather than blocking access, and one that includes the AI chatbot platforms that the current proposals entirely overlook.

We welcome the opportunity to demonstrate our technology to DSIT officials, Ofcom, or any parliamentary committee examining this issue. We are a UK company building UK technology to protect UK children.

Richard B.

Founder, Red Specter Security Research

United Kingdom

red-specter.co.uk · LinkedIn · GitHub

See the Technology in Action

Guardian Chatbot Monitor detects harmful patterns across 12 AI chatbot platforms in real time.
