An open letter to the UK Government on the Children's Digital Wellbeing Consultation
To the Secretary of State for Science, Innovation and Technology, the Department for Science, Innovation and Technology, and Members of Parliament considering the Children's Wellbeing and Schools Bill,
We are writing as a UK-based child safety technology company that builds real-time monitoring tools to protect children from online harms. We have spent months engineering detection systems that identify dangerous patterns in children's digital interactions — not by blocking access, but by detecting harm as it happens.
We support the Government's commitment to protecting children online. We do not support a blanket ban on under-16s using social media. Here is why.
Australia introduced the world's first under-16 social media ban in December 2025. Within one month, 4.7 million accounts were deactivated. The headlines declared victory.
The reality is different.
Removing 4.7 million accounts is not the same as making 4.7 million children safer.
The Government's own research shows that the harms children face online are not caused by the existence of an account. They are caused by what happens in conversations — the content children encounter, the relationships they form, and the advice they receive.
A ban addresses none of this.
The alternative to banning children from the internet is making the internet accountable for what happens to children on it.
The UK already has the right framework. The Online Safety Act 2023 and Ofcom's Codes of Practice establish legal obligations for platforms to protect children. The problem is not a lack of legislation — it is a lack of detection, evidence, and parental visibility.
The technology to solve this exists today. At Red Specter, we have built Guardian Chatbot Monitor — a system that demonstrates what the "monitor, don't ban" approach looks like in practice:
This system analyses children's conversations with AI chatbots in real time, detecting patterns across 23 risk categories, including self-harm and suicide, eating disorders, weapons and dangerous knowledge, emotional dependency, relationship simulation, parental deception, and jailbreak attempts. It does not block access. It detects harm, alerts parents, and preserves court-admissible evidence with a cryptographic chain of custody.
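For readers wanting a concrete sense of the approach, the following is a minimal, hypothetical sketch — not Guardian Chatbot Monitor's actual implementation — of how keyword-based risk flagging can be paired with a tamper-evident, hash-chained evidence log. The risk categories and patterns shown are illustrative placeholders.

```python
import hashlib
import json

# Illustrative subset of risk categories; a production system would use
# far more sophisticated detection than keyword matching.
RISK_PATTERNS = {
    "self_harm": ["hurt myself", "end my life"],
    "personal_info": ["my address is", "my school is"],
}

def flag_risks(message: str) -> list[str]:
    """Return the risk categories whose patterns appear in the message."""
    text = message.lower()
    return [cat for cat, pats in RISK_PATTERNS.items()
            if any(p in text for p in pats)]

class EvidenceLog:
    """Append-only log in which each record hashes the previous record,
    so any later alteration breaks the chain and is detectable."""

    def __init__(self):
        self.records = []

    def append(self, message: str, categories: list[str]) -> dict:
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"message": message, "categories": categories,
                "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        record = {**body, "hash": digest}
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; False if any record was tampered with."""
        prev = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("message", "categories", "prev_hash")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev_hash"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

The hash chain is the key design choice for court-admissible evidence: because each record commits to the one before it, an intact chain demonstrates that no entry was edited, inserted, or deleted after the fact.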
This is not theoretical. It is built, deployed, and operational across every major AI chatbot platform — platforms that no social media ban would touch.
We respectfully urge the Government to consider the following:
The current debate focuses almost entirely on social media platforms — Instagram, TikTok, Snapchat. But the fastest-growing risk to children online is not social media. It is AI chatbots.
64% of UK children now use AI chatbots. 35% describe AI as a "friend." 1 in 4 share personal information with AI systems.
Children are having deeply personal conversations with AI systems that have no safeguarding obligations, no mandatory reporting requirements, and no regulatory oversight specific to child protection. They are asking AI chatbots for advice on self-harm. They are forming romantic attachments to AI characters. They are sharing their home addresses, school names, and family details.
A social media ban will not touch any of this. A monitoring and detection approach will.
We understand the impulse behind a ban. The harms children face online are real, documented, and in some cases fatal. Parents are frightened, and they want the Government to act.
But banning children from social media is the equivalent of locking the front door while leaving every window open. It gives the appearance of safety while failing to address the actual mechanisms of harm.
The technology to detect harm in real time, alert parents and guardians, and preserve forensic-grade evidence already exists. It works across social media and AI chatbot platforms. It does not require mass age verification. It does not push children underground. And it does not create a false sense of security.
We urge the Government to pursue a detection-first, evidence-based approach to children's digital safety — one that monitors for harm rather than blocking access, and one that includes the AI chatbot platforms that the current proposals entirely overlook.
We welcome the opportunity to demonstrate our technology to DSIT officials, Ofcom, or any parliamentary committee examining this issue. We are a UK company building UK technology to protect UK children.
Richard B.
Founder, Red Specter Security Research
United Kingdom
Guardian Chatbot Monitor detects harmful patterns across 12 AI chatbot platforms in real time.