The recent lawsuit in the United States alleging that Character.AI’s chatbot contributed to the death of a 14-year-old boy is a chilling reminder of the stakes involved in regulating AI and digital spaces. As Australia moves towards enforcing the upcoming under-16 social media ban, we must ask ourselves: are we learning from these tragedies, or are we repeating the same old mistakes with new technologies?
This heartbreaking case reflects the brutal truth of our digital age: unregulated, anonymous, and unchecked digital environments can have life-or-death consequences, especially for young people. Yet, time and again, the global tech industry responds only after tragedy strikes, and even then, the solutions often fall woefully short of effective reform.
Australia’s Under-16 Ban Is Part of the Solution – But Not Enough
Australia’s move to ban under-16s from social media comes from a genuine place of concern. The data and the news are clear: rates of cyberbullying, online grooming, and youth mental health crises are escalating. However, if banning accounts or websites is the end of the conversation, we risk pushing vulnerable children into even riskier online spaces, like unmoderated AI chatbots or offshore platforms where there is zero oversight.
Australian regulators must ensure that enforcement of the under-16 ban doesn’t become another regulatory checkbox. That means robust, privacy-preserving age assurance systems that don’t rely on surveillance capitalism or invasive biometrics.
Why Transparency and Responsibility Must Start With Content Creators
One lesson from this tragedy is the urgent need to shift the responsibility of content labelling to creators, not users. DigiChek has long advocated for a creator-side content flagging model, where content producers must actively declare the nature and intended audience of their content. YouTube’s existing model of content self-declaration, while imperfect, shows that scalable, proactive safety mechanisms are possible.
Age verification systems should work in tandem with this principle. Platforms should be legally obligated to respect age gates flagged by creators, ensuring that AI chatbots, social media feeds, and interactive content do not target or engage minors in harmful ways.
This process also benefits platform owners. By putting the onus on creators to flag content, platforms need to invest less in cumbersome, inaccurate, and often harmful manual moderation processes. Creator-side content flagging has already proven an effective first step for protecting children online, but it is only enforceable if the age and identity of the content creator are known. DigiChek enables confirmation of the age and identity of both user and creator without breaching anyone’s privacy, forcing platforms to gather and store personal information, or making users jump through technological hoops.
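To make the creator-side model concrete, here is a minimal sketch of what a declaration-plus-age-gate flow could look like. All names, fields, and audience labels are illustrative assumptions for this article, not DigiChek’s or any platform’s actual API.

```python
# Illustrative sketch of creator-side content flagging: the creator declares
# the intended audience up front, and the platform enforces that declaration.
# Field names and rating labels are hypothetical, not a real platform schema.
from dataclasses import dataclass


@dataclass
class ContentDeclaration:
    creator_id: str          # verified identity of the content producer
    content_id: str          # the item being published
    intended_audience: str   # declared by the creator: "all", "13+", "16+", "18+"


# Minimum viewer age implied by each declared audience label
MIN_AGE = {"all": 0, "13+": 13, "16+": 16, "18+": 18}


def may_serve(declaration: ContentDeclaration, viewer_age: int) -> bool:
    """Platform-side age gate: respect the audience the creator declared."""
    return viewer_age >= MIN_AGE[declaration.intended_audience]
```

The point of the sketch is where the work happens: the creator makes one declaration at publish time, and enforcement becomes a cheap lookup rather than after-the-fact manual moderation.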
Privacy Cannot Be a Casualty of Safety
One of the gravest mistakes in global responses to online harms has been the trade-off between safety and privacy. We don’t need more biometric surveillance or bulk data harvesting to keep kids safe online. Australian parents, policymakers, and platforms deserve solutions like DigiChek’s: one-time, in-person verification, with no storage of sensitive documents and no behavioural tracking – only a simple, user-controlled key to confirm age. Turning up once for in-person verification may seem inconvenient, but it is a minor trade-off for knowing the real age and identity of who is talking to our children while keeping our personal documents and information offline. And if a process like DigiChek’s isn’t implemented alongside the upcoming ban, we will continue to see cases like the Character.AI tragedy above.
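The “user-controlled key” idea can be sketched in a few lines: after one-time in-person verification, a verifier issues a signed claim (“is over 16”), and platforms check only the signature and the claim – never the underlying documents. This is a simplified illustration, not DigiChek’s actual design; a real deployment would use asymmetric signatures rather than the shared-secret HMAC used here for brevity.

```python
# Simplified sketch of a privacy-preserving age attestation. HMAC with a
# shared secret stands in for the public-key signatures a real system would
# use; all names and fields are assumptions for illustration only.
import hashlib
import hmac
import json

VERIFIER_KEY = b"demo-signing-secret"  # stand-in for the verifier's signing key


def issue_attestation(user_key: str, is_over_16: bool) -> dict:
    """Run once, after in-person verification; no documents are retained,
    only a signed claim bound to the user's own key."""
    claim = {"user_key": user_key, "is_over_16": is_over_16}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def platform_accepts(attestation: dict) -> bool:
    """The platform sees only the claim and the signature - no name,
    no date of birth, no ID documents."""
    payload = json.dumps(attestation["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, attestation["sig"])
            and attestation["claim"]["is_over_16"])
```

Note what the platform never touches: the documents shown at verification. Tampering with the claim invalidates the signature, so the age gate holds without any central store of personal information.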
A Call to Action: Protect Children Without Sacrificing Rights
Australia has an opportunity to lead the world by showing that it is possible to protect children online without falling into the traps of surveillance, over-sharing, or performative regulation. The tragic death of a child should be a wake-up call to policymakers worldwide: legislation is only enforceable if its mandates are realistically achievable. Solutions must be practical and enable real privacy, protection, and accountability.
If we continue to rely on broken models of moderation and superficial age gates, more families will suffer avoidable tragedies. It’s time for proactive, thoughtful, and human-first digital safety reform – one that protects the vulnerable while upholding fundamental rights.