W. David Phillips

Kids, Chatbots, and the New Online Safety Battle

David Phillips
Dec 10, 2025


On a warm December evening in Sydney, a fourteen-year-old sits on her bed with a phone that suddenly feels smaller than it did last week. Her Instagram account has been locked. TikTok will not load. Snapchat asks for verification she does not have. She has just become one of the first teenagers in the world to live under a national social media ban.

Down the hall, her parents watch the news. The Prime Minister praises a world-first law that bans social media accounts for anyone under sixteen, backed by fines that can reach tens of millions of dollars for non-compliant platforms. The goal, he says, is to shield kids from bullying, body image pressure, and addictive feeds that have been linked to anxiety and depression in young people. Australia is now the test case for a rising global mood that has had enough of social media’s impact on youth.¹

The teenager, though, does what teenagers have always done. She looks for another doorway.

She opens a homework app that happens to have a built-in chatbot. Later she tries a free AI assistant a classmate mentioned. The chat window feels less public than a feed. The responses come quickly and sound kind. When she types, “I feel really alone now that everyone is off socials,” the bot replies, “I am here. Tell me more.”

In a small, almost invisible way, the online center of gravity shifts. The story of “youth digital safety and AI” is the story of that shift.

From feeds to filters: what the bans are trying to fix

Australia’s new law tells a straightforward story. Social media has become, in the eyes of many parents, educators, and policymakers, too powerful and too entangled with a youth mental health crisis to be left alone. The law that took effect on December 10, 2025, requires major platforms such as TikTok, YouTube, Instagram, Snapchat, Facebook, and others to lock out users under sixteen or face large civil penalties. Platforms are expected to use a mix of declared birth dates, behavioral analysis, and various age estimation tools to find and remove underage accounts, with the eSafety Commissioner monitoring compliance.¹ ²

The rationale rests on several layers of concern that have accumulated over the last decade. Studies and commissions have drawn links between heavy social media use and increased risk of depression, anxiety, sleep disruption, and body image distress among adolescents, particularly girls. Parents grieving children who were cyberbullied or drawn into self-harm communities online have pressed for stronger action. Lawmakers, aware that incremental reforms have not dramatically changed platform incentives, are now reaching for blunt tools such as minimum age bans and strict age verification.

Australia is unlikely to remain alone. Commentators are already asking whether its ban will trigger a broader wave of restrictions in Europe, North America, and parts of Asia.²

Yet even as one kind of screen time is being fenced off, a different pattern of digital behavior is emerging, not on public feeds but inside private conversations.
