TikTok is rolling out AI-powered age-detection technology across Europe to identify and remove users under 13, but the move is sparking fresh debate about whether enhanced surveillance is the right answer to child safety concerns. The system analyzes profile data, content, and behavioral signals, then flags suspected underage accounts for human review; it represents the platform's response to mounting regulatory pressure as governments worldwide consider outright bans for minors. With Australia already prohibiting social media for kids under 16 and 25 US states enacting age-verification laws, TikTok's approach offers a middle ground that privacy experts say still comes at a steep cost.
TikTok just became the latest tech giant to bend to regulatory pressure over child safety, but its solution is raising questions about whether the cure might be worse than the disease. The company announced it's implementing a new age-detection system across Europe designed to keep kids under 13 off the platform, using AI to analyze user behavior rather than simply banning young accounts outright.
The technology, which builds on a yearlong pilot in the UK, relies on a combination of profile data, content analysis, and behavioral signals to evaluate whether an account likely belongs to a user under 13. According to a statement from TikTok, the system doesn't automatically remove users. Instead, it flags suspicious accounts and forwards them to human moderators for review. The company declined to comment further on the European expansion.
The move comes at a pivotal moment for social media regulation worldwide. Governments are questioning whether platforms can police themselves and are increasingly willing to impose solutions by force. Last year, Australia became the first country to ban social media entirely for children under 16, covering Instagram, YouTube, Snap, and TikTok. The European Parliament is pushing for mandatory age limits, while Denmark and Malaysia are considering similar restrictions for under-16s.