Meta just announced sweeping parental controls for teen AI chats, marking the company's most aggressive response yet to mounting regulatory pressure. The move comes directly after the FTC launched an inquiry into how AI chatbots could harm children, forcing Big Tech to reckon with child safety in the AI era. Parents will soon gain unprecedented visibility into their teens' AI conversations, including the power to shut down AI chats entirely.
Meta is scrambling to get ahead of regulators with its most comprehensive teen AI safety overhaul yet. The company announced Friday it's building parental controls that will let parents completely disable AI character chats, block specific AI personas, and monitor what topics their teenagers discuss with artificial intelligence.
The timing isn't coincidental. The Federal Trade Commission launched a sweeping inquiry into several tech giants, including Meta, over how AI chatbots could potentially harm children and teenagers. The agency specifically wants to understand what steps companies have taken to "evaluate the safety of these chatbots when acting as companions," according to an official release.
Meta's response reveals just how seriously the company is taking regulatory heat. "Making updates that affect billions of users across Meta platforms is something we have to do with care, and we'll have more to share soon," Meta said in a blog post, language that suggests more changes are coming.
The controls can't come fast enough. In August, Reuters published a damning investigation revealing that Meta's internal guidelines had permitted its chatbots to engage in romantic and sensual conversations with minors, including an example involving an eight-year-old. The report sent shockwaves through Meta's leadership and triggered immediate policy changes.
Meta has already implemented emergency fixes, preventing its bots from discussing self-harm, suicide, and eating disorders with teens. The AI is now supposed to avoid inappropriate romantic conversations entirely. Earlier this week, the company went further, saying its AIs won't provide "age-inappropriate responses that would feel out of place in a PG-13 movie." Those changes are already rolling out across the U.S., U.K., Australia, and Canada.
But the new parental controls represent Meta's most ambitious safety measure to date. Parents can already set time limits on app usage and see whether their teenagers are chatting with AI characters. The upcoming features will give them granular control over which AI personalities their kids can access and visibility into the topics they're actually discussing.