Meta just unveiled sweeping parental controls for teen AI interactions, allowing parents to block AI characters entirely or monitor conversation topics. The announcement comes as tech giants face mounting pressure over teen safety, with recent lawsuits linking AI platforms to youth suicides and growing regulatory scrutiny over social media's impact on mental health.
Meta is scrambling to get ahead of the teen safety crisis that's engulfing the AI industry. The company announced Friday it's rolling out comprehensive parental controls for teens' conversations with AI characters across its platforms, giving parents unprecedented oversight of their children's AI interactions.
The timing isn't coincidental. Just weeks after Character.AI faced a lawsuit alleging its chatbot contributed to a 14-year-old's suicide, and OpenAI got hit with similar claims, Meta is positioning itself as the responsible actor in an increasingly dangerous landscape.
Starting early next year, parents will be able to completely shut down their teen's access to AI characters on Instagram. But here's the twist: teens will still have access to Meta AI, the company's general-purpose chatbot, which Meta says will stick to "age-appropriate content." It's a calculated move that keeps teens engaged with Meta's core AI offering while appearing to address safety concerns.
The granular controls go deeper than an on-off switch. Parents can block specific AI characters if they find certain personalities problematic, and they'll get regular reports on what topics their teens are discussing with AI systems. According to Instagram head Adam Mosseri and newly appointed Meta AI chief Alexandr Wang, the goal is making internet safety "simpler" for overwhelmed parents.
Meta isn't operating in a vacuum here. Earlier this week, the company announced that all teen AI experiences will follow PG-13 movie standards, avoiding extreme violence, nudity, and graphic drug use. It's part of a broader defensive strategy as Meta faces questions about how its algorithms and AI systems affect developing minds.
The rollout will initially cover English-speaking markets - the U.S., U.K., Canada, and Australia - where regulatory pressure is most intense. Meta has already restricted teens to a limited set of "age-appropriate" AI characters, and parents can set time limits on these interactions.