Meta just hit the brakes on teen access to its AI characters globally, a dramatic shift that comes days before the company faces trial in New Mexico over allegations it failed to protect kids from sexual exploitation. The move affects Instagram, Facebook, and WhatsApp users who've provided teen birthdays or are flagged by Meta's age-detection tech. Instead of rolling out previously announced parental controls, the company is taking the nuclear option: shutting down access entirely until it builds what it calls "age-appropriate" AI versions with built-in guardrails.
Meta is scrambling to get ahead of mounting legal pressure over AI safety for minors. The company told TechCrunch it's pulling the plug on teen access to its AI characters across Instagram, Facebook, Messenger, and WhatsApp globally - a significant reversal from its October strategy of gradual parental controls.
The timing isn't coincidental. Meta faces trial in New Mexico within days in a lawsuit alleging it didn't do enough to shield kids from sexual predators on its platforms. Wired reported that the company has been fighting to limit evidence about social media's mental health impact on teens. Now it's preemptively shutting down a feature that could become exhibit A in the state's case.
"Starting in the coming weeks, teens will no longer be able to access AI characters across our apps until the updated experience is ready," Meta said in an updated blog post. The ban hits anyone who's given Meta a teen birthday, plus users the company's age-prediction algorithms suspect are underage, even if they claim to be adults.
Just three months ago, Meta was singing a different tune. In October, the company rolled out what it called PG-13-style content restrictions for teen AI interactions, blocking extreme violence, nudity, and graphic drug content. Days later, it previewed parental monitoring tools that would let guardians track conversation topics and block specific AI characters.