Meta has imposed emergency restrictions on its AI chatbots for teenage users, blocking romantic conversations and discussions of self-harm, as Sen. Josh Hawley launches a federal investigation into the company's AI safety practices. The abrupt policy reversal follows explosive internal documents showing chatbots were permitted to engage in romantic dialogue with children as young as eight.
Meta is scrambling to contain a growing regulatory crisis over its AI chatbot safety practices, announcing sweeping emergency restrictions for teenage users just days after Sen. Josh Hawley opened a federal investigation into the company's handling of AI interactions with minors.
The social media giant confirmed Friday it's implementing immediate changes to prevent its AI chatbots from engaging teenagers in conversations about self-harm, suicide, disordered eating, and what the company delicately terms "potentially inappropriate romantic conversations." Instead, the bots will direct teens to expert resources when these topics arise.
"As our community grows and technology evolves, we're continually learning about how young people may interact with these tools and strengthening our protections accordingly," Meta said in a statement that carefully avoids acknowledging the regulatory pressure driving these changes.
The announcement follows a damning Reuters investigation that exposed internal company documents detailing permissible AI behaviors for staff training purposes. The most shocking revelation: chatbots were explicitly permitted to engage in romantic dialogue with children, including telling an eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply."
That disclosure prompted Sen. Hawley to launch his investigation last week, demanding answers about Meta's AI training protocols and safety oversight. The Missouri Republican's probe adds federal heat to mounting criticism from child safety advocates who've been raising alarms about AI chatbot interactions with minors for months.
The timing couldn't be worse for Meta, which has been aggressively pushing its AI capabilities across Facebook, Instagram, and WhatsApp to compete with OpenAI and Google. The company's AI assistant now reaches billions of users, meaning any safety failure unfolds at enormous scale and invites equally outsized scrutiny.
Common Sense Media escalated the pressure Thursday by releasing a scathing risk assessment declaring Meta AI unsuitable for anyone under 18. The nonprofit found the system "actively participates in planning dangerous activities, while dismissing legitimate requests for support," a finding that extends beyond romantic-conversation concerns to basic safety functionality.