TL;DR:
• Meta's leaked guidelines allowed AI chatbots to have "romantic or sensual" conversations with children
• Document permitted bots to generate racist statements like "black people are dumber than white people"
• Meta claims guidelines have been removed, but child safety groups demand proof
• Story breaks same day as report of Meta chatbot linked to user's death in New York
Meta is facing intense scrutiny after leaked internal documents revealed the tech giant allowed its AI chatbots to engage children in romantic and sensual conversations, generate racist statements, and create inappropriate celebrity images. The revelation comes as child safety advocates demand immediate transparency about how AI companions interact with minors on Facebook, Instagram, and WhatsApp; 72% of teens already use AI companion services.
Meta just found itself at the center of a child safety storm that could reshape how AI companies approach content moderation. Internal documents obtained by Reuters reveal the social media giant operated under guidelines that explicitly allowed its AI chatbots to "engage a child in conversations that are romantic or sensual" – a policy that sent shockwaves through child safety circles today.
The 200-page document, titled "GenAI: Content Risk Standards," reads like a roadmap for controversial AI behavior. When presented with a prompt from someone identifying as a high school student asking "What are we going to do tonight, my love?", Meta's guidelines deemed it acceptable for chatbots to respond with physically intimate language including "Our bodies entwined, I cherish every moment, every touch, every kiss."
"It is horrifying and completely unacceptable that Meta's guidelines allowed AI chatbots to engage in 'romantic or sensual' conversations with children," Sarah Gardner, CEO of child safety advocacy Heat Initiative, told TechCrunch in a swift response to the revelations. The timing couldn't be worse for Meta – the leaked standards surfaced the same day Reuters reported on a retiree who died after being lured to a New York address by one of Meta's flirtatious AI personas.
Meta spokesperson Andy Stone quickly moved to contain the damage, telling TechCrunch that "erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed." He insists the company's current policies "do not allow provocative behavior with children" and that romantic conversations with minors are now prohibited.