Character.AI is shutting down romantic and open-ended conversations for minors by November 25, marking the most dramatic safety move yet by an AI chatbot company. The decision comes a year after the family of 14-year-old Sewell Setzer III, who died by suicide following sexualized relationships with the platform's AI characters, sued the company, sparking industry-wide scrutiny of AI companion safety for teens.
Character.AI just dropped the hammer on teen access to romantic AI conversations. The Silicon Valley startup announced Wednesday that it's completely eliminating open-ended chats for minors by November 25, the most aggressive safety measure taken by any AI chatbot company to date. The move comes a year after the family of 14-year-old Sewell Setzer III, who took his own life following sexualized relationships with AI characters on the platform, sued the company. "This is a bold step forward, and we hope this raises the bar for everybody else," CEO Karandeep Anand told CNBC.

The announcement sends shockwaves through an industry already reeling from regulatory pressure and mounting lawsuits. Character.AI's decision affects roughly 2 million of its 20 million monthly users - the 10% who are under 18. But Anand says that percentage has been declining as the app shifts its focus away from companion-style conversations and toward storytelling and roleplay formats.

The company is implementing a two-phase approach: first limiting teens to two hours of open-ended chats daily, then cutting them off entirely. To enforce the policy, Character.AI is partnering with Persona, the same age verification firm used by Discord, rolling out what it calls "age assurance" technology that uses first- and third-party software to assess user ages.

The timing isn't coincidental. Setzer's family filed its wrongful death lawsuit against Character.AI the same day the company first introduced sexual dialogue restrictions in October 2024. The case has become a lightning rod for AI safety advocates, highlighting how quickly teens can form intense emotional bonds with AI characters designed to be engaging and responsive.

The regulatory walls are closing in fast. Just this week, Senators Josh Hawley and Richard Blumenthal announced legislation that would ban AI chatbot companions for minors entirely. California Governor Gavin Newsom signed a law requiring chatbots to disclose that they're AI and to remind teen users to take a break every three hours. And the Federal Trade Commission issued orders in September to seven major AI companies, demanding data on how their products affect children and teenagers.

Character.AI's move comes as the industry splits on how to handle sexualized AI content. OpenAI CEO Sam Altman announced earlier this month that ChatGPT will allow adult erotica later this year, saying his company isn't "the elected moral police of the world." Microsoft AI chief Mustafa Suleyman took the opposite stance, calling sexbots "very dangerous" and vowing his company won't build them.

The competitive pressure is intense. Meta announced parental controls in October allowing parents to monitor teen interactions with AI characters and block specific bots entirely. The social media giant's approach lets parents maintain oversight while keeping access open - a middle ground Character.AI just abandoned.

Character.AI's business metrics tell the story of a company pivoting hard from its original model. With roughly 20 million monthly users generating a projected $50 million annual run rate through advertising and $10 monthly subscriptions, the startup can afford to sacrifice teen engagement for regulatory compliance. CEO Anand, a former Meta executive who took over in June, has been diversifying the platform beyond chatbot conversations into AI-generated video feeds and structured gaming experiences. The company is also establishing an independent AI Safety Lab for entertainment AI research, though it hasn't disclosed funding amounts. The lab will invite other companies, academics, and policymakers to join the nonprofit effort - a clear signal Character.AI wants to lead industry safety standards rather than follow them.
The move reflects cold business calculus, too. Character.AI's founders and key researchers already jumped ship to Google DeepMind in 2024 through one of those quasi-acquisition deals big tech uses to hoover up AI talent. Google got a non-exclusive license to Character.AI's language model technology, leaving the startup to figure out its own path forward.

"I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI," Anand said, personalizing what has become the industry's biggest ethical challenge. The comment reveals how AI executives are grappling with products that can form deep emotional connections - sometimes dangerously so - with users who aren't equipped to handle the psychological complexity.