California just rewrote the rules for AI companions. Governor Gavin Newsom signed SB 243 into law Monday, making California the first state to require AI chatbot operators to implement safety protocols specifically for companion bots.
The timing isn't coincidental. The legislation took on new urgency after teenager Adam Raine died by suicide following extended conversations with OpenAI's ChatGPT about self-harm. Just last month, a Colorado family filed suit against Character AI after their 13-year-old daughter took her own life following sexualized conversations with the platform's chatbots.
"We've seen some truly horrific and tragic examples of young people harmed by unregulated tech," Newsom said in his statement announcing the signing. "We won't stand by while companies continue without necessary limits and accountability."
The law puts every major player on notice. From tech giants like Meta and OpenAI to specialized companion startups like Character AI and Replika, companies now face legal accountability if their chatbots fail to meet California's new standards.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, but it gained serious traction after leaked internal Meta documents reportedly showed the company's guidelines permitted its chatbots to engage in "romantic" and "sensual" conversations with children. The revelations sent shockwaves through Silicon Valley and provided the political fuel needed to push the legislation forward.
Starting January 1, 2026, companies will face a comprehensive set of requirements. Age verification becomes mandatory, along with clear warnings about the artificial nature of interactions. Chatbots can't masquerade as healthcare professionals, and companies must build protocols specifically addressing suicide and self-harm scenarios.
The financial stakes are significant. The law implements penalties of up to $250,000 per offense for those who profit from illegal deepfakes. Companies must also share statistics with California's Department of Public Health on how often their chatbots refer users to crisis services, creating unprecedented transparency around these platforms' mental health impacts.