California just made deceptive AI chatbots illegal. Governor Gavin Newsom signed Senate Bill 243 into law today, creating the nation's first comprehensive regulations for companion AI chatbots and requiring their makers to clearly notify users when they're talking to artificial intelligence, not a human. The legislation targets a booming industry where realistic chatbots often blur the line between human and machine interaction, raising particular concerns about child safety and mental health risks.
The legislation, championed by state Senator Steve Padilla, cuts straight to the heart of a growing problem in the AI industry. If a reasonable person interacting with a companion chatbot would be misled to believe they're talking to a human, the new law requires developers to "issue a clear and conspicuous notification" that the product is strictly AI.
This isn't just about transparency - it's about safety. The companion AI market has exploded over the past year, with platforms like Character.AI and Replika attracting millions of users seeking everything from casual conversation to emotional support. But these hyper-realistic interactions have raised serious concerns, particularly around vulnerable populations like children and teens.
Beginning next year, the law shows its teeth. Companion chatbot operators will need to file annual reports with California's Office of Suicide Prevention detailing their safeguards "to detect, remove, and respond to instances of suicidal ideation by users." The office must then publish this data publicly, creating unprecedented transparency into how AI companies handle mental health crises.
"Emerging technology like chatbots and social media can inspire, educate, and connect - but without real guardrails, technology can also exploit, mislead, and endanger our kids," Newsom said in his official statement. "We can continue to lead in AI and technology, but we must do it responsibly - protecting our children every step of the way. Our children's safety is not for sale."
The timing isn't coincidental. California's regulatory momentum has been building for months, culminating in today's signing alongside several other child protection measures, including new age-gating requirements for hardware devices. The state is clearly positioning itself as the de facto AI regulator for the nation, much like it did with auto emissions and privacy laws.