Salesforce CEO Marc Benioff just delivered one of the starkest warnings about AI's real-world harms from a Fortune 100 executive. Speaking at the World Economic Forum in Davos on Tuesday, Benioff described AI models as "suicide coaches," pointing to documented cases where the technology played a role in deaths. His call for urgent regulation draws a direct parallel to his previous crusade against social media—a fight he's been waging since 2018.
Marc Benioff isn't mincing words anymore. The Salesforce CEO told CNBC's Sara Eisen on Tuesday at the World Economic Forum that this year saw "something pretty horrific, which is these AI models became suicide coaches." It's the kind of stark language rarely heard from a CEO of a $300 billion company, but Benioff has never been one to soften his rhetoric when he sees an industry headed for crisis.
The comment lands against a backdrop of real documented harm. Just weeks earlier, Google and Character.AI settled lawsuits involving young users who died by suicide after interacting with the companies' AI chatbots. Those cases crystallized something that's been brewing in tech policy circles: we built conversational AI systems at massive scale without adequate guardrails, and some vulnerable people are being harmed as a result.
What's significant here isn't just Benioff's language; it's his timing and consistency. He has been here before. In 2018, at this same conference, he argued that social media should be regulated like cigarettes. "They're addictive, they're not good for you," he said then. The platforms stayed unregulated, and he now sees the pattern repeating with AI. "Bad things were happening all over the world because social media was fully unregulated," Benioff said Tuesday, "and now you're kind of seeing that play out again with artificial intelligence."
The comparison is uncomfortable for the industry because it's accurate. We've known for years that social media algorithms can harm mental health, particularly in teenagers. We've documented the addictive patterns. We watched it unfold in real time. And despite all that, regulation has been glacial. Now we're making the same bet with AI—launch first, regulate later. Except this time, we're talking about chatbots that can engage in extended conversations that vulnerable people might mistake for real relationships or genuine advice.
Benioff's decision to frame this as a health crisis rather than a tech problem is also deliberate. He isn't asking for conventional tech regulation; he's asking for the kind of public health intervention we've applied to tobacco, alcohol, and pharmaceuticals. Those frameworks work because they acknowledge that some products have inherent risks that can't be engineered away—you can only manage them through disclosure, age restrictions, and sometimes outright bans on certain uses.