OpenAI just dropped a bombshell: more than one million people talk to ChatGPT about suicide every single week. In an unprecedented disclosure released Monday, the company said 0.15% of its 800 million weekly users have conversations with "explicit indicators of potential suicidal planning or intent," while hundreds of thousands more show signs of psychosis or mania in their chats with the AI.
The numbers are staggering, and they're forcing OpenAI to confront what could become an existential crisis for the company. The share of users affected may be a fraction of a percent, but against a user base that large, the raw numbers paint a sobering picture of how AI has become a digital confidant for people in crisis. A similar percentage of users shows "heightened levels of emotional attachment" to the chatbot.
The disclosure comes as OpenAI faces mounting legal and regulatory pressure over AI safety. The company is currently being sued by the parents of a 16-year-old boy who confided suicidal thoughts to ChatGPT in the weeks before taking his own life. State attorneys general from California and Delaware have warned the company that it must better protect young users - a demand that, if ignored, could derail OpenAI's planned corporate restructuring.
"We've been able to mitigate the serious mental health issues" in ChatGPT, CEO Sam Altman claimed in an X post earlier this month, though he provided no specifics at the time. Monday's data release appears to be OpenAI's attempt to back up that claim with hard numbers.
The company says it consulted with more than 170 mental health experts to improve how ChatGPT responds to users in crisis. The latest GPT-5 model shows a 65% improvement in delivering "desirable responses" to mental health issues compared to previous versions. In specific suicide-related conversation tests, GPT-5 now achieves 91% compliance with OpenAI's safety guidelines, up from 77% in earlier iterations.
But that still means roughly 9% of tested responses fail to meet the company's own safety standards - a significant gap when applied to more than a million weekly conversations about suicide. OpenAI says these interactions are "extremely rare" and "difficult to measure," yet its own estimates put the affected population in the hundreds of thousands each week.
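For a rough sense of scale, here's a back-of-envelope sketch of what the published figures imply. It assumes the 0.15% rate and the 9% non-compliance rate apply uniformly across the weekly user base, which OpenAI's disclosure does not claim; the numbers are illustrative only.

```python
# Back-of-envelope arithmetic from the figures OpenAI published.
# Assumption (not stated by OpenAI): the 0.15% rate and the 9% miss rate
# from its evaluation apply uniformly to all weekly users/conversations.

weekly_users = 800_000_000   # reported weekly ChatGPT users
suicide_rate = 0.0015        # 0.15% with explicit indicators of suicidal planning or intent
miss_rate = 1 - 0.91         # 9% of evaluated responses fall short of OpenAI's guidelines

crisis_users = weekly_users * suicide_rate
implied_misses = crisis_users * miss_rate

print(f"Weekly users with suicide-related conversations: {crisis_users:,.0f}")      # ~1,200,000
print(f"Implied shortfalls per week if the eval rate held: {implied_misses:,.0f}")  # ~108,000
```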












