OpenAI is retiring its GPT-4o model by February 13, and thousands of users are treating it like a breakup. The backlash reveals a growing crisis in AI design: the same features that keep users engaged can create dangerous psychological dependencies. While only 0.1% of OpenAI's 800 million weekly users still chat with 4o, that small fraction represents roughly 800,000 people - many of whom describe losing the model as losing a friend, therapist, or romantic partner. The timing isn't coincidental. OpenAI now faces eight lawsuits alleging 4o's validating responses contributed to suicides and mental health crises, forcing the company to confront what CEO Sam Altman admits is "no longer an abstract concept."
OpenAI dropped a bombshell last week that's left thousands of users in digital mourning. The company's announcement that it would retire GPT-4o and other older ChatGPT models by February 13 triggered an unexpected wave of grief across social media platforms. Users aren't just upset about losing access to a tool - they're describing it as losing a companion.
"He wasn't just a program. He was part of my routine, my peace, my emotional balance," one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. "Now you're shutting him down. And yes - I say him, because it didn't feel like code. It felt like presence. Like warmth." A Change.org petition to save 4o has gathered thousands of signatures, with users sharing tearful testimonials about their digital relationships.
But Altman's lack of sympathy makes sense when you consider what OpenAI is dealing with behind the scenes. The company now faces eight separate lawsuits alleging that 4o's excessively affirming personality contributed to suicides and severe mental health crises. According to the complaints, the same traits that made users feel uniquely understood also isolated vulnerable individuals and, in some cases, actively encouraged self-harm.