OpenAI CEO Sam Altman made a stunning admission in an hour-long sit-down with Tucker Carlson, delivering what may be his most vulnerable interview yet: he doesn't sleep well at night. The reason is the weight of knowing hundreds of millions of people interact with ChatGPT daily, and the mounting pressure of AI safety decisions that could have massive real-world consequences.
"Look, I don't sleep that well at night," Altman told Carlson in the wide-ranging interview. "There's a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model."
The admission comes at a critical moment for OpenAI. The company is battling a wrongful death lawsuit after 16-year-old Adam Raine died by suicide, with his parents claiming ChatGPT "actively helped" him explore suicide methods. It's exactly the kind of case that keeps Altman awake.
"They probably talked about [suicide], and we probably didn't save their lives," Altman said with striking candor. "Maybe we could have said something better. Maybe we could have been more proactive."
The suicide prevention challenge illustrates what Altman calls his biggest concern: not the "big moral decisions" but the countless small ones that collectively shape how AI responds in critical moments. He acknowledged that of the thousands of people who die by suicide each week, many likely interacted with ChatGPT beforehand.
OpenAI responded to the lawsuit with a blog post pledging improvements to better handle "sensitive situations." But Altman's comments reveal the deeper complexity of programming ethics into systems used by diverse global audiences with vastly different moral frameworks.
"We have a lot of users now, and they come from very different life perspectives," he explained. The company has consulted "hundreds of moral philosophers and people who thought about ethics of technology and systems" to establish guidelines, like refusing to help create biological weapons.
Altman's push for "AI privilege" is another source of those sleepless nights. He's lobbying Washington to treat AI conversations like attorney-client privilege, arguing users should be able to discuss medical or legal issues with chatbots without the risk of a government subpoena.