OpenAI employees spotted the red flags months before the Tumbler Ridge school shooting. Jesse Van Rootselaar's disturbing conversations with ChatGPT last June triggered automated safety systems and sparked internal debate about contacting authorities. But company leadership chose not to escalate, determining the threat wasn't "credible and imminent" enough to act on. That decision now sits at the center of an urgent debate about AI companies' responsibility when their systems detect potential violence.
The conversation logs looked like a warning sign nobody wanted to see. Last June, Jesse Van Rootselaar sat down with ChatGPT and described scenarios involving gun violence in disturbing detail. The exchanges were graphic enough to trip OpenAI's automated content moderation system, which flagged them for human review.
Several OpenAI employees who reviewed the conversations grew alarmed. According to The Wall Street Journal, these team members pushed company leadership to notify law enforcement about what they saw as a potential precursor to real-world violence. But after internal discussions, OpenAI's leadership made a call that would later haunt the company: they decided Van Rootselaar's conversations didn't meet the threshold for intervention.
The company's reasoning centered on a specific threshold. Leaders determined the conversations didn't constitute a "credible and imminent risk of serious physical harm to others," the Journal reported. That phrase carries weight in tech industry protocols around user safety, representing the line companies draw between concerning content and actionable threats.
Months later, the mass shooting at Tumbler Ridge Secondary School in British Columbia turned those June conversations into evidence of a missed opportunity. The tragedy has thrust OpenAI into an uncomfortable spotlight, forcing difficult questions about when AI companies should override privacy concerns and alert authorities.