OpenAI employees were ordered to stay inside their San Francisco offices Friday after internal communications warned that a former Stop AI activist had expressed interest in "causing physical harm to OpenAI employees." The lockdown highlights escalating tensions between AI companies and activist groups demanding development slowdowns.
OpenAI turned its Mission Bay headquarters into a fortress Friday afternoon after internal communications warned employees about a credible threat from someone once linked to the Stop AI movement. The lockdown began when OpenAI's internal communications team sent an urgent Slack message naming an individual who had "expressed interest in causing physical harm to OpenAI employees" and had "previously been on site at our San Francisco facilities."

Just before 11 AM, San Francisco police fielded a 911 call about threats being made at 550 Terry Francois Boulevard - right next to OpenAI's offices. According to Citizen app data, police scanner recordings described the suspect by name and suggested he may have purchased weapons with plans to target additional OpenAI locations.

The timing couldn't be more pointed. Hours before the alleged threat, the individual had publicly distanced himself from Stop AI in a social media post - a move that now looks like preparation for escalation.

Inside OpenAI's offices, security protocols kicked into high gear. The global security team distributed three photos of the suspected individual while reassuring staff that "there is no indication of active threat activity" but warning that "the situation remains ongoing." Employees were told to ditch their company badges when leaving and avoid wearing anything with the OpenAI logo - turning the AI giant's brand into a potential target marker.

This isn't OpenAI's first rodeo with activist pressure. Over the past two years, groups calling themselves Stop AI, No AGI, and Pause AI have staged increasingly bold demonstrations outside San Francisco AI offices. In February, protestors were arrested for physically blocking OpenAI's front doors. Earlier this month, a Stop AI affiliate made headlines by jumping onstage to serve CEO Sam Altman a subpoena during a public interview.

The alleged threat-maker wasn't just any activist.
According to a Stop AI press release from last year, he served as an organizer and was quoted saying he'd find "life not worth living" if AI replaced humans in scientific discovery and job markets. "Pause AI may be viewed as radical amongst AI people and techies," he said then, "but it is not radical amongst the general public, and neither is stopping AGI development altogether." That quote now reads like a warning shot.

The incident exposes how quickly the AI safety debate has moved from academic conferences to street protests to potential violence. What started as philosophical disagreements about artificial general intelligence timelines has morphed into a security crisis requiring police intervention and office lockdowns.

For OpenAI, this represents more than just a Friday afternoon disruption. The company has positioned itself as the responsible leader in AI development, emphasizing safety measures and gradual rollouts. But as its technology becomes more powerful and publicly visible, it's also becoming a lightning rod for fears about AI's impact on humanity.

The activist community isn't monolithic either. Many AI safety advocates have distanced themselves from increasingly aggressive tactics, worried that extremist actions will discredit legitimate concerns about rushed AI deployment. Friday's events suggest that distinction may be collapsing as tensions escalate.