Tech workers are crossing company lines to support Anthropic's ethical guardrails on military AI. Employees from Google and OpenAI just signed an open letter backing the AI startup's firm stance against letting the Pentagon use its technology for mass domestic surveillance or fully autonomous weapons - even as Anthropic maintains an existing defense partnership. The unusual show of cross-company solidarity signals growing worker unrest over how AI gets deployed in military contracts.
The fight over military AI just got more complicated. Workers at Google and OpenAI are publicly backing Anthropic's decision to draw hard ethical lines around its Pentagon partnership, creating an unusual alliance across three of AI's biggest players.
Anthropic has been walking a tightrope - working with the Department of Defense while refusing to let its Claude AI power mass domestic surveillance programs or fully autonomous weapons systems. Now employees from competing AI labs are saying that's exactly the model the industry needs.
The open letter represents something rare in tech: workers at rival companies finding common cause on ethics before their employers do. It's the kind of cross-company organizing that hasn't been seen since the 2018 protests over Google's Project Maven, when thousands of employees successfully pressured the company to drop its military image recognition contract.
But this time the dynamic is different. Instead of protesting their own companies, Google and OpenAI employees are holding up a competitor's stance as the standard. The message to their own leadership is clear: if Anthropic can set boundaries with the Pentagon, why can't we?
Anthropic CEO Dario Amodei has been threading a needle on defense work. The company signed a contract with the Pentagon that gives the department access to Claude for approved national security applications. But Amodei drew bright red lines: no dragnet surveillance operations, no weapons that kill without human oversight.