Anthropic president Daniela Amodei just fired back at the Trump administration's anti-regulation agenda, arguing that safety isn't killing the AI industry - it's actually driving growth. Speaking at WIRED's Big Interview on Thursday, Amodei defended her company's vocal approach to AI risks against criticism from Trump's AI czar David Sacks, who accused Anthropic of "fear-mongering."
The political battle lines over AI regulation are sharpening. Amodei isn't backing down from her company's safety-first approach, even as the Trump administration signals it will block state AI regulations and roll back federal oversight.
"No one says, 'We want a less safe product,'" Amodei told WIRED editor Steven Levy during Thursday's Big Interview event. Her comment came after Trump's newly appointed AI and crypto czar David Sacks tweeted that Anthropic is "running a sophisticated regulatory capture strategy based on fear-mongering."
But Amodei's betting that market forces will prove Sacks wrong. With over 300,000 startups, developers, and companies now using some version of Anthropic's Claude model, she's seeing firsthand how enterprises actually make AI purchasing decisions. They want power, sure - but they want reliability even more.
"We're setting what you can almost think of as minimum safety standards just by what we're putting into the economy," Amodei explained. Companies building critical workflows around AI are asking themselves: "Why would you go with a competitor that is going to score lower on that?"
It's a fascinating market dynamic, and one that mirrors how safety regulations evolved in the auto industry. Amodei compared Anthropic's transparency about model limitations to automakers releasing crash-test videos: shocking at first, but ultimately a selling point that demonstrates a commitment to improvement.
This approach has apparently paid off in recruiting, too. Anthropic has exploded from 200 employees to over 2,000 in just a few years, driven partly by workers drawn to what Amodei calls the company's "genuine" mission. "There's something about the mission and the values and this desire to be honest about both the good and the bad," she said of new hires' motivations.
At the heart of Anthropic's strategy is what it calls "constitutional AI" - training models on ethical principles and documents like the UN Universal Declaration of Human Rights. Rather than just teaching systems what's factually right or wrong, this approach embeds broader ethical reasoning into how AI responds to queries.
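To make the idea concrete, here's a minimal sketch of the self-critique-and-revision loop that constitutional AI is built around, as described in Anthropic's published research. Everything in it is illustrative: the two principles, the `constitutional_revision` function, and the `generate` callable are hypothetical stand-ins, not Anthropic's actual training code.

```python
from typing import Callable

# Illustrative principles. Anthropic's real constitution is much longer and
# draws on documents like the UN Universal Declaration of Human Rights.
CONSTITUTION = [
    "Choose the response that best respects human rights and dignity.",
    "Choose the response that is honest and avoids causing harm.",
]

def constitutional_revision(
    generate: Callable[[str], str],  # stand-in for any text-generation call
    user_prompt: str,
) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft in light of the principle...
        critique = generate(
            f"Principle: {principle}\n"
            f"Prompt: {user_prompt}\n"
            f"Response: {response}\n"
            "Point out any way the response conflicts with the principle."
        )
        # ...then rewrite the draft to address that critique.
        response = generate(
            f"Response: {response}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it no longer conflicts with the principle."
        )
    return response
```

In Anthropic's description of the technique, responses revised this way become fine-tuning data, so the ethical reasoning ends up baked into the model's weights rather than applied as a filter on each query.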