In a major reversal, Anthropic is abandoning its privacy-first stance and will begin using Claude user conversations to train its AI models starting October 8. The company, which previously stood apart by not mining user chats, is now making training on that data the default - forcing millions of consumer Claude users to actively opt out if they want to keep their conversations out of the training pipeline.
Anthropic just pulled the rug out from under privacy-conscious AI users. The company announced it's flipping Claude's data policy on its head, moving from a no-training default to automatically harvesting user conversations for model improvements. The shift marks a seismic change for the AI company that built its reputation partly on respecting user privacy.
The policy reversal takes effect October 8, pushed back from its original September 28 deadline. "We wanted to give users more time to review this choice and ensure we have a smooth technical transition," Gabby Curtis, an Anthropic spokesperson, told WIRED. That extra time feels more like damage control than genuine consideration.
The company's justification reads like standard Silicon Valley speak: "Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users," according to Anthropic's blog post. Translation - they want the data goldmine that competitors like OpenAI and Google have been mining for years.
Here's the kicker: the toggle for data sharing is switched on by default. Users who clicked through the terms update without reading the fine print are now opted in to having their conversations used for training. It's a classic dark pattern - bury the most consequential choice in a wall of legal text and default to the company-friendly option.
To opt out, users need to navigate to Privacy Settings and turn off the switch under "Help improve Claude." But even this escape hatch has a catch. The new policy covers not just fresh conversations but also any old chats you revisit. Go back to reference an old coding project or conversation, and boom - that historical data becomes fair game for training.
The privacy erosion doesn't stop there. Anthropic also extended its data retention period from 30 days to a sprawling five years for anyone who leaves training enabled; only users who opt out keep the old 30-day window. That's a massive expansion of how long the company sits on personal conversations, creating a treasure trove of user data that could be subject to future policy changes or security breaches.