By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, mirroring how OpenAI shields its enterprise customers from data training policies.
So why is this happening? In its post announcing the update, Anthropic frames the change around user choice, saying that by not opting out, users will “help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations.” Users will “also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users.”
In short, help us help you. But the full truth is probably a little less selfless.
Put another way, like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should offer exactly the kind of real-world content that can sharpen Anthropic’s competitive position against rivals such as Google’s Gemini and OpenAI.
Beyond the competitive pressures of AI development, the changes also seem to reflect broader industry shifts in data policies, as companies such as Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely.
The stakes for user awareness are rising fast. Privacy experts argue that AI’s complexity makes true informed consent nearly impossible. Under the Biden administration, the Federal Trade Commission cautioned that AI companies could face enforcement if they “surreptitiously” alter terms, hide disclosures in fine print, or bury them in legalese and hyperlinks, TechCrunch reported.