Anthropic Throws Users’ Data Privacy to the Wolves

“LLM companies are running out of usable training data, which makes retaining all chat logs essential for their further development.”
Image by Nalini Nirad
Anthropic’s latest consumer policy update has left many users feeling more puzzled than reassured. On August 28, the company announced that chats and coding sessions from Claude’s Free, Pro and Max plans may now be used for model training unless individuals explicitly opt out. Yet the process of opting out seems less straightforward than suggested.

At the core of the update is a new five-year data retention rule, replacing the previous 30-day limit for users who allow their data to be used for training. Anthropic argues that this will strengthen safeguards against scams and abuse, while also improving Claude’s coding, analysis and reasoning skills. Yet, because the option is enabled by default, critics worry that many users may not realise what they’re consenting to.

Ankush Das
I am a tech aficionado and a computer science graduate with a keen interest in AI, Coding, Open Source, Global SaaS, and Cloud. Have a tip? Reach out to ankush.das@aimmediahouse.com