
Anthropic will allow users to decide whether their data can be used to train AI
Anthropic has updated its terms of service and privacy policy; the changes took effect on August 28, 2025. Users can now choose whether their data may be used to train Claude models and improve Anthropic's safety systems.
The changes apply to all consumer Claude plans (Free, Pro, and Max), including Claude Code.
However, the update does not apply to services covered by commercial terms: Claude for Work, Claude Gov, Claude for Education, and API access (including via Amazon Bedrock and Google Cloud Vertex AI).
If the user opts in, data from new and resumed chats will be retained for up to five years. If the user declines, the standard 30-day retention period continues to apply, and deleted chats will not be used for model training.
Users with existing accounts must choose a setting by September 28, 2025. New users will set their preference during sign-up, and it can be changed at any time in the privacy settings.
Previously, OpenAI and Anthropic tested each other's models and published the results. Claude also recently launched in a test mode for the Chrome browser.
Read more: Anthropic has updated the rules for using the Claude chatbot. What has changed?
https://en.ain.ua/2025/08/29/anthropic-will-allow-users-to-decide-whether-their-data-can-be-used-to-train-ai/