
Anthropic has updated the rules for using the Claude chatbot. What has changed?
Anthropic has announced updates to the usage policy for its products, including the Claude chatbot. The new rules take effect on September 15, 2025.
The changes were announced on the company’s website.
The changes are intended to clarify when the AI can and cannot be used, reflecting user feedback, product development, and regulatory requirements.
Key changes
- Cybersecurity and “agent” tools. Anthropic has added a dedicated section prohibiting the use of Claude to create malware, break into networks, or carry out cyberattacks. At the same time, the company supports legitimate vulnerability testing, but only with the consent of the system owners.
- Political content. Previously, all political material was banned outright. The policy now prohibits only scenarios that could undermine democratic processes, such as voter manipulation or targeted campaigning. Policy research, civic education, and analytical writing are no longer restricted.
- Use by law enforcement agencies. The rules are now clearer: mass surveillance, tracking, profiling, and biometric monitoring remain prohibited, while “back-office” tools and analytical applications are now explicitly permitted.
- High-risk scenarios. In law, finance, and employment, Claude may be used only with additional safeguards, such as human involvement in the decision-making process and clear labeling of AI-generated content. This requirement applies to consumer-facing services, not B2B solutions.
As a reminder, in early August 2025, Anthropic terminated OpenAI’s access to its API due to a violation of its terms of use.
Read more: Anthropic introduces weekly limits for Claude Code due to excessive load and abuse
https://en.ain.ua/2025/08/18/anthropic-has-updated-the-rules-for-using-the-claude/