Online Tech Guru
Anthropic has new rules for a more dangerous AI landscape

News Room
Last updated: 15 August 2025 21:07
By News Room 3 Min Read

Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the tweaks made to its weapons policy in the post summarizing its changes, but a comparison between the company’s old usage policy and its new one reveals a notable difference. Though Anthropic previously prohibited the use of Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life,” the updated version expands on this by specifically prohibiting the development of high-yield explosives, along with chemical, biological, radiological, and nuclear (CBRN) weapons.

In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.

The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all kinds of content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that its requirements for all its “high-risk” use cases, which come into play when people use Claude to make recommendations to individuals or customers, only apply to consumer-facing scenarios, not to business use.
