When AI Companies Go to War, Safety Gets Left Behind

By News Room · Last updated: 6 March 2026 19:48 · 6 Min Read
I’ve spent the past few days asking AI companies to convince me that the prospects for AI safety have not dimmed. Just a few years ago, it seemed that there was universal agreement among companies, legislators, and the general public that serious regulation and oversight of AI was not just necessary, but inevitable. People speculated about international bodies setting rules to ensure that AI would be treated more seriously than other emerging technologies, and that could at least provide obstacles to its most dangerous implementations. Corporations vowed to prioritize safety over competition and profits. While doomers still spun dystopian scenarios, a global consensus was forming to limit AI risks while reaping its benefits.

Events over the last week have delivered a body blow to those hopes, starting with the bitter feud between the Pentagon and Anthropic. All parties agree that the existing contract between the two used to specify—at Anthropic’s insistence—that the Department of Defense (which now tellingly refers to itself as the Department of War) won’t use Anthropic’s Claude AI models for autonomous weapons or mass surveillance of Americans. Now, the Pentagon wants to erase those red lines, and Anthropic’s refusal has not only resulted in the end of its contract, but also prompted Secretary of Defense Pete Hegseth to declare the company a supply-chain risk, a designation that prevents government agencies from doing business with Anthropic. Without getting into the weeds on contract provisions and the personal dynamics between Hegseth and Anthropic CEO Dario Amodei, the bottom line seems to be that the military is determined to resist any limitations on how it uses AI, at least within the bounds of legality—by its own definition.

The bigger question seems to be how we got to the point where releasing killer robot drones and bombs that identify and eliminate human targets wound up in the conversation as something that the US military would even consider. Did I miss the international debate about the merits of creating swarms of lethal autonomous drones scanning warzones, patrolling borders, or watching out for drug smugglers? Hegseth and his supporters complain about the absurdity of private companies limiting what the military can do. I think it’s crazier that it takes a lone company risking existential sanctions to stop a potentially uncontrollable technology. In any case, the lack of international agreements means that every advanced military must use AI in all its forms, simply to keep up with its adversaries. Right now, an AI arms race seems unavoidable.

The risks extend far beyond the military. Overshadowed by the Pentagon drama was a disturbing announcement Anthropic posted on February 24. The company said it was making changes to its system for mitigating catastrophic risks from AI, called the Responsible Scaling Policy. It had been a key founding policy for Anthropic, in which the company promised to tie its AI model release schedule to its safety procedures. The policy stated that models should not be launched without guardrails that prevented worst-case uses. It acted as an internal incentive to make sure that safety wasn’t neglected in the rush to launch advanced technologies. Even more important, Anthropic hoped adopting the policy would inspire or shame other companies to do the same. It called this process the “race to the top.” The expectation was that embodying such principles would help influence industry-wide regulations that set limits on the mayhem that AI could cause.

At first, this approach seemed promising. DeepMind and OpenAI adopted aspects of Anthropic’s framework. More recently, as investment dollars ballooned, competition between the AI labs increased, and the prospect of federal regulation began looking more remote, Anthropic conceded that its Responsible Scaling Policy had fallen short. The thresholds did not create the consensus about the risks of AI that the company had hoped they would. As the company noted in a blog post, “The policy environment has shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”

Meanwhile, the competition between AI companies has gotten more cutthroat. Instead of a race to the top, the AI rivalry seems more like a bareknuckle version of King of the Mountain. When the Pentagon banished Anthropic, OpenAI rushed to fill the gap with its own Department of Defense contract. OpenAI CEO Sam Altman insisted that he entered his hasty deal with the Pentagon to relieve pressure on Anthropic, but Amodei was having none of it. “Sam is trying to undermine our position while appearing to support it,” Amodei said in an internal memo. “He is trying to make it more possible for the admin to punish us by undercutting our public support.” (Amodei later apologized for his tone in the message.)
