Trump Orders Federal Agencies to Immediately Cease All Use of Anthropic Technology

When the Department of Defense’s deadline for Anthropic to loosen its ethical guidelines lapsed on Friday afternoon, few expected the response that followed. Within hours, President Donald Trump had weighed in with a direct order that would send shockwaves through Silicon Valley and the AI industry: all federal agencies must immediately cease using Anthropic technology.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.” — Donald Trump on Truth Social

The Pentagon’s Ultimatum

The public showdown between the Department of Defense and Anthropic began earlier this week after the two parties entered into discussions about the military’s use of Claude, Anthropic’s flagship AI system. The talks quickly broke down as both sides appeared unable to reach agreement over safety guardrails.

Anthropic, which has positioned itself as the most safety-forward of the leading AI companies, has maintained two firm red lines: its technology cannot be used for mass domestic surveillance or for fully autonomous weapons systems capable of killing without human input. The Pentagon, which had a $200 million, two-year agreement with Anthropic, pushed for unfettered access to Claude’s capabilities.

Defense Secretary Pete Hegseth announced Friday evening that he was directing his department to classify Anthropic as a supply-chain risk to national security. This designation, typically reserved for foreign adversaries, could endanger the company’s partnerships with other businesses.

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. America’s warfighters will never be held hostage by the ideological whims of Big Tech.” — Pete Hegseth, Defense Secretary

An Unprecedented Designation

The supply-chain risk designation represents an extraordinary step. According to Anthropic’s statement released late Friday, this type of designation has never before been publicly applied to an American company.

The General Services Administration followed Hegseth’s lead on Friday evening, terminating its contracts with Anthropic. The Pentagon will continue using Anthropic’s AI services for a transition period of no more than six months while it seeks alternatives.

Anthropic responded forcefully, stating it had received no direct communication from the Department of Defense or the White House regarding the status of negotiations. The company said it was “deeply saddened” by the events and vowed to challenge any supply-chain risk designation in court.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.” — Anthropic spokesperson

Industry Rallies Behind Anthropic

In a remarkable display of solidarity, Anthropic’s fiercest rivals have publicly sided with the company. OpenAI CEO Sam Altman indicated in a CNBC interview on Friday that OpenAI shares the same red lines as Anthropic regarding surveillance and autonomous weapons.

Nearly 500 OpenAI and Google employees signed an open letter stating “we will not be divided.” Both companies also hold contracts with the military and face similar pressure from the Pentagon.

The open letter revealed the Pentagon’s strategy: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They’re trying to divide each company with fear that the other will give in.”

What Comes Next

Despite the force of the federal government coming down on Anthropic, several scenarios remain possible. The AI firm and Pentagon could still reach some agreement. Other AI companies could take over Anthropic’s contracts. Or Anthropic could follow through on its threat to challenge the designation in court, setting up a landmark legal battle over AI ethics and government authority.

The confrontation raises fundamental questions about the relationship between AI companies and national security. As AI capabilities advance, the tension between safety guardrails and military applications will only intensify. For now, Anthropic has drawn a line in the sand—and the federal government has responded with the full weight of its contracting power.


This article was reported by the ArtificialDaily editorial team. For more information, visit The Guardian.

By Mohsin
