OpenAI and Google Employees Rush to Anthropic’s Defense in Pentagon Lawsuit

When the U.S. Department of Defense labeled Anthropic a “supply-chain risk” on March 4, 2026, it wasn’t just another bureaucratic designation. It was an unprecedented move against an American AI company—one typically reserved for foreign adversaries like China and Russia. The reason? Anthropic’s CEO Dario Amodei refused to allow Claude to be used for mass surveillance of Americans or fully autonomous weapons without human oversight.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” — Amicus brief signed by OpenAI and Google DeepMind employees

An Unlikely Alliance

What happened next surprised even seasoned industry observers. On March 9, more than 30 employees from OpenAI and Google DeepMind—including chief scientist Jeff Dean—filed a statement supporting Anthropic’s lawsuit. The brief argued that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” it could have “simply canceled the contract and purchased the services of another leading AI company.”

The Defense Department did exactly that, signing a deal with OpenAI within moments of designating Anthropic a supply-chain risk. But instead of celebrating the win, dozens of OpenAI employees protested, signing open letters urging the DOD to withdraw the label and calling on their own leadership to support Anthropic and refuse unilateral use of their AI systems.

The Stakes for AI Ethics

Contractual boundaries have become the front line in a larger battle over AI governance. Without public law to govern AI use, the contractual and technical restrictions developers impose on their systems are a critical safeguard against catastrophic misuse. The amicus brief affirms that Anthropic’s stated red lines—no mass surveillance, no autonomous weapons without human oversight—are legitimate concerns warranting strong guardrails.

The precedent this would set is what has competitors worried. The brief warns that “if allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” More chillingly, it adds that the designation will “chill open deliberation in our field about the risks and benefits of today’s AI systems.”

The implications extend far beyond this single case. Anthropic has filed two federal lawsuits—one in San Francisco and one in Washington, D.C.—calling the DOD’s actions “unprecedented and unlawful.” The company says the designation could jeopardize “hundreds of millions of dollars” in revenue and potentially billions across its full business.

“We’re past the hype cycle now. Companies that can demonstrate real value—measurable, repeatable, scalable value—are the ones that will define the next decade of AI.” — Industry analyst on the evolving AI landscape

What Comes Next

A court hearing has been fast-tracked, with the tech industry watching closely to see how the judiciary responds to this collision between national security imperatives and corporate AI ethics policies. The outcome will likely shape how AI companies can operate globally and what restrictions they can place on government use of their technology.

For Anthropic, the fight represents both an existential threat and a defining moment. The company has positioned itself as the safety-first alternative in the AI race, and this legal battle puts that commitment to the test. For OpenAI and Google employees who signed the brief, it signals that even fierce competitors can agree on fundamental principles when the stakes are high enough.

The question now is whether the courts will agree that a private company has the right to restrict how its technology is used—or whether national security concerns will override those boundaries. Whatever the decision, it will reverberate through boardrooms and government agencies for years to come.


This article was reported by the ArtificialDaily editorial team.

By Arthur
