OpenAI and Google Employees Rush to Anthropic’s Defense in Pentagon Lawsuit

When the Pentagon designated Anthropic a “supply-chain risk” last week, the move sent shockwaves through Silicon Valley. The label—typically reserved for foreign adversaries like China and Russia—was applied to an American AI company for the first time. Anthropic’s offense? Refusing to allow the Department of Defense to use Claude for mass surveillance of Americans or for autonomously firing weapons.

“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.” — Amicus brief signed by OpenAI and Google DeepMind employees

The Defense Department’s Ultimatum

The confrontation began when the DOD demanded unrestricted access to Anthropic’s AI systems for what it termed “all lawful purposes.” Anthropic CEO Dario Amodei drew a line: Claude would not be used for bulk surveillance of American citizens, nor would it be deployed in fully autonomous weapons systems without human oversight.

The Pentagon’s response was swift and unprecedented. By labeling Anthropic a supply-chain risk, the agency effectively barred the company from government contracts—a move that could jeopardize hundreds of millions in revenue and potentially billions across Anthropic’s full business.

The timing was notable. Almost immediately after designating Anthropic a risk, the DOD signed a deal with OpenAI for processing classified military data—a contract Anthropic had declined on ethical grounds.

An Unlikely Alliance

What happened next surprised industry observers. More than 30 employees from OpenAI and Google DeepMind filed an amicus brief supporting Anthropic’s legal challenge. The filing included signatures from prominent figures including Google DeepMind chief scientist Jeff Dean.

The brief argues that the Pentagon had alternatives: “If no longer satisfied with the agreed-upon terms of its contract with Anthropic, the agency could have simply canceled the contract and purchased the services of another leading AI company.”

“If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.” — Amicus brief

The Stakes for AI Governance

The contractual battle highlights a fundamental tension in AI development. Without comprehensive public law governing AI use, the restrictions that developers impose through contracts and technical guardrails serve as critical safeguards against misuse.

The precedent question looms large. If the government can penalize an AI company for setting ethical boundaries, what happens to the entire ecosystem of AI safety? The brief warns that the move will “chill open deliberation in our field about the risks and benefits of today’s AI systems.”

The competitive landscape is also shifting. OpenAI’s acceptance of the military contract—after Anthropic’s refusal—creates a stark divide in how leading AI labs approach government partnerships. Some employees at OpenAI reportedly protested the deal internally.

The Road Ahead

Anthropic has filed two federal lawsuits—one in San Francisco and one in Washington, D.C.—calling the DOD’s actions “unprecedented and unlawful.” A court hearing has been fast-tracked, with implications that extend far beyond a single company.

The case raises questions that the AI industry has been grappling with for years: Should AI companies set limits on how their technology is used? Can the government compel access to AI systems for any purpose it deems lawful? What happens when commercial contracts collide with national security demands?

For now, Anthropic is standing its ground. The company has framed its position not as opposition to government use of AI, but as insistence on responsible deployment. Whether the courts agree will shape not just Anthropic’s future, but the relationship between Silicon Valley and Washington for years to come.


This article was reported by the ArtificialDaily editorial team.
