Anthropic Clashes With Pentagon Over AI Use Restrictions

When Anthropic signed a $200 million contract with the Department of Defense last year, it marked a significant milestone for the five-year-old AI startup. The company became the only AI provider to deploy its models on the Pentagon’s classified networks, a technical achievement that underscored its growing importance in the national security landscape. But less than a year later, that relationship has reached a breaking point.

“If any one company doesn’t want to accommodate that, that’s a problem for us. It could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it.” — Emil Michael, Undersecretary of Defense for Research and Engineering

The Terms of Disagreement

The conflict centers on a fundamental question: Should AI companies be allowed to restrict how their technology is used by the military? Anthropic is demanding written assurances that its models will not be deployed for autonomous weapons systems or used to conduct mass surveillance on American citizens. The Defense Department, in turn, is insisting on the right to use the technology for what it describes as “all lawful use cases” without limitation.

Negotiations have been ongoing since February, but according to Emil Michael, who spoke at a summit in Florida this week, the talks have “hit a snag.” The Pentagon is now reviewing its entire relationship with Anthropic, a process that could have significant implications for both parties.

The stakes are higher than a single contract. If Anthropic refuses to budge, the Defense Department could designate the company as a “supply chain risk”—a classification typically reserved for foreign adversaries like Chinese technology firms. Such a designation would require all Pentagon vendors and contractors to certify that they do not use Anthropic’s models, effectively cutting the company off from a significant portion of the federal procurement ecosystem.

A Pattern of Principled Resistance

This is not the first time Anthropic has found itself at odds with the federal government. The company has maintained a consistent stance on AI safety and responsible use since its founding in 2021 by former OpenAI researchers. That posture has increasingly put it in conflict with the Trump administration, which has taken a more permissive approach to AI regulation.

David Sacks, the administration’s AI and crypto czar, has publicly accused Anthropic of promoting “woke AI” because of its advocacy for stronger oversight. The criticism has done little to change the company’s position. In a statement to CNBC, an Anthropic spokesperson said the company remains committed to “using frontier AI in support of U.S. national security” while emphasizing that it is having “productive conversations, in good faith” with the Defense Department.

“Anthropic wants assurance that its models will not be used for autonomous weapons or to spy on Americans en masse.” — Axios Report

The Competitive Landscape

Anthropic’s rivals have taken a different approach. OpenAI, Google, and xAI were all awarded similar $200 million Defense Department contracts last year. According to a senior Pentagon official who spoke on condition of anonymity, those companies have agreed to let the military use their models for all lawful purposes within unclassified systems. One company has gone further, granting permission across “all systems”—including classified networks.

This divergence highlights a growing strategic split in the AI industry. On one side are companies that view military contracts as a significant revenue opportunity and are willing to accept broad usage terms to secure them. On the other are firms like Anthropic that see their technology as potentially dangerous enough to warrant contractual guardrails, even at the cost of government business.

The financial calculus is complex. Anthropic recently closed a $30 billion funding round that valued the company at $380 billion—more than double its September valuation. That fundraising success suggests investors are not overly concerned about the potential loss of Defense Department revenue. But the supply chain risk designation, if applied, could have cascading effects across the company’s commercial relationships.

What Comes Next

The outcome of this standoff will likely set precedents that extend far beyond Anthropic. As AI capabilities continue to advance, more companies will face similar decisions about whether—and under what conditions—to work with military and intelligence agencies. The Pentagon’s position is clear: it wants the flexibility to use AI tools as it sees fit. Anthropic’s position is equally firm: it will not provide technology without safeguards.

For now, both sides appear to be holding their ground. The Defense Department is conducting its review. Anthropic is continuing to engage in negotiations. And the broader AI industry is watching closely, aware that whatever happens next could reshape the relationship between Silicon Valley and the national security establishment for years to come.


This article was reported by the ArtificialDaily editorial team. For more information, visit CNBC.

By Arthur
