Pentagon Gives Anthropic Friday Deadline to Open AI for Military Use

When Defense Secretary Pete Hegseth sat down with Anthropic CEO Dario Amodei on Tuesday afternoon, the meeting carried stakes that extended far beyond a single government contract. What transpired was a high-stakes confrontation between the Pentagon’s vision for AI-enabled warfare and one tech executive’s ethical red lines—a clash that could reshape how artificial intelligence is developed, deployed, and governed in military contexts.

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” — Dario Amodei, Anthropic CEO

The Friday Deadline

According to sources familiar with the meeting, Hegseth delivered a blunt message: Anthropic has until Friday to open its AI technology for unrestricted military use, or the Pentagon will terminate its contract. The ultimatum marks a dramatic escalation in the ongoing tension between the defense establishment and Silicon Valley’s most safety-conscious AI lab.

The stakes are substantial. Last summer, the Pentagon awarded defense contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Of those four, Anthropic remains the last holdout refusing to supply its technology to GenAI.mil, the military’s new internal AI network.

But the Pentagon’s leverage extends beyond contract cancellation. Officials warned they could designate Anthropic a supply chain risk, a label that would severely complicate the company’s ability to work with other government agencies and contractors. More dramatically, they raised the possibility of invoking the Defense Production Act, which could essentially give the military authority to use Anthropic’s products regardless of the company’s approval.

Amodei’s Red Lines

Fully autonomous targeting represents one of Amodei’s firmest boundaries. The Anthropic CEO has repeatedly warned about the dangers of AI systems making lethal decisions without meaningful human oversight. In a January essay, he outlined scenarios where autonomous armed drones could escalate conflicts in ways that human commanders never intended.

Domestic surveillance constitutes the other line Amodei refuses to cross. His concerns center on the potential for AI-powered mass surveillance to track dissent, identify “disloyalty,” and suppress opposition before it can organize. The scenario he described—AI systems monitoring billions of conversations to detect and “stamp out” pockets of resistance—reads like a dystopian warning.

Despite the pressure of the ultimatum, sources described the meeting’s tone as cordial. Amodei listened, explained his position, and held his ground. Neither side budged on its core position.

“Anthropic’s peers, including Meta, Google and xAI, have been willing to comply with the department’s policy on using models for all lawful applications. So the company’s bargaining power here is limited, and it risks losing influence in the department’s push to adopt AI.” — Owen Daniels, Georgetown University’s Center for Security and Emerging Technology

The “Woke AI” Debate

Hegseth’s rhetoric has made clear that this confrontation is about more than contract terms. In a January speech at SpaceX in South Texas, the Defense Secretary declared that the Pentagon would shrug off AI models “that won’t allow you to fight wars.” His vision, he said, means systems that operate “without ideological constraints that limit lawful military applications.” The kicker: “Our AI will not be woke.”

The framing has drawn criticism from AI safety advocates who argue that ethical constraints are not “ideological” obstacles but necessary safeguards. But it has resonated with a political constituency that views tech companies’ safety efforts as excessive interference with national security imperatives.

The timing of Hegseth’s announcements has raised eyebrows. Days after Grok—Elon Musk’s AI chatbot embedded in X—generated global scrutiny for creating sexualized deepfakes without consent, Hegseth announced it would join the Pentagon’s AI network. The message was hard to miss: capability matters more than caution.

Anthropic’s Unique Position

Ironically, Anthropic occupies a privileged position within the defense ecosystem. It was the first AI company approved for classified military networks, where it works with partners like Palantir on sensitive operations. Google, OpenAI, and xAI remain limited to unclassified environments.

This distinction reflects Anthropic’s early engagement with national security concerns. Under the Biden administration, the company volunteered for third-party scrutiny of its AI systems to guard against catastrophic risks. Amodei himself has warned of AI’s potentially existential dangers while rejecting the “doomer” label—arguing that risks should be managed “in a realistic, pragmatic manner.”

The company’s positioning as the “responsible” AI lab has been central to its brand since its founding. When Anthropic’s creators quit OpenAI in 2021 to form their own startup, they explicitly cited safety concerns as a driving motivation. That history makes compromise on military applications particularly fraught.

The Trump Administration Factor

This is not Anthropic’s first clash with the current administration. The company publicly criticized Trump’s proposals to loosen export controls on AI chips to China—a stance that put it at odds with both the White House and its close partner Nvidia.

White House AI adviser David Sacks has been openly critical of Anthropic’s safety advocacy, accusing the company in October of “running a sophisticated regulatory capture strategy based on fear-mongering.” The remark came in response to Anthropic co-founder Jack Clark’s writings about balancing technological optimism with “appropriate fear” about advancing AI capabilities.

Anthropic has attempted to navigate the political landscape by hiring former officials from both parties—including Chris Liddell, a White House veteran from Trump’s first term, who recently joined the board. But the bipartisan approach has not prevented the current confrontation.

Historical Echoes

The Pentagon-Anthropic standoff evokes memories of Project Maven, the Defense Department’s drone surveillance program that sparked protests among tech workers several years ago. Google eventually withdrew from the project under employee pressure, but the Pentagon’s reliance on drone surveillance only grew.

“The use of AI in military contexts is already a reality and it is not going away,” said Owen Daniels of Georgetown’s Center for Security and Emerging Technology. The question is not whether AI will be used for national security purposes, but how—and with what constraints.

For Amos Toh of the Brennan Center’s Liberty and National Security Program, the episode highlights a governance gap. “The law is not keeping up with how quickly the technology is evolving,” he wrote. “But that doesn’t mean DoD has a blank check.”

What Comes Next

With Friday’s deadline approaching, the outcome remains uncertain. Anthropic could capitulate, accepting the Pentagon’s terms to preserve its contract and market position. It could hold firm, risking contract termination and a supply chain risk designation. Or some middle path might emerge—negotiated constraints that satisfy neither side fully but avert a total rupture.

Whatever happens, the confrontation has already made one thing clear: the era of AI companies setting their own terms for military engagement is ending. The Pentagon has drawn its line in the sand, and the industry is being forced to choose sides.

For Amodei and Anthropic, the decision carries implications that extend beyond a single contract. The company’s brand, its relationship with the AI safety community, and its position in an increasingly polarized industry all hang in the balance. The choice between principle and pragmatism has never been more stark—or more consequential.


This article was reported by the ArtificialDaily editorial team. For more information, visit San Diego Union-Tribune and Anthropic.

By Mohsin
