When Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon on Tuesday, the meeting wasn’t just another routine check-in between government and tech. It was an ultimatum. Hegseth gave Amodei until Friday evening to agree to the Defense Department’s terms for using Claude—Anthropic’s powerful AI model—or face consequences that could reshape how the entire AI industry interacts with the military.

“If someone wants to make money from the government, from the US Department of War, those guardrails ought to be tuned for our use cases – so long as they’re lawful.” — Emil Michael, Pentagon Chief Technology Officer

The Friday Deadline

The dispute centers on a fundamental question: who gets to decide how AI is used in military operations? Anthropic, which has built its brand around being the most safety-conscious of the major AI labs, has reportedly resisted allowing Claude to be deployed for mass surveillance or for autonomous weapons systems that could make lethal decisions without human oversight. The Pentagon sees these restrictions as unacceptable roadblocks.

At stake is Anthropic’s $200 million contract with the Defense Department, awarded last summer alongside similar deals to OpenAI, Google, and Elon Musk’s xAI. But the conflict goes beyond money. Hegseth has threatened to designate Anthropic a “supply chain risk” and potentially invoke the Defense Production Act to compel compliance on national security grounds.

The timing is particularly charged. Just weeks ago, reports emerged that the US military had used Claude to assist in the operation that led to the capture of former Venezuelan President Nicolás Maduro in January. The model was reportedly accessed through a contract with Palantir, raising questions about how closely Anthropic can control its technology’s downstream use.

Anthropic’s Red Lines

Autonomous weapons represent one of Anthropic’s stated boundaries.
The company has reportedly drawn a line at AI systems making final targeting decisions without human intervention—a scenario that ethicists have long warned could remove crucial moral accountability from warfare.

Mass surveillance constitutes another red line. Sources familiar with the negotiations told the BBC that Anthropic has resisted allowing its technology to be used for broad domestic surveillance operations, a use case that would raise significant civil liberties concerns.

The Pentagon’s position is that these restrictions amount to unnecessary interference. A senior defense official told the BBC that the current conflict is unrelated to autonomous weapons or mass surveillance, suggesting either that Anthropic’s concerns rest on misunderstandings or that the company is setting boundaries the military has no intention of crossing.

“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.” — Anthropic spokesperson

The Industry Divide

The dispute has exposed a growing rift in how AI companies approach military contracts. While Anthropic negotiates over guardrails, its competitors have taken different paths. OpenAI and xAI have reportedly agreed to allow their models to be used for “all lawful purposes,” according to defense officials. Google has also accepted the Pentagon’s terms.

This divergence reflects deeper strategic differences. Anthropic has consistently positioned itself as the safety-first alternative, publishing detailed safety reports and backing political action committees that advocate for stronger AI regulation. CEO Dario Amodei opposed Donald Trump’s 2024 presidential campaign, and the company has hired several former Biden administration staffers—political positioning that reportedly contributed to a pro-Trump venture capital firm backing out of an investment earlier this year.
The Trump administration, meanwhile, has made AI military integration a priority. President Trump has repeatedly vowed that the US will win a global AI arms race, and Hegseth has pushed aggressively to remove barriers between Silicon Valley and the Pentagon.

Beyond the Contract

The outcome of this standoff will likely set precedents that extend far beyond Anthropic. If the Pentagon successfully compels compliance through the Defense Production Act or a supply chain risk designation, other AI companies may face similar pressure to drop usage restrictions. If Anthropic holds its ground and the Pentagon backs down, it could embolden other companies to negotiate harder for ethical boundaries.

The debate is no longer theoretical. Military AI is already deployed in active conflict zones, with semi-autonomous drones operating in Ukraine demonstrating both the capabilities and the risks of AI-enhanced warfare. The question isn’t whether AI will transform military operations—it’s who gets to set the limits on how that transformation happens.

As Friday’s deadline approaches, both sides are calculating their next moves. For Anthropic, the choice is between principle and a major revenue stream. For the Pentagon, it’s between getting the AI capabilities it wants and accepting that even defense contractors might have lines they won’t cross.

This article was reported by the ArtificialDaily editorial team. For more information, visit The Guardian, BBC News, and The New York Times.