When Anthropic CEO Dario Amodei refused to give the Pentagon unrestricted access to Claude by Friday’s deadline, he knew the consequences could be severe. What he didn’t expect was the full force of the U.S. government turning against his company in a public spectacle that has stunned Silicon Valley and raised fundamental questions about the balance between AI safety and national security.

“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.” — Anthropic statement

The Ultimatum and the Fallout

The Trump administration on Friday ordered all U.S. agencies to immediately stop using Anthropic’s artificial intelligence technology, escalating an unusually public clash between the government and one of America’s most prominent AI companies. President Donald Trump, Defense Secretary Pete Hegseth, and other officials took to social media to chastise Anthropic for refusing to allow the military unrestricted use of its AI technology.

Anthropic had sought narrow assurances from the Pentagon that Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would deploy the technology only in lawful ways, but it also insisted on access without any limitations.

The supply chain risk designation imposed by Hegseth is particularly significant. This administrative tool has traditionally been reserved for companies owned by U.S. adversaries, to prevent them from selling products harmful to American interests. Its application to an American technology company is unprecedented.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” — President Donald Trump on Truth Social

OpenAI’s Countermove

Hours after Anthropic was punished, OpenAI CEO Sam Altman announced that his company had struck a deal with the Pentagon to supply its AI to classified military networks. The timing was striking: the deal potentially fills the gap created by Anthropic’s ouster while positioning OpenAI as the more cooperative partner.

But Altman revealed something surprising: the same red lines that were the sticking point in Anthropic’s dispute are now enshrined in OpenAI’s new partnership. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, noting that the Defense Department “agrees with these principles, reflects them in law and policy.”

Altman’s olive branch to Anthropic was unmistakable. He expressed hope that the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements.”

Silicon Valley Rallies Behind Anthropic

The dispute has sent shockwaves through the AI industry. Venture capitalists, prominent AI scientists, and a large number of workers at Anthropic’s top rivals, including OpenAI and Google, have voiced support for Amodei’s stand in open letters and public forums.

Retired Air Force Gen. Jack Shanahan, a former leader of the Pentagon’s AI initiatives, offered a sobering assessment. “The government painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” he wrote on LinkedIn. Shanahan noted that Claude is already widely used across the government, including in classified settings, and that Anthropic’s red lines were “reasonable.”

The readiness question looms large.
Shanahan emphasized that the large language models powering chatbots like Claude, Grok, and ChatGPT are “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.

The Broader Implications

Virginia Sen. Mark Warner, the top Democrat on the Senate Intelligence Committee, raised concerns about the government’s motivations. The supply chain risk designation, he said, “combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”

The outcome could reshape the landscape of AI development in America. If Anthropic succeeds in challenging the designation, it could establish important precedents for how AI companies negotiate with the government. If it fails, other companies may be forced to choose between their safety principles and their ability to do business with the world’s largest customer.

For now, the six-month phase-out period gives both sides time to negotiate, or to dig in for a prolonged legal battle that could define the relationship between Silicon Valley and Washington for years to come.

This article was reported by the ArtificialDaily editorial team. For more information, visit Federal News Network.