Hundreds of Google and OpenAI Employees Back Anthropic’s Pentagon Stand

When Defense Secretary Pete Hegseth sat down with Anthropic CEO Dario Amodei, the conversation carried the weight of an industry at a crossroads. The Pentagon wanted unrestricted access to Claude, Anthropic’s AI assistant. Amodei refused. Now, with a Friday deadline set for Anthropic to comply or face consequences, something remarkable is happening across Silicon Valley: employees at rival companies are standing together.

“They’re trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand.” — Open letter signatories

The Red Lines That Started a Standoff

Anthropic’s opposition centers on two specific uses of AI that the company considers beyond the pale: domestic mass surveillance and fully autonomous weaponry. These aren’t abstract concerns. The Pentagon has reportedly been negotiating with Google and OpenAI to bring their technology into classified environments, and Anthropic’s refusal to follow suit has put it in the government’s crosshairs.

Hegseth delivered an ultimatum: concede to unrestricted military use, or face designation as a “supply chain risk” or invocation of the Defense Production Act to force compliance. The threats represent an unprecedented attempt to compel an AI company to abandon its safety guardrails.

The contradiction at the heart of the government’s position hasn’t gone unnoticed. As Amodei pointed out in a statement Thursday, “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

An Unlikely Alliance Across Company Lines

The open letter that emerged this week is remarkable not just for what it says, but for who signed it. More than 300 employees from Google and over 60 from OpenAI—companies that compete fiercely with Anthropic for talent, customers, and market position—have publicly urged their own leaders to stand with their rival.

The signatories are asking for something specific: they want executives at Google and OpenAI to “put aside their differences and stand together” to uphold the same boundaries Anthropic has drawn. The letter calls on both companies to maintain red lines against mass surveillance and fully automated weaponry, regardless of what the Pentagon offers.

“Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.” — Jeff Dean, Google Chief Scientist

Where the Companies Stand

Google DeepMind has not formally addressed the conflict, but Chief Scientist Jeff Dean made his personal position clear on X. His statement against mass surveillance—citing Fourth Amendment concerns and the potential for political misuse—suggests sympathy with Anthropic’s position, even if the company itself has remained silent.

OpenAI has been more direct. In an interview with CNBC Friday morning, CEO Sam Altman said he doesn’t “personally think the Pentagon should be threatening DPA against these companies.” A company spokesperson confirmed to CNN that OpenAI shares Anthropic’s red lines against autonomous weapons and mass surveillance.

The informal alignment is notable. While neither company has formally committed to refusing Pentagon demands, both appear to be watching how the Anthropic standoff resolves before making their own positions explicit.

The Stakes for AI Governance

President Trump escalated the conflict Friday with a post on Truth Social announcing he was “directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology.” The declaration that Anthropic would be designated a supply-chain risk represents a dramatic expansion of the dispute.

For the AI industry, the implications extend far beyond one company. If the government can successfully compel Anthropic to abandon its safety restrictions through regulatory threats, other companies will face similar pressure. The outcome could determine whether AI safety guardrails are negotiable when national security claims are invoked.

The military already uses X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT for unclassified tasks. The current negotiations are about bringing these tools into classified environments—precisely the context where surveillance and weapons applications become most concerning.

What Comes Next

The Friday deadline has passed, but the standoff continues. Anthropic has maintained its position despite the threats, and the employee letter suggests that any company that concedes to Pentagon demands may face internal resistance.

The open letter’s signatories have done something unusual in Silicon Valley’s competitive culture: they’ve put principle above corporate loyalty. Whether their leaders follow suit remains to be seen.

For now, Anthropic stands alone in having drawn a public line in the sand. The question is whether that line holds—and whether anyone else steps up to defend it.


This article was reported by the ArtificialDaily editorial team. For more information, visit TechCrunch and The Wall Street Journal.

By Mohsin
