Pentagon Ultimatum: Anthropic Must Cede AI Control or Risk Defense Contracts

5 min read · 2/26/2026
Tags: Anthropic, Pentagon, ultimatum, classified

Pentagon Escalates Standoff Over AI Guardrails

In a dramatic escalation of a simmering conflict, the Pentagon has delivered an ultimatum to Anthropic, the safety-focused artificial intelligence company. The demand: grant the U.S. military unrestricted access to its Claude model on classified systems by 5:01 p.m. on Friday, February 28, 2026. Failure to comply carries severe consequences, including being labeled a supply chain risk or having the Defense Production Act (DPA) invoked against the company.

The confrontation, first reported by The New York Times and confirmed by CBS News and BBC, pits core principles of responsible AI development against the Pentagon's operational requirements. At stake is Anthropic's unique position as the only AI company currently operating on the Department of Defense's classified networks, a status earned through its specialized Claude Gov model and a partnership with data analytics firm Palantir.

A Stark Choice: Comply or Be Compelled

According to senior Pentagon officials and sources briefed on a Tuesday meeting between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, the company faces two distinct threats. The first is designation as a supply chain risk, a label typically reserved for foreign adversaries, which would jeopardize all of Anthropic's government contracts.

The second threat is invocation of the Defense Production Act. This Cold War-era law grants the president authority to compel private companies to prioritize production for national defense. It was most recently used during the COVID-19 pandemic to order ventilator and mask production; invoking it here would force Anthropic to tailor its model to military specifications. As one source told TechCrunch, this represents a "serious game of chicken."

The Core Conflict: Who Controls the 'On' Switch?

The dispute hinges on a fundamental question of control. Anthropic, which has built its brand on a safety-first ethos, continues to demand assurances that its models will not be used for autonomous weapons programs or mass surveillance. The company's supporters argue it is being punished for being the first AI company on classified systems and for creating a model without the standard commercial guardrails.

The Pentagon's position, articulated by officials like Emil Michael, is that lawful use of the technology is solely the government's responsibility. Officials contend they cannot allow every contractor to dictate specific use cases for equipment they sell, asserting that "lawful use must be the only constraint." This principle clashes directly with Anthropic's foundational safety protocols.


Anthropic's Strategic Value and Pentagon's Backup Plans

Complicating the Pentagon's hardline stance is Anthropic's reported technical superiority. A senior Pentagon official confirmed to The New York Times that Claude is considered a superior product to rivals, regularly yielding more accurate information. This technical edge makes it a valuable asset for national security applications.

However, the Pentagon is not without alternatives. The same official confirmed an agreement with Elon Musk's xAI to use its Grok model on classified systems. Integration with Palantir's software and classified cloud servers is underway, though it will take time. The Pentagon is also reportedly close to a deal with Google to bring its Gemini model onto the classified network, though that agreement is not yet final.

Broader Market Context and Investor Concerns

This clash occurs within a broader context of Pentagon investment in frontier AI. Last year, the Department of Defense awarded $200 million contracts to Anthropic, OpenAI, Google, and xAI to develop AI capabilities for national security; Anthropic's contract was awarded in July.

The use of coercive federal power has raised alarm beyond the tech sector. Legal and policy experts warn of significant repercussions. "Any reasonable, responsible investor or corporate manager is going to look at this and think the U.S. is no longer a stable place to do business," one analyst told TechCrunch. They argued it attacks "the very core of what makes America such an important hub of global commerce."

A Breach of Trust and Operational Precedent

Observers cited by the BBC suggest the current spat stems from a breach of trust between the two sides. The tension was exacerbated by reports, confirmed by sources to the BBC, that Anthropic's Claude model was used, through the company's contract with Palantir, in the operation that led to the capture of former Venezuelan President Nicolás Maduro in January.

This real-world use case, which potentially crossed ethical lines the company has drawn, highlights the practical dilemma. Furthermore, as reported by Gizmodo, Anthropic recently rolled back aspects of its Responsible Scaling Policy (RSP), though the company insists the change is unrelated to Pentagon negotiations. The timing, however, is conspicuous.

What Comes Next?

According to reports from Reuters and TechCrunch, Anthropic does not plan to ease its usage restrictions, setting the stage for a high-stakes showdown. The deadline looms large. If Anthropic holds firm, the Pentagon must decide whether to follow through on its threats, potentially losing access to what it considers the best AI tool for classified work.

The outcome will set a critical precedent for the entire defense-tech sector, defining the boundaries between corporate ethics, government authority, and the deployment of powerful, dual-use technologies. As Emelia Probasco, a Senior Fellow at Georgetown University's Center for Security and Emerging Technology, told the BBC: "They need to get to a resolution... We owe it to [service members] to figure this out." The resolution, whatever it may be, will reverberate through Silicon Valley and the Pentagon for years to come.