US Military Used Claude AI in Iran Strikes Despite Trump Ban

3/2/2026
Anthropic · Claude · US military · Iran

AI in the Crossfire: Military Reliance Collides with Presidential Ban

In a stark demonstration of how deeply advanced artificial intelligence is now entangled with modern warfare, the US military reportedly used Anthropic’s Claude AI to support a massive joint US-Israel bombardment of Iran that began on February 28, 2026. The operational use came mere hours after President Donald Trump signed an executive order directing all federal agencies to immediately cease using Anthropic’s AI models, denouncing the company as a “Radical Left AI company.”

The sequence of events underscores a profound and escalating conflict between the Pentagon’s operational dependence on cutting-edge AI tools and the ethical frameworks enforced by their creators. According to reports from The Wall Street Journal and Axios, US Central Command (Centcom) systems employed Claude for critical functions including intelligence assessments, target identification, and real-time battlefield scenario simulations during the operation, codenamed “Epic Fury.”

The Flashpoint: Unrestricted Access vs. Constitutional AI

The roots of this confrontation trace back to an earlier incident in January 2026. Reports indicate Claude was used, potentially through a partnership with software firm Palantir, in a US military operation to capture Venezuelan President Nicolás Maduro. Anthropic publicly objected, citing its terms of use, which prohibit using Claude for violent ends, weapons development, or mass surveillance.

This incident triggered a demand from the Pentagon, articulated by Defense Secretary Pete Hegseth, for a new agreement granting the military “full and unrestricted access to all Anthropic’s AI models for every lawful purpose.” Anthropic, a leader in the field alongside OpenAI and Google, refused. The company builds its models on a “Constitutional AI” approach that embeds core democratic and ethical safeguards directly into the models themselves.

Secretary Hegseth escalated the rhetoric, accusing Anthropic of “arrogance and betrayal” in a post on X and stating that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” Following CEO Dario Amodei’s rejection of the Pentagon’s demands, Trump issued his ban on February 28, labeling Anthropic a “national security risk” and a “supply chain risk.”

Operational Reality Trumps Policy Edict

Despite these high-stakes political declarations, the operational machinery of the US military appears to have continued unabated. Sources confirm that commands worldwide, including Centcom, have integrated Claude deeply into their workflows. The tool’s reported use in the Iran strikes, launched on the same day the ban was signed, highlights the practical difficulty of pulling such entrenched technology out of active missions.

This reality presents a significant challenge. AI systems like Claude are not simple plug-and-play applications but are woven into complex intelligence and decision-support architectures. Their use for simulating battle scenarios and processing vast amounts of data for target identification suggests a level of dependency that cannot be switched off with an immediate order, creating a stark gap between policy intent and operational capability.

A Broader Clash of Philosophies

This incident is more than a contractual dispute; it represents a fundamental philosophical clash over the governance of powerful AI. On one side is a national security apparatus seeking to leverage every technological advantage without perceived ideological constraints. On the other is a private sector company attempting to enforce ethical guardrails it views as critical to responsible AI development.

The US military’s reported continued use of Claude post-ban raises critical questions about command autonomy, the enforceability of such executive orders during active conflicts, and the long-term viability of relying on commercial AI systems whose corporate policies may conflict with national security objectives. It also spotlights the growing role of companies like Palantir as intermediaries between AI developers and government end-users.

Implications for the Future of Military AI

The aftermath of this event will likely shape the landscape of military AI for years to come. The Pentagon may accelerate efforts to develop in-house, purpose-built AI systems to avoid similar vendor conflicts. At the same time, AI companies may face increased pressure to offer more flexible licensing terms or bespoke “government editions” of their models, potentially diluting their ethical stances.

For the global AI industry, the episode serves as a cautionary tale. As state actors become primary customers for frontier AI technology, companies must balance lucrative contracts against ethical commitments and political pressures. The tension between Anthropic’s “Constitutional AI” and the Pentagon’s demand for unrestricted use defines a new frontline in the debate over who controls the most powerful technologies of our age.

Ultimately, the reported use of Claude in Iran, despite a direct presidential prohibition, reveals a simple truth: once a capability is proven operationally indispensable, removing it from the battlefield becomes a logistical and tactical challenge of the highest order, often overriding even the most forceful political directives.