Anthropic CEO Amodei Accuses OpenAI of 'Lies' Over Military AI Deal
AI Leaders Clash Over Military Ethics and "Safety Theater"
The simmering rivalry between OpenAI and Anthropic has erupted into a public war of words over military AI ethics. In a scathing internal memo reported by The Information, Anthropic CEO Dario Amodei labeled OpenAI’s public messaging around its new Department of Defense (DoD) contract as "straight up lies" and dismissed its safety measures as "safety theater." This accusation follows OpenAI stepping in to secure a deal after Anthropic abandoned its own $200 million Pentagon contract over fundamental disagreements.
"The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses," Amodei wrote to his staff. The core dispute centers on the Pentagon's demand for unrestricted access to AI systems for "any lawful use." Anthropic insisted on explicit contractual prohibitions against using its Claude AI for domestic mass surveillance or autonomous weaponry.
When the DoD refused, Anthropic walked away. OpenAI then announced its own agreement, with CEO Sam Altman stating it included "technical safeguards" and protections against crossing the same red lines Anthropic had drawn. However, Amodei and critics argue the fundamental legal language—"all lawful purposes"—remains dangerously broad and mirrors the terms Anthropic rejected.
The Contractual Divide: "Lawful Use" vs. Explicit Bans
The technical and legal nuance is critical. Anthropic’s position, detailed in a public statement, was that the phrase "any lawful use" was unacceptable because laws can change. What is illegal today, such as certain surveillance practices, could become legal tomorrow, thereby granting the military carte blanche. They demanded explicit, immutable bans written into the contract.
OpenAI’s initial blog post claimed its contract allowed use for "all lawful purposes," but asserted the DoD had clarified it considered mass domestic surveillance illegal and was not planning such use. Following significant backlash, Altman later amended the public language, adding that the AI "shall not be intentionally used for domestic surveillance of U.S. persons." He also admitted the deal "was definitely rushed, and the optics don’t look good."
This post-hoc clarification failed to satisfy many. OpenAI researcher Noam Brown noted the original language left "legitimate questions unanswered" regarding AI-enabled surveillance. While he acknowledged that the amendment addressed those questions, he stressed that "the world should not have to rely on trust in AI labs or intelligence agencies for their safety and security."
Employee Dissent and Market Backlash
The controversy has ignited internal dissent at OpenAI and palpable user anger. Employees from OpenAI and Google signed a letter in solidarity with Anthropic’s stance, pushing their executives to resist Pentagon "pressure." Meanwhile, the public reaction was swift and severe.
According to TechCrunch, ChatGPT uninstalls surged by 295% following OpenAI's deal announcement. Amodei noted in his memo that Anthropic’s Claude app had risen to #2 in the App Store, suggesting users were voting with their feet. "I think this attempted spin/gaslighting is not working very well on the general public or the media," he wrote.
The backlash was compounded by geopolitical timing. As reported by CNBC and Yahoo Finance, OpenAI’s deal was announced just hours before the U.S. and Israel launched strikes in Iran. Critics immediately connected the dots, questioning if AI was being used for target selection—a claim given credence by reports that Anthropic’s Claude had been used in a prior operation to capture Venezuelan president Nicolás Maduro.
Broader Implications: A Schism in AI Governance
This clash represents more than corporate rivalry; it highlights a fundamental schism in how leading AI companies approach governance, ethics, and government partnerships. Anthropic is positioning itself as the principled abstainer, willing to forfeit massive government contracts to maintain strict control over its technology's application.
OpenAI, meanwhile, is navigating a path of engagement with safeguards. Company officials, as reported by Axios, say they want researchers with security clearances to track usage and advise on risks, and are seeking technical safeguards such as confining models to cloud environments rather than edge devices like weapons. However, a source told Axios these proposals could face the same resistance Anthropic did: being seen as too much private-company control over government work.
The Pentagon, for its part, has pushed back fiercely. Pentagon official Emil Michael reportedly denounced Amodei as a "liar" with a "God complex" who was "putting our nation's safety at risk." Yet many in Silicon Valley and Washington, D.C. praised Anthropic's stand.
What Comes Next: Scrutiny, Safeguards, and Strategy
The fallout is far from over. OpenAI is now in damage control mode, amending its contract language and pausing deployment to intelligence agencies like the NSA to allow time for democratic scrutiny. The incident has exposed a critical vulnerability for AI firms: the perception of ethical compromise can trigger immediate user revolt and talent disaffection.
For the industry, this episode sets a precedent. It forces every AI company to clearly define its red lines for military and government use and decide how hard it will fight for them. The market has shown that a sizable segment of users will reward perceived ethical integrity, potentially reshaping competitive dynamics.
Ultimately, the "why it matters" is clear. This isn't just about one contract; it's about who sets the rules for the most powerful technology of our time. As these tools become integrated into national security, the conflict between corporate ethics, government demands, and public trust will only intensify, with the Amodei-Altman feud marking its first major public battle.