Anthropic CEO Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies,’ report says
In a scathing internal memo that has now leaked to the public, Anthropic CEO Dario Amodei launched a blistering attack on OpenAI’s recently announced Department of Defense contract, calling it “safety theater” and accusing Sam Altman of misleading the public about the true nature of the agreement.
The feud between these two AI giants has escalated dramatically after Anthropic walked away from negotiations with the Pentagon last week, citing concerns over potential misuse of their technology. Meanwhile, OpenAI moved quickly to secure a lucrative defense contract that has sparked controversy across the tech industry and among consumers.
The Breaking Point: Anthropic’s Stand Against “Any Lawful Use”
The conflict centers on a fundamental disagreement about how AI technology should be governed in military applications. Anthropic, which already had a $200 million contract with the Department of Defense, insisted that the military affirm it would not use their AI for domestic mass surveillance or autonomous weaponry. When the DoD refused to provide these assurances, Anthropic walked away from the negotiating table.
“OpenAI’s deal is nothing more than safety theater,” Amodei wrote in the memo to his staff. “The main reason they accepted the DoD’s deal and we did not is that they cared about placating employees, and we actually cared about preventing abuses.”
OpenAI’s Counter-Move: A Deal With Technical Safeguards
Just days after Anthropic’s withdrawal, OpenAI announced its own agreement with the agency, now rebranded as the Department of War under the Trump administration. Sam Altman took to social media to tout the deal, claiming it included protections addressing the very red lines Anthropic had demanded.
However, Amodei wasn’t buying it. In his memo, he referred to OpenAI’s messaging as “straight up lies” and accused Altman of “falsely presenting himself as a peacemaker and dealmaker.”
The technical details reveal the core of the disagreement. Anthropic specifically took issue with the DoD’s insistence on their AI being available for “any lawful use.” OpenAI stated in their blog post that their contract allows use of their AI systems for “all lawful purposes.”
“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” OpenAI’s blog post stated. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”
The Legal Minefield: Today’s Laws, Tomorrow’s Loopholes
Critics have been quick to point out a key vulnerability in OpenAI’s approach: the law is subject to change. What is illegal today might be permitted tomorrow, creating a dangerous precedent for AI deployment in military contexts.
This legal ambiguity has become a central point of contention between the two companies. Anthropic’s stance suggests a more precautionary approach, while OpenAI appears willing to work within existing legal frameworks, even if those frameworks could shift.
Public Opinion Turns: ChatGPT Uninstall Surge
The public reaction to these competing approaches has been swift and dramatic. Following OpenAI’s announcement of the Pentagon deal, ChatGPT uninstall rates jumped an astonishing 295%, according to recent data.
“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” Amodei wrote in his memo.
This public sentiment shift represents a significant reputational challenge for OpenAI, which had previously enjoyed broad consumer support. The dramatic uninstall numbers suggest that many users are willing to abandon the platform over ethical concerns about military applications.
The Twitter Divide and Employee Concerns
Amodei’s memo reveals particular concern about how OpenAI’s messaging is being received within the tech community. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees,” he wrote.
This internal focus suggests that Amodei sees the battle for public opinion as extending beyond consumers to the very talent pools that both companies need to compete in the rapidly evolving AI landscape.
The Bigger Picture: AI Ethics in the Age of Defense Contracts
This conflict crystallizes a broader question about artificial intelligence in military applications: should AI companies set red lines that go beyond what current law requires, or is compliance with existing legal frameworks enough? Anthropic has staked out the former position; OpenAI, the latter.
The stakes are enormous. As AI systems become increasingly powerful and autonomous, the question of who controls them and how they’re used becomes more critical. Anthropic’s willingness to walk away from a $200 million contract demonstrates a commitment to ethical principles that could define the next phase of AI development.
What’s Next: The Battle for AI’s Future
As these two AI powerhouses continue their public feud, the tech industry watches closely. The outcome of this conflict could set precedents for how AI companies engage with military and government entities in the future.
For now, Anthropic appears to be winning the public relations battle, with their principled stand resonating with consumers and industry observers alike. Whether this translates to long-term competitive advantage remains to be seen, but one thing is clear: the debate over AI ethics and military applications is far from over.
The contrast between these two approaches—Anthropic’s precautionary principle versus OpenAI’s legal pragmatism—may ultimately define how society navigates the complex relationship between cutting-edge technology and national security in the coming years.
Tags: Anthropic, OpenAI, Sam Altman, Dario Amodei, Department of Defense, Pentagon deal, AI ethics, military AI, ChatGPT, tech controversy, safety theater, mass surveillance, autonomous weapons, Trump administration, Department of War, tech industry drama, AI governance, public backlash, app store rankings, Twitter drama, employee morale, legal frameworks, ethical AI, tech competition