OpenAI will amend Defense Department deal to prevent mass surveillance in the US
In a dramatic turn that sent shockwaves through the AI community, OpenAI CEO Sam Altman announced the company will amend its controversial partnership with the U.S. Department of Defense to explicitly prohibit mass surveillance of American citizens.
The firestorm erupted when OpenAI revealed its agreement with the Pentagon just days after President Trump ordered federal agencies to cease using Claude, Anthropic’s AI system. The timing sparked immediate backlash, with critics accusing OpenAI of capitalizing on political tensions while potentially enabling government overreach.
The Memo That Changed Everything
Altman published an internal memo on X (formerly Twitter) addressing employee concerns and outlining the company’s position. In the message, he revealed OpenAI will modify the agreement to include explicit language prohibiting the AI system’s use for domestic surveillance of U.S. persons and nationals.
The proposed amendment states:
“Consistent with applicable laws, including the Fourth Amendment to the United States Constitution, National Security Act of 1947, FISA Act of 1978, the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
Altman emphasized that this limitation would prohibit “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
Standing Firm on Constitutional Principles
In a bold declaration, Altman stated that if presented with what he believed was an unconstitutional order, he would “rather go to jail than follow it.” This uncompromising stance represents a significant moment in the ongoing debate about AI ethics and government surveillance.
The OpenAI CEO also admitted the company “shouldn’t have rushed to get the deal out” on February 27, acknowledging that the issues were “super complex and demand clear communication.” He explained that while OpenAI was “trying to de-escalate things and avoid a much worse outcome,” the execution “looked opportunistic” in retrospect.
The Anthropic Connection
The controversy cannot be separated from the broader context of OpenAI’s relationship with Anthropic. The Pentagon had been pressuring Anthropic to remove guardrails from its Claude AI system, seeking to enable its use for “lawful” purposes including mass surveillance and development of fully autonomous weapons.
Anthropic refused, with the company stating that “no amount of intimidation or punishment” would change its “position on mass domestic surveillance or fully autonomous weapons.” In response, President Trump issued an executive order banning federal agencies from using Anthropic services.
The Defense Department subsequently moved to designate Anthropic as a “supply chain risk” – a label typically reserved for Chinese companies with alleged government ties. Altman, in his memo, reiterated that Anthropic shouldn’t receive this designation and expressed hope that the Pentagon would offer Anthropic the same deal OpenAI agreed to.
Market Fallout and Public Reaction
The controversy triggered immediate market consequences. ChatGPT experienced a staggering 295% increase in daily uninstall rates, according to Sensor Tower data. Meanwhile, Anthropic’s Claude surged to the number one position in the App Store’s Top Free Apps leaderboard, surpassing both ChatGPT and Google Gemini.
Capitalizing on the momentum, Anthropic launched a memory import tool to facilitate switching from other AI chatbots, making it easier for users to migrate their conversation histories.
The Bigger Picture
This incident highlights the growing tension between AI companies, government agencies, and public expectations regarding privacy and surveillance. As AI systems become more powerful and ubiquitous, questions about their appropriate use in national security contexts become increasingly urgent.
The OpenAI-Trump administration dynamic also reveals the complex political landscape tech companies must navigate. Altman’s decision to amend the agreement suggests a recognition that public trust and ethical considerations must take precedence over rapid expansion into government contracts.
Looking Forward
As OpenAI works to revise its Defense Department agreement, the tech industry watches closely. The outcome could set precedents for how AI companies engage with government agencies while maintaining ethical boundaries and public trust.
The incident also raises questions about the future of AI regulation and the role of constitutional protections in an era of advanced artificial intelligence. With both OpenAI and Anthropic taking strong positions on surveillance and autonomous weapons, the industry appears to be establishing ethical red lines even as it pursues lucrative government contracts.
For now, Altman’s willingness to amend the agreement and his commitment to constitutional principles represent a significant moment in AI governance. Whether this will be enough to restore public trust remains to be seen, but it’s clear that the intersection of AI, government, and civil liberties will remain a critical battleground in the years to come.
Tags: OpenAI, Sam Altman, Defense Department, mass surveillance, AI ethics, government contracts, Anthropic, Claude, Trump administration, privacy, Fourth Amendment, FISA, autonomous weapons, tech controversy, App Store rankings, uninstalls surge, AI regulation, constitutional rights
Viral Sentences:
- “I would rather go to jail than follow an unconstitutional order”
- “OpenAI shouldn’t have rushed to get the deal out”
- “No amount of intimidation or punishment will change our position”
- “ChatGPT uninstalls jumped by 295% day-over-day”
- “Claude climbed to number one in the App Store”
- “The issues were super complex and demand clear communication”
- “The company looked opportunistic”
- “AI system shall not be intentionally used for domestic surveillance”
- “Deliberate tracking, surveillance, or monitoring of U.S. persons”
- “The Pentagon had been pressuring Anthropic to remove guardrails”
- “Supply chain risk designation typically reserved for Chinese companies”
- “Trying to de-escalate things and avoid a much worse outcome”
- “Capitalizing on Claude’s sudden popularity”
- “The timing sparked immediate backlash”
- “Questions about AI’s appropriate use in national security contexts”