OpenAI updates Department of War deal after backlash

OpenAI’s Controversial Department of War Deal: A Timeline of Backlash, Backtracking, and Betrayal

In a stunning turn of events, OpenAI CEO Sam Altman has publicly admitted that the company’s rushed partnership with the U.S. Department of War (DOW) was “opportunistic and sloppy.” The admission comes after a firestorm of criticism from users, privacy advocates, and even competitors like Anthropic, who refused to bow to the DOW’s demands for unchecked AI surveillance and autonomous weapons.

The Rush to Secure a Deal

The controversy began when OpenAI announced its partnership with the DOW late last week, just days after President Donald Trump ordered federal agencies to stop using competitor Anthropic. According to Anthropic CEO Dario Amodei, the split stemmed from Anthropic’s refusal to remove safeguards against using AI for mass domestic surveillance and fully autonomous weapons. The DOW instead wanted to use Anthropic’s AI tools for “any lawful use,” as outlined in the department’s AI strategy document.

OpenAI, however, moved swiftly to fill the void, sparking immediate backlash from its civilian user base. Despite OpenAI’s claims that its deal included even more safeguards than Anthropic’s original agreement, the contract appeared to allow both mass surveillance and AI-controlled weapons as long as such use was legal. The deal even laid out specific circumstances in which these technologies could be deployed.

Damage Control and Amendments

In an internal memo shared on X (formerly Twitter), Altman acknowledged the company’s hasty approach, writing, “We shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

Following the backlash, OpenAI announced it had worked with the DOW to add new language to the contract directly addressing the use of its technology for domestic surveillance. The company stated, “Throughout our discussions, the Department [of War] made clear it shares our commitment to ensuring our tools will not be used for domestic surveillance.”

However, the new amendments continue to rely on legality as the primary restraint, leaving the door open for mass surveillance should the U.S. government change the law. The amendments also fail to address the issue of autonomous weapons, a significant oversight given the ethical concerns surrounding their use.

Public Skepticism and Ethical Concerns

Many social media users reacted to OpenAI’s contract changes with skepticism, arguing that the specific prohibition of “deliberate” surveillance leaves notable loopholes. Political researcher Tyson Brody (@tysonbrody) tweeted, “Hard not to read as admitting to an AI dragnet. ‘Intentionally’ and ‘deliberate’ – so Americans will be swept up in this data, but the government can claim ‘incidental collection’ and thus legal.”

Others pointed out that the term “not intentionally used” isn’t a real safeguard in an autonomous AI system. @Andy_Bloch wrote, “It can wind up doing surveillance because of what it was trained on, what it figures out, or how people use it afterward.”

During a Q&A held shortly after the DOW deal was announced, Altman indicated that OpenAI would limit the use of its AI tools along legal lines only, not ethical ones. The CEO expressed a reluctance to take an ethical stance, saying that OpenAI would rather follow the government’s direction than weigh such issues itself.

Deference to Democratic Processes

Despite criticism of this apparent abdication of responsibility, Altman reiterated this position in his new memo, framing it as deference to “democratic processes.” He wrote, “It should be the government making the key decisions about society. We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty. But we are clear on how the system works (because a lot of people have asked, if I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it).”

Altman did state that DOW intelligence agencies such as the National Security Agency (NSA) won’t use OpenAI’s technology without an amendment to their contract. However, it currently seems unlikely that OpenAI would deny legal requests for such modifications, regardless of any ethical issues that may arise.

User Backlash and Market Impact

Numerous OpenAI customers have cancelled their ChatGPT subscriptions in response to the company’s deal with the DOW, with uninstalls reportedly jumping 295 percent in the wake of the news. Anthropic’s AI chatbot Claude has since dethroned ChatGPT as the most downloaded free app in the U.S. Apple App Store.

Conclusion

OpenAI’s rushed deal with the Department of War has sparked a significant backlash, forcing the company to backtrack and amend its contract. However, the amendments fail to address key ethical concerns, leaving many users skeptical of OpenAI’s commitment to responsible AI development. As the debate over AI ethics and government surveillance continues, it remains to be seen how OpenAI will navigate these complex issues in the future.

Tags: OpenAI, Department of War, Sam Altman, AI Ethics, Mass Surveillance, Autonomous Weapons, ChatGPT, Anthropic, Claude, NSA, Donald Trump, Artificial Intelligence, Tech Controversy, User Backlash, Democratic Processes, Legal vs. Ethical, AI Regulation
