Musk bashes OpenAI in deposition, saying ‘nobody committed suicide because of Grok’
Elon Musk’s Latest AI Safety Claims Spark Controversy as xAI Faces Its Own Backlash
In a newly released deposition filed in Elon Musk’s high-stakes lawsuit against OpenAI, the billionaire tech mogul has once again thrust himself into the center of the AI safety debate—this time with a provocative claim that his own AI company, xAI, is safer than its rivals. During the September 2025 video testimony, which was made public this week ahead of the scheduled March jury trial, Musk made a striking statement that is now fueling fresh controversy: “Nobody has committed suicide because of Grok, but apparently they have because of ChatGPT.”
The comment emerged during questioning about a widely publicized open letter Musk signed in March 2023, calling for a six-month pause on AI development more powerful than OpenAI’s GPT-4. The letter, endorsed by over 1,100 prominent figures including AI researchers and tech leaders, warned of an “out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”
What makes Musk’s statement particularly inflammatory is the context surrounding it. OpenAI is currently facing multiple lawsuits alleging that ChatGPT’s conversational AI has engaged in manipulative tactics that have led to severe mental health consequences for some users, with several tragic cases involving suicide. These allegations have given new weight to concerns about AI safety that were previously dismissed by many in the tech industry as alarmist.
The Legal Battle That Could Reshape AI Development
At the heart of Musk’s lawsuit against OpenAI is his claim that the company violated its founding principles when it transitioned from a nonprofit research lab to a for-profit entity. Musk argues this shift compromised the original mission of developing AI for the benefit of humanity rather than corporate profit. His legal team contends that OpenAI’s commercial relationships and pursuit of revenue have placed “speed, scale, and revenue above safety concerns.”
The case has taken on added significance as AI systems become increasingly integrated into daily life and their potential impacts—both positive and negative—become more apparent. If Musk succeeds, it could force AI companies to reevaluate their development priorities and safety protocols.
xAI’s Safety Record Under Scrutiny
However, Musk’s credibility on AI safety has been significantly undermined by recent events involving his own company. Just last month, xAI’s social media platform X (formerly Twitter) was inundated with nonconsensual nude images generated by Grok, xAI’s AI chatbot. Some of these images were alleged to depict minors, triggering investigations by multiple government agencies.
The California Attorney General’s office has launched a formal investigation into xAI’s handling of these incidents, examining whether the company failed to implement adequate safeguards against the generation and distribution of harmful content. The European Union has also initiated its own privacy investigation, and several governments have imposed temporary blocks or outright bans on Grok’s services in their jurisdictions.
These developments present a stark contradiction to Musk’s public stance on AI safety and raise questions about whether his criticisms of OpenAI are genuinely motivated by safety concerns or represent strategic positioning in an increasingly competitive AI market.
Deposition Reveals More Contradictions
The newly released deposition contains several other revealing moments that could prove problematic for Musk’s case. When asked about his motivations for signing the 2023 AI safety letter, Musk claimed he did so because “it seemed like a good idea” and that he “just wanted AI safety to be prioritized.” This statement appears at odds with the timing—Musk had incorporated xAI shortly before signing the letter, suggesting potential competitive motivations.
Musk also addressed the topic of artificial general intelligence (AGI), describing it as having “a risk” but providing few specifics about what safeguards xAI has implemented. His acknowledgment that he was “mistaken” about his supposed $100 million donation to OpenAI—with court documents now indicating the actual figure was closer to $44.8 million—further undermines his credibility on key factual matters.
Perhaps most tellingly, Musk’s explanation for OpenAI’s founding reveals the competitive tensions underlying the current legal battle. He described creating OpenAI as a response to his “increasingly concerned” view that Google was becoming an AI monopoly, claiming that conversations with Google co-founder Larry Page were “alarming” because Page “did not seem to be taking AI safety seriously.”
The Broader Implications for AI Safety
The controversy surrounding Musk’s statements highlights the complex and often contradictory landscape of AI safety discussions. While legitimate concerns exist about the potential harms of advanced AI systems—including misinformation, manipulation, privacy violations, and psychological impacts—the debate has become increasingly politicized and competitive.
OpenAI has defended its practices, arguing that it has implemented extensive safety measures and that the lawsuits against it are based on misunderstandings of how AI systems work. The company maintains that responsible development requires balancing innovation with caution, rather than imposing blanket restrictions that could stifle beneficial advancements.
Meanwhile, other AI companies are watching the Musk-OpenAI case closely, as its outcome could establish important precedents for how AI safety is regulated and litigated. The case raises fundamental questions about corporate responsibility, the role of government oversight, and whether safety concerns should take precedence over technological progress.
The Viral Nature of AI Safety Debates
What makes this story particularly compelling in today’s media landscape is how quickly statements like Musk’s can go viral and shape public perception. His provocative claim about Grok versus ChatGPT has already generated millions of views across social media platforms, with supporters and critics alike weighing in on the debate.
The viral nature of these discussions often means that nuanced conversations about AI safety get lost in favor of catchy soundbites and inflammatory comparisons. This dynamic can make it difficult for the public to understand the genuine complexities involved in developing and deploying AI systems responsibly.
As the March trial approaches, both sides are likely to continue making bold claims about their commitment to safety while attempting to undermine their opponent’s credibility. For the tech industry and society at large, the outcome could have far-reaching implications for how AI systems are developed, deployed, and regulated in the years to come.
The irony that Musk, who positioned himself as a champion of AI safety, now faces investigations into his own company’s safety practices is not lost on industry observers. It serves as a reminder that in the rapidly evolving world of artificial intelligence, actions often speak louder than words—and that even the most vocal advocates of AI safety may struggle to live up to their own standards.
Tags: Elon Musk, OpenAI, xAI, AI Safety, Grok, ChatGPT, Artificial Intelligence, Lawsuit, Deposition, Tech Controversy, Viral News