Anonymous Sources Detail Sam Altman’s Alleged Untrustworthiness in New Report
OpenAI’s Controversial CEO: The Inside Story of Sam Altman’s Ousting and Return
In a shocking turn of events that sent ripples through Silicon Valley, OpenAI’s board of directors made the unprecedented decision to fire CEO Sam Altman in late 2023. What followed was a dramatic five-day saga that would become known internally as “the Blip,” drawing comparisons to the Marvel Cinematic Universe’s five-year disappearance of half the world’s population.
According to an explosive investigation by The New Yorker, the board’s decision to remove Altman was not made lightly. The report, based on interviews with dozens of insiders including Altman himself, reveals a pattern of mistrust that extended far beyond OpenAI’s walls.
The board compiled a damning 70-page document detailing Altman’s alleged history of deception, including instances where he reportedly lied about internal safety protocols and even to government officials. One particularly concerning accusation involves Altman telling U.S. intelligence officials that China had launched a major artificial general intelligence (AGI) development project, requesting government funding for a counteroffensive, only to fail to provide evidence when pressed.
This pattern of distrust wasn’t new. Sources claim that at Altman’s previous startup, Loopt, a now-defunct location-sharing service, senior employees asked the board to fire him over concerns about his honesty. The late hacktivist and former Reddit co-owner Aaron Swartz, who was in Altman’s first Y Combinator cohort when Altman joined as an entrepreneur with Loopt, allegedly described him as “a sociopath” who could “never be trusted.”
At OpenAI, the accusations were even more severe. The report details how Altman allegedly gaslit Anthropic co-founder and then-OpenAI employee Dario Amodei over a provision in the billion-dollar Microsoft deal signed in 2019, a deal tied to OpenAI’s shift from a pure non-profit to a for-profit structure. At issue were the AGI clauses Amodei had helped write into the company’s charter, which posited that if another company found a way to build AGI safely, OpenAI would “stop competing with and start assisting this project.”
Even Microsoft executives, with whom OpenAI has had a long partnership since the 2019 deal, reportedly described Altman as someone who “misrepresented, distorted, renegotiated, reneged on agreements.” One senior executive allegedly went so far as to say there’s “a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”
These are alarming words to read about any executive in charge of a company as large and consequential as OpenAI, but they carry even more weight considering that OpenAI is the leading company creating a technology that many, including its early employees, have defined as a possible existential threat to humanity.
Under Sam Altman’s leadership, OpenAI’s technology has worked its way into nearly every corner of modern life. Tens of millions of people around the world use OpenAI’s AI for health advice, and countless others rely on it for everything from automating work across industries to finishing students’ homework to offering murky companionship to lonely people who seek it out. ChatGPT is used throughout the federal government as well, and Altman has also recently sold the technology to the Pentagon.
This is all fueled by Altman’s salesmanship. He has sold the potential, and the purported realities, of ChatGPT so effectively that it has set off an unprecedented and potentially fragile dealmaking spree, one that has attracted so much investment that some experts say it is propping up the entire American economy right now.
The New Yorker report also claims that Altman assured the board that GPT-4 had been approved by a safety panel, an assurance that unraveled when a board member requested documentation of the approvals. In memos to the board, OpenAI co-founder and then-chief scientist Ilya Sutskever claimed that Altman also downplayed the need for safety approvals in conversation with former OpenAI CTO Mira Murati, citing guidance from the company’s general counsel. But when Murati asked the general counsel about it, he said he was “confused where sam got that impression.”
The accusations around ChatGPT’s safety features are particularly damning, considering the fallout of GPT-4o, the iteration of ChatGPT that followed GPT-4. The model’s knack for sycophancy reportedly caused instances of “AI psychosis” in vulnerable users, with some cases ending in fatalities.
Some of Altman’s inconsistencies have been well documented publicly, too. Time and again, the OpenAI chief has made contradictory statements on matters like the merits of putting ads in AI chatbots, the need for AI regulation, and whether the ChatGPT voice feature unveiled in 2024 was inspired by Scarlett Johansson’s performance in the movie “Her.” Altman also drew scrutiny recently over a whopping $100 billion Nvidia deal that did not materialize as initially announced.
The report also details how the company’s culture vastly changed following Altman’s reinstatement as CEO. Before “the Blip,” the company had approached the concept of AGI cautiously, while after, AGI reportedly became a North Star for the company, with slogans like “feel the AGI” seen on merchandise around its offices. The alleged difference was seen in practice, too, as OpenAI disbanded some key teams focusing on chatbot safety, like the existential AI risk team and the superalignment team, which was co-led by Sutskever.
The report comes as Altman’s leadership faces fresh scrutiny while the company prepares for a potential IPO. According to a recent report by The Information, Altman is once again at odds with his executives, this time over OpenAI’s readiness to go public. He reportedly wants an IPO as soon as the fourth quarter of this year and is committing to spend $600 billion over the next five years, despite expectations that OpenAI will burn through more than $200 billion before it starts making money. OpenAI CFO Sarah Friar, by contrast, reportedly does not believe the company is ready to go public this year at all, citing those risky spending commitments: unlike Altman, she is not yet convinced that OpenAI’s revenue growth can support its financial obligations, nor that the company will even need to pour that much money into AI servers.
This internal conflict, combined with the serious allegations about Altman’s leadership style and the potential risks of OpenAI’s technology, raises critical questions about the future of AI development and the companies leading the charge. As OpenAI prepares for what could be the most significant tech IPO in history, the world watches closely to see if Sam Altman’s vision for artificial intelligence will lead to unprecedented technological advancement or if the concerns raised by his own board members will prove prescient.