Sam Altman, a major figure in Silicon Valley’s burgeoning AI industry who earlier this year testified before Congress on the dangers of the technology, has been removed as CEO of OpenAI, according to a company statement. The surprise firing has set off a flurry of questions about why a startup currently positioned for a valuation of up to $90 billion would cut ties with its chief executive.
Shortly after news of Altman’s ousting, the company’s president, Greg Brockman, who had stepped down as chairman of the board earlier in the day, announced he was quitting. “[W]e’ve been through some tough & great times together, accomplishing so much despite all the reasons it should have been impossible,” he wrote in a message to the OpenAI team, adding “but based on today’s news, i quit.”
In the press release, OpenAI’s board of directors indicated that Altman had not been truthful with them, leading to a breakdown of trust: “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the statement reads. “The board no longer has confidence in his ability to continue leading OpenAI.”
Altman confirmed the news in a tweet, writing that he “loved” his time at the company and “will have more to say about what’s next later,” signing off with a saluting emoji.
Mira Murati, the company’s chief technology officer, will serve as interim CEO, “effective immediately.” Murati led the development of ChatGPT, OpenAI’s chatbot, which took the world by storm in late 2022.
Altman was among the founders of OpenAI, formed in 2015 as a research nonprofit with funding from the likes of Elon Musk, Peter Thiel and Amazon Web Services. Altman and Musk served as the original board members; Musk left in 2018, with Altman becoming CEO the following year. He supervised the creation of a for-profit OpenAI subsidiary and a partnership with Microsoft, which invested $1 billion at the time.
The company became more of a household name in the past year thanks to ChatGPT and DALL-E, a text-to-image model that allows users to generate digital art from written prompts — both saw widespread use and provoked debate across the internet. Microsoft committed to a further $10 billion investment in January. With OpenAI now a major AI firm capable of rattling even a giant like Google, Altman had become a leading figure of the latest computing revolution. In addition to testifying before the Senate Judiciary Committee in May, he joined other top tech executives to advise the White House on AI regulation.
While some had kind words for Altman on the occasion of his departure from OpenAI — former Google CEO Eric Schmidt in a tweet called him “a hero of mine” who “changed our collective world forever” — others felt vindicated in their view that Altman was never up to the high-profile position. Malcolm Harris, author of Palo Alto, a history of tech in California, reshared an August interview in which he said Altman was “doling out other people’s money […] despite him not doing anything successfully.”
Rumman Chowdhury, a data scientist and AI ethics pioneer profiled in Rolling Stone’s August feature “These Women Tried to Warn Us About AI,” likewise felt that Altman did not belong at the forefront of the AI industry. “Altman’s ousting is a recognition that these powerful companies need mature leadership grounded in the realities of a socio-technical world,” Chowdhury says. “If AI is truly to ‘serve humanity’ as OpenAI’s mission states, leadership must reflect a dedication to those goals.”
Timnit Gebru, a computer scientist who works on algorithmic bias and was interviewed for the same Rolling Stone article, is puzzled by the decision, though no more complimentary of Altman. “I’m not sure what he did that the board would fire him so publicly,” she says. Joy Buolamwini, a researcher of AI harms profiled in the feature alongside Gebru and Chowdhury, spoke with Altman at one of his last events as CEO. “Just ten days ago I was on stage with Sam discussing the future of AI and the need to prioritize the excoded, those who are harmed by AI systems,” she says, already looking toward the future of OpenAI and its competitors: “Mira and all leaders aiming to develop responsible and/or beneficial AI systems cannot ignore growing issues around consent, compensation, creative rights, biometric rights, and civil rights. We must advance the capacities to mitigate AI harms immediately.”
Speculation on the circumstances of Altman’s ouster continued to swirl on Friday afternoon, with guesses ranging from an ugly personal scandal to a hostile takeover to a clash between the CEO and his board over OpenAI’s future direction. Some wondered if Microsoft had any influence over the decision. (Musk, who has been critical of OpenAI’s transition to a for-profit model, argued in February that it is now “effectively controlled” by Microsoft.) The near-simultaneous news that OpenAI had registered its first federal lobbyists to represent its interests in Washington made for an interesting coincidence. And one observer jokingly tweeted that “Sam Altman was actually typing out all the ChatGPT responses himself and the board just found out.”
But so far, no theory has been substantiated, leaving interested parties to mull the reasons for what appeared to be a rushed announcement before the close of markets — and its guarded language. “We are grateful for Sam’s many contributions to the founding and growth of OpenAI,” the company stated. “At the same time, we believe new leadership is necessary as we move forward.”