OpenAI CEO Sam Altman was fired for ‘outright lying,’ says former board member


A former OpenAI board member has explained why directors made the now-infamous decision to fire CEO Sam Altman last November. Speaking in an interview on The TED AI Show podcast, AI researcher Helen Toner accused Altman of lying to and obstructing the OpenAI board, retaliating against those who criticized him, and creating a “toxic atmosphere.”

“The (OpenAI) board of directors is a non-profit board that was created explicitly with the goal of ensuring that the company’s public good mission is paramount – it comes before profits, investor interests, and other things,” Toner told The TED AI Show host Bilawal Sidhu. “But for years, Sam had made it very difficult for the board to do that job by, you know, withholding information, misrepresenting things that were happening at the company, and in some cases outright lying to the board.”

OpenAI fired Altman on November 17 of last year, a shock decision that surprised many inside and outside the company. According to Toner, the decision was not taken lightly and required weeks of intense discussions. The secrecy surrounding it was also intentional, she said.

“It was very clear to all of us that as soon as Sam had an inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from, you know, even getting to the point of being able to fire him,” Toner said. “So we were very careful, very deliberate about who we told, which was basically almost no one in advance other than, obviously, our legal team.”

Unfortunately for Toner and the rest of the OpenAI board, their careful planning did not produce the desired result. While Altman was initially ousted, OpenAI quickly rehired him as CEO after days of outcry, accusations, and uncertainty. The company also installed an almost entirely new board of directors, removing those who had tried to remove Altman.

Why did the OpenAI board fire CEO Sam Altman?

Toner did not specifically discuss the aftermath of this tumultuous time on the podcast. However, she explained exactly why the OpenAI board came to the conclusion that Altman needed to go.

Earlier this week, Toner and former board member Tasha McCauley published an opinion piece in The Economist stating that they decided to remove Altman due to “long-standing patterns of behavior.” Toner has now provided examples of this behavior in her interview with Sidhu – including the claim that OpenAI’s own board was not informed of ChatGPT’s release, having only discovered it via social media.

“When ChatGPT was released (in) November 2022, the board was not informed in advance. We heard about ChatGPT on Twitter,” Toner claimed. “Sam did not inform the board that he owned the OpenAI startup fund, even though he consistently claimed to be an independent board member with no financial interest in the company. On several occasions, he gave us inaccurate information about the small number of formal safety processes the company had in place, meaning it was virtually impossible for the board to know how well those safety processes were working or what might need to change.”


Toner also accused Altman of deliberately targeting her after he objected to a research paper she co-authored. Titled “Decoding Intentions: Artificial Intelligence and Costly Signals,” the paper discusses the dangers of AI and includes an analysis of the safety measures of OpenAI and its competitor Anthropic.

However, Altman reportedly considered the academic paper overly critical of OpenAI and complimentary of its rival. Toner told The TED AI Show that after the paper was published in October of last year, Altman began spreading lies to other board members in an attempt to have her removed. This alleged incident only further shook the board’s confidence in him, she said, because they had already been seriously discussing firing Altman by that point.


“[F]or any individual case, Sam could always come up with some sort of seemingly innocuous explanation as to why it wasn’t serious or was misinterpreted or whatever,” Toner said. “But the end result was that after years of this stuff, all four of us who fired him (OpenAI board members Toner, McCauley, Adam D’Angelo, and Ilya Sutskever) came to the conclusion that we just couldn’t believe the things Sam was telling us.

“And that’s a totally unworkable place for a board to be, especially a board that’s supposed to provide independent oversight of the company, not just, you know, help the CEO raise more money. Not being able to trust the word of the CEO, who is your main channel of access to the company, your main source of information about the company, is just totally, totally impossible.”

Toner said the OpenAI board had attempted to address these issues by instituting new policies and processes. Then, however, other executives allegedly began telling the board about their own negative experiences with Altman and the “toxic atmosphere” he created. These accounts included allegations of lying and manipulation, backed by saved screenshots of conversations and other documents.

“They used the phrase ‘psychological abuse,’ telling us that they didn’t think he was the right person to lead the company to (artificial general intelligence), telling us that they didn’t believe he could or would change, that there was no point in giving him feedback, no point in trying to resolve these issues,” Toner said.

OpenAI CEO accused of retaliating against critics

Toner also responded to the outcry from OpenAI employees over Altman’s firing. Many posted messages on social media in support of the ousted CEO, while more than 500 of the company’s 700 employees said they would resign if he was not reinstated. According to Toner, staff were presented with a false dichotomy: unless Altman returned “immediately, without any accountability (and with a) totally new board of his choosing,” OpenAI would be destroyed.

“I understand why many people didn’t want the company destroyed. Whether it was because they were in some cases poised to make a lot of money from the upcoming tender offer, or simply because they loved their team, they didn’t want to lose their jobs, they cared about the work they were doing,” Toner said. “And of course, a lot of people didn’t want the company to fall apart, including us.”

She also claimed that fear of retaliation for opposing Altman may have contributed to the support he received from OpenAI staff.

“They had seen him retaliate against people, retaliate against them for past instances of criticism,” Toner said. “They were really afraid of what might happen to them. So when some employees started saying, ‘Wait, I don’t want the company to fall apart, let’s bring Sam back,’ it was very hard for the people who had been through terrible experiences to say anything, for fear that if Sam stayed in power, as he ultimately did, it would make their lives miserable.”

Finally, Toner highlighted Altman’s checkered work history, which first drew widespread attention after his failed ouster from OpenAI. Pointing to reports that Altman was fired from his previous position at Y Combinator due to alleged self-serving behavior, Toner claimed that OpenAI was far from the only company to have had such issues with him.

“And then at his job before that – which was his only other job in Silicon Valley, his startup Loopt – apparently the management team went to the board twice and asked the board to fire him for what they called ‘deceptive and chaotic behavior,’” Toner continued.

“If you actually look at his track record, he doesn’t exactly have a glowing trail of references. This wasn’t a problem specific to the personalities on the board, as much as he’d like to portray it that way.”

Toner and McCauley are far from the only OpenAI alumni who have expressed doubts about Altman’s leadership. Jan Leike, a senior safety researcher, resigned earlier this month, citing disagreements with management’s priorities and arguing that OpenAI should focus more on issues such as safety, security, and societal impact. (Chief scientist and former board member Sutskever also resigned, though he cited a desire to work on a personal project.)

In response, Altman and President Greg Brockman defended OpenAI’s approach to security. The company also announced this week that Altman will lead OpenAI’s new safety and security team. Meanwhile, Leike joined Anthropic.
