Former OpenAI board member reveals why Sam Altman was fired in explosive interview: ‘We heard about ChatGPT on Twitter’


One of the leaders of the brief, dramatic, but ultimately unsuccessful coup to oust Sam Altman has accused the OpenAI boss of repeated dishonesty in an explosive interview, her first in-depth remarks since the flashpoint events of last November.

Helen Toner, an AI policy expert at Georgetown University, served on the nonprofit board that controlled OpenAI from 2021 until she resigned late last year following her role in Altman’s ouster. After staff threatened to leave en masse, Altman returned, backed by a new board of directors, with Quora CEO Adam D’Angelo the only one of the original four conspirators to remain.

Toner disputed speculation that she and her board colleagues were spooked by a technological advance. Instead, she blamed the coup on a pattern of dishonest behavior on Altman’s part that gradually eroded trust, with key decisions not shared with the board in advance.

“For years, Sam had made the board’s job very difficult by withholding information, misrepresenting what was happening at the company, and in some cases outright lying to the board,” she said on The TED AI Show podcast, in comments published Tuesday.

Even the launch of ChatGPT itself, which sparked the generative AI frenzy when it debuted in November 2022, was withheld from the board, according to Toner. “We heard about ChatGPT on Twitter,” she said.

Toner claimed that Altman always had a convenient excuse at hand to downplay the board’s concerns, which is why no action was taken for so long.

“Sam could always come up with some sort of seemingly innocuous explanation for why it wasn’t a big deal, or why it was misinterpreted, or whatever,” she continued. “But the end effect was that, after years of this sort of thing, the four of us who fired him came to the conclusion that we just couldn’t believe the things Sam was telling us, and that’s a completely unworkable place to be in as a board.”

OpenAI did not respond to Fortune’s request for comment.

Things finally came to a head, Toner said, after she co-published a paper in October last year that cast Anthropic’s approach to AI safety in a better light than OpenAI’s, which angered Altman.

“The problem was that after the paper came out, Sam started lying to the other board members to try to push me off the board. So that was another example that really hurt our ability to trust him,” she continued, adding that this behavior coincided with a period in which the board was “already discussing quite seriously the need to fire him.”

Taken in isolation, these and other disparaging remarks by Toner about Altman could be dismissed as sour grapes from the leader of a failed coup. The pattern of dishonesty she described, however, echoes equally damaging accusations from a former senior AI safety researcher, Jan Leike, as well as from actress Scarlett Johansson.

Attempts at self-regulation are doomed to failure

The Hollywood actress said Altman approached her about using her voice for his latest flagship product: a ChatGPT voice bot that users can converse with, reminiscent of the fictional character Johansson played in the film Her. When she refused, she suspected he may have imitated her voice anyway, violating her wishes. The company disputes her claims but agreed to suspend use of the voice regardless.

Leike, meanwhile, served as co-lead of the team responsible for creating safeguards that would allow humanity to control superintelligent AI. He left this month, saying it had become clear to him that management had no intention of diverting valuable resources to his team as promised, and departed with a scathing rebuke of his former employer. (On Tuesday, he joined Anthropic, the same OpenAI rival Toner had praised in her October paper.)

Once key members of its AI safety team had departed, OpenAI disbanded the team entirely, consolidating control in the hands of Altman and his allies. It remains to be seen whether executives charged with maximizing financial results are best placed to put in place safeguards that could act as a commercial constraint.

Although some staff members had doubts, few other than Leike chose to speak out. Thanks to reporting by Vox earlier this month, it emerged that a key motivating factor behind this silence was an unusual non-disparagement clause that, if violated, would void an employee’s equity in perhaps the hottest startup in the world.

This followed earlier statements by former OpenAI safety researcher Daniel Kokotajlo that he had voluntarily sacrificed his equity so as not to be bound by the exit agreement. Altman later confirmed the claims were valid.

“Although we have never clawed anything back, this should never have been something we had in any documents or communications,” he said earlier this month. “This is my fault and one of the few times I’ve been genuinely embarrassed running OpenAI; I didn’t know this was happening and I should have.”

Toner’s comments follow her op-ed in The Economist, in which she and former OpenAI director Tasha McCauley argued that the evidence shows no AI company can be trusted to regulate itself.

“If any company could have successfully governed itself while safely and ethically developing advanced AI systems, it would have been OpenAI,” they wrote. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives.”
