OpenAI promised 20% of its computing power to fight the most dangerous type of AI, but never delivered, sources say


In July 2023, OpenAI unveiled a new team tasked with ensuring that future AI systems that could be smarter than all humans combined can be safely controlled. To show how serious the company was about this goal, it publicly promised to dedicate 20% of its then-available computing resources to the effort.

Now, less than a year later, that team, called Superalignment, has been disbanded amid staff resignations and accusations that OpenAI prioritizes product launches over AI safety. According to half a dozen sources familiar with the work of OpenAI’s Superalignment team, OpenAI never fulfilled its commitment to provide the team with 20% of its computing power.

Instead, according to the sources, the team repeatedly had its requests for access to graphics processing units, the specialized computer chips needed to train and run AI applications, rejected by OpenAI management, even though the team’s total compute budget never came close to the promised 20% threshold.

These revelations raise questions about how serious OpenAI ever was about honoring its public commitment, and whether other public commitments the company makes can be trusted. OpenAI did not respond to requests for comment for this story.

The company is currently facing a backlash over its use of a voice for its AI voice-generation features that is strikingly similar to that of actress Scarlett Johansson. In that case, questions have been raised about the credibility of OpenAI’s public statements that the similarity between the AI voice it calls “Sky” and Johansson’s voice is purely coincidental. Johansson says OpenAI co-founder and CEO Sam Altman approached her last September, when the Sky voice debuted, asking for permission to use her voice. She refused. And she says Altman asked for permission to use her voice again last week, just before a closely watched demonstration of its latest GPT-4o model, which used the Sky voice. OpenAI has denied using Johansson’s voice without her permission, saying it paid a professional actress, whose name it says it cannot legally disclose, to create Sky. But Johansson’s claims now cast doubt on this, with some speculating on social media that OpenAI actually cloned Johansson’s voice or perhaps blended another actress’s voice with hers in some way to create Sky.

OpenAI’s Superalignment team was created under the leadership of Ilya Sutskever, OpenAI co-founder and former chief scientist, whose departure from the company was announced last week. Jan Leike, a longtime OpenAI researcher, co-led the team. He announced his resignation on Friday, two days after Sutskever left. The company then informed the remaining employees of the team, which numbered approximately 25 people, that the team was being disbanded and that they would be reassigned elsewhere in the company.

It was a swift fall for a team whose work OpenAI had positioned, less than a year earlier, as vital to the company and essential to the future of civilization. Superintelligence is the idea of a hypothetical future AI system that would be smarter than all humans combined. It is a technology that would go beyond even the company’s stated goal of creating artificial general intelligence, or AGI, a single AI system as intelligent as any person.

Superintelligence, the company said when announcing the team, could pose an existential risk to humanity by seeking to kill or enslave people. “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI said in its announcement. The Superalignment team was supposed to research those solutions.

It was such a large undertaking that the company said in its announcement that it would dedicate “20% of the compute we’ve secured to date over the next four years” to the effort.

But a half-dozen sources familiar with the Superalignment team’s work said the group never received that compute. Instead, it received far less, as part of the company’s regular compute allocation budget, which is re-evaluated every quarter.

A source close to the Superalignment team’s work said there were never clear metrics for exactly how the 20% amount should be calculated, leaving it open to wide interpretation. For example, the source said, the team was never told whether the promise meant “20% per year for four years,” or “5% per year for four years,” or some variable amount that could end up being “1% or 2% for the first three years, then the bulk of the commitment in the fourth year.” In any case, all of the sources Fortune interviewed for this story confirmed that the Superalignment team never received anything close to 20% of the compute OpenAI had secured as of July 2023.

OpenAI researchers can also request what’s known as “flex” compute (access to additional GPU capacity beyond what has been budgeted) to pursue new projects between the quarterly budget meetings. But the Superalignment team’s requests for flex compute were routinely rejected by senior leadership, these sources said.

Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that those requests were being denied, the sources said, but others at the company, including chief technology officer Mira Murati, were involved in decision-making. Neither McGrew nor Murati responded to requests for comment for this story.

Although the team did publish some research (in December 2023 it released a paper detailing its experiments in getting a less powerful AI model to control a more powerful one), the lack of compute thwarted the team’s more ambitious ideas, the sources said.

Following his resignation, Leike published a series of posts on X (formerly Twitter) on Friday in which he criticized his former employer, saying that “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Five sources familiar with the Superalignment team’s work confirmed Leike’s account, saying the compute access problems worsened following the pre-Thanksgiving showdown between Altman and the board of OpenAI’s nonprofit foundation.

Sutskever, who was on the board, had voted to fire Altman and was the person the board chose to break the news to him. After OpenAI staff rebelled in response to the decision, Sutskever posted on X that he “deeply regretted” his participation in Altman’s firing. Ultimately, Altman was rehired, and Sutskever and several other board members involved in his firing stepped down from the board. Sutskever never returned to work at OpenAI after Altman was rehired, but he only officially left the company last week.

One source disputed the way other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying the problems predated Sutskever’s participation in the failed coup and had affected the group from the start.

Although there were reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said that was not the case, that Sutskever no longer had access to the team’s work, and that he played no role in directing the team after Thanksgiving.

With Sutskever’s departure, the Superalignment team lost the one person on the team with enough political capital within the organization to successfully advocate for its compute allocation, the sources said.

Besides Leike and Sutskever, OpenAI has lost at least six other AI safety researchers from various teams in recent months. One of those researchers, Daniel Kokotajlo, told the news site Vox that he had “gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”

In response to Leike’s comments, Altman and co-founder Greg Brockman, OpenAI’s president, posted on X that they were “grateful to [Leike] for everything he has done for OpenAI.” The two went on to write: “We need to keep elevating our safety work to match the stakes of each new model.”

They then outlined their views on the company’s future approach to AI safety, which would place much more emphasis on testing models currently in development than on trying to develop theoretical approaches to making future, more powerful models safe. “We need a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony between safety and capabilities,” Brockman and Altman wrote, adding that “empirical understanding can help illuminate the path forward.”

The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, because they feared losing vested equity in the company, or both. Employees who left OpenAI were required to sign separation agreements that include a strict non-disparagement clause stating that the company can claw back their vested equity if they criticize the company publicly, or even if they acknowledge that the clause exists. And employees were told that anyone who refused to sign the separation agreement would forfeit their equity as well.

After Vox reported on these separation terms, Altman posted on X that he had been unaware of the provision and was “genuinely embarrassed” by it. He said OpenAI had never attempted to enforce the clause or claw back anyone’s vested equity. He said the company was in the process of updating its exit paperwork to “resolve” the issue, and that any former employees concerned about the provisions in the paperwork they had signed could contact him directly and the provisions would be changed.


