Internal divisions persist within OpenAI after November coup attempt


OpenAI has struggled to contain internal squabbling over its leadership and safety practices as the divisions that led to last year’s attempted coup against Chief Executive Sam Altman reverberate into the public domain.

Six months after Altman’s abortive exit, a series of high-profile resignations points to lingering divisions within OpenAI between those who want to develop AI quickly and those who would prefer a more cautious approach, according to current and former employees.

Helen Toner, one of the former OpenAI board members who tried to remove Altman in November, spoke publicly for the first time this week, saying he had “repeatedly” misled the board about its safety processes.

“For years, Sam had made it very difficult for the board to do its job by withholding information, misrepresenting what was happening at the company, in some cases outright lying to the board,” she said on the TED AI Show podcast.

The most significant departure in recent weeks has been that of OpenAI co-founder Ilya Sutskever. One person close to his resignation described him as being caught up in Altman’s “conflicting promises” before last year’s leadership shake-up.

In November, OpenAI’s directors – who at the time included Toner and Sutskever – ousted Altman as chief executive in an abrupt decision that shocked investors and staff. He returned a few days later under a new board, without Toner and Sutskever.

“We take our role extremely seriously as a board of directors of a not-for-profit organisation,” Toner told the Financial Times. The decision to fire Altman “took a tremendous amount of time and thought,” she added.

Sutskever said at the time of his departure that he was “confident” that OpenAI would build artificial general intelligence – AI as intelligent as humans – “that is both safe and beneficial” under its current leadership, including Altman.

However, the November affair does not appear to have resolved the underlying tensions within OpenAI that contributed to Altman’s ouster.

Another recent departure, Jan Leike, who led OpenAI’s efforts to steer and control super-powerful AI tools and worked closely with Sutskever, announced his resignation this month. He said his differences with company management had “reached a breaking point” as “safety culture and processes took a back seat to shiny products.” He has now joined OpenAI rival Anthropic.

The unrest at OpenAI — which resurfaced despite the vast majority of employees calling for Altman to be reinstated as CEO in November — comes as the company prepares to launch a new generation of its AI software. It also plans to raise capital to finance its expansion, people familiar with the negotiations said.

Altman’s focus at OpenAI on shipping products rather than publishing research led to its breakthrough chatbot ChatGPT and set off a wave of AI investment in Silicon Valley. After securing more than $13 billion in backing from Microsoft, OpenAI’s revenue is on track to surpass $2 billion this year.

Yet this focus on commercialization has come into conflict with those within the company who would prefer to prioritize safety, fearing that OpenAI is rushing to create “superintelligence” that it cannot properly control.

Gretchen Krueger, an AI policy researcher who also left the company this month, listed several concerns about how OpenAI was handling technology that could have far-reaching ramifications for businesses and the public.

“We (at OpenAI) need to do more to improve fundamental things,” she said in a post on X, “like decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and measures to mitigate impacts on inequality, rights and the environment.”

Altman, in response to Leike’s departure, said his former employee was “right, we still have a long way to go; we are committed to doing so.” This week, OpenAI announced the creation of a new safety and security committee to oversee its AI systems. Altman will serve on the committee alongside other board members.

“(Even) with the best of intentions, without external oversight, this type of self-regulation will eventually become unworkable, especially under the pressure of immense profit incentives,” Toner wrote alongside Tasha McCauley, who was also a member of the OpenAI board until November 2023, in an opinion piece for The Economist published days before OpenAI announced its new safety committee.

In response to Toner’s comments, Bret Taylor, chair of OpenAI’s board, said the board had worked with an external law firm to review the events of last November, concluding that “the prior board’s decision was not based on concerns about product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

“Our focus remains on moving forward and continuing OpenAI’s mission to ensure AGI benefits all of humanity,” he said.

A person close to the company said that since the uproar in November, OpenAI’s biggest backer, Microsoft, had put more pressure on the company to prioritize commercial products. This has amplified tensions with those who would prefer to focus on scientific research.

Many within the company still want to focus on its long-term goal of AGI, but internal divisions and an unclear strategy from OpenAI management have demotivated staff, the person said.

“We are proud to create and launch models that lead the industry in both capabilities and safety,” OpenAI said. “We work hard to maintain this balance and believe it is essential to have robust debate as the technology advances.”

Despite the scrutiny sparked by its recent internal divisions, OpenAI continues to build more advanced systems. It announced this week that it recently began training the successor to GPT-4, the large AI model that powers ChatGPT.

Anna Makanju, OpenAI’s vice president of global affairs, said policymakers had contacted her team following the recent departures to ask whether the company was “serious” about safety.

She said safety was “something that is the responsibility of many teams within OpenAI.”

“It is very likely that (AI) will be even more transformational in the future,” she said. “There will certainly be many disagreements about the right approach to preparing society (and) how to regulate it.”
