According to WIRED, OpenAI's entire team focused on the existential risks of AI has either resigned or been absorbed into other research groups within the company.
OpenAI formed the research team in July of last year to prepare for the arrival of supersmart artificial intelligence capable of outwitting and overpowering its creators. The team was co-led by Ilya Sutskever, OpenAI's chief scientist and one of the company's cofounders, and OpenAI said it would receive 20 percent of the company's computing power.

OpenAI's "superalignment team" has now been disbanded, the company confirms. The move follows the departures of several of the team's researchers, Tuesday's announcement that Sutskever was leaving the company, and the resignation of the team's other co-lead. The team's work will be absorbed into OpenAI's other research efforts.

Sutskever's exit made headlines because, although he had helped CEO Sam Altman found OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was reinstated as CEO after five chaotic days, following a mass revolt by OpenAI staff and the brokering of a deal under which Sutskever and two other company directors left the board.

Jan Leike, the other co-leader of the superalignment team and a former DeepMind researcher, announced his resignation on X a few hours after Sutskever’s departure was made public on Tuesday.

Neither Sutskever nor Leike responded to requests for comment. Sutskever did not offer an explanation for his departure, but he endorsed OpenAI's current direction in a post on X. Under its current leadership, he wrote, "the company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial."

Leike explained his decision in a thread posted to X on Friday, citing a disagreement over the company's priorities and the resources allotted to his team.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote. "Over the past few months my team has been sailing against the wind. At times we were struggling for compute, and it was getting harder and harder to get this crucial research done."

The disbanding of OpenAI's superalignment team is the latest sign of an internal shakeout at the company in the wake of November's governance crisis. Two of the team's researchers, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another of the team's members, William Saunders, left OpenAI in February, according to a post he made on an online forum.

Two other OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O'Keefe stepped down from his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, "quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI," according to a post on an online forum made under his name. None of the researchers who appear to have left responded to requests for comment.

OpenAI declined to comment on the departures of Sutskever and the other members of the superalignment team, or on the future of its research into long-term AI risks. Work on the risks associated with more powerful models will now be headed by John Schulman, who leads the team responsible for fine-tuning AI models after training.
