Weeks after disbanding the OpenAI team focused on artificial intelligence safety, CEO Sam Altman will help lead a new body with similar responsibilities.
The company announced the formation of a safety and security committee in a blog post on Monday, saying the committee will be responsible for making recommendations on critical safety and security decisions for all OpenAI projects.
The announcement follows the exit of several key safety researchers earlier this month, including co-founders Ilya Sutskever and Jan Leike. Leike was particularly critical of OpenAI on his departure, accusing the company of neglecting “safety culture and processes” in favor of “shiny products.” He said he chose to leave because he had “been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”
Altman’s oversight of the safety committee may come under scrutiny in light of the criticism. He will be joined by directors Bret Taylor (chairman of the board), Adam D’Angelo and Nicole Seligman.
“The Safety and Security Committee’s first priority will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the company wrote. “At the end of the 90 days, the Safety and Security Committee will share their recommendations with the full board. Following a comprehensive review by the board, OpenAI will publicly share an update on the adopted recommendations in a manner consistent with safety and security.”
The blog post also revealed that OpenAI has begun training its “next frontier model,” a successor to the model currently powering ChatGPT, saying it expects the resulting system to “push our capabilities to a new level on the road to AGI.”
News of the new safety committee comes less than 10 days after OpenAI disbanded its “Superalignment” team and integrated the remaining members into the company’s broader research efforts. Leike and Sutskever were key members of the Superalignment team.