Days after two top safety executives announced they were leaving OpenAI, the company’s CEO Sam Altman and President Greg Brockman took to X (formerly Twitter) on Saturday to assuage concerns.
On May 14, co-founder and chief scientist Ilya Sutskever announced on X that he was resigning from the company. Sutskever was a member of the board of directors that voted to remove Altman from the company in November 2023.
Later that day, Jan Leike, co-lead of OpenAI’s superalignment team and a colleague of Sutskever’s, also said he was leaving.
“I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike said in a May 17 X post.
In other posts that day, he said safety culture and processes were not getting the priority they need, especially when it comes to artificial general intelligence (AGI), which has the potential to surpass human capabilities at a variety of tasks.
“It’s long overdue that we take the implications of AGI seriously,” Leike wrote. “OpenAI must become a safety-first AGI company,” he added.
In a post co-signed by Altman and Brockman on Saturday, the two said they were aware of both the risks and the benefits of AGI.
“We have repeatedly demonstrated the incredible possibilities of scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing catastrophic risks of artificial intelligence systems,” they wrote.
“There is no proven playbook for how to get to AGI,” Altman and Brockman added. “We believe empirical understanding can help point the way forward. We believe there are significant benefits to be achieved while working to mitigate serious risks; we take our role here very seriously and carefully weigh feedback on our actions.”
In January 2023, Microsoft (NASDAQ: MSFT) announced that it had invested “billions” of dollars in OpenAI.