OpenAI is forming a new security team, led by CEO Sam Altman and board members Adam D’Angelo and Nicole Seligman. The committee will make recommendations on “critical security decisions for OpenAI’s projects and operations,” addressing concerns expressed by several leading artificial intelligence researchers who left the company this month.
For its first task, the new team will “evaluate and further develop OpenAI’s processes and safeguards.” It will then present its findings to the OpenAI board of directors, on which all three leaders of the security team have a seat. The board will then decide how to implement the security team’s recommendations.
In addition to the new security board, OpenAI also announced that it is testing a new AI model, but did not confirm whether it is GPT-5.
Earlier this month, OpenAI unveiled a new voice for ChatGPT called Sky, which sounds eerily similar to Scarlett Johansson (whom Altman even alluded to on X). However, Johansson later confirmed that she had turned down Altman’s offer to provide her voice for ChatGPT. Altman subsequently said OpenAI “never intended” to make Sky sound like Johansson, whom he contacted only after the company had already cast a voice actor. The whole incident has concerned AI fans and critics alike.
Other members of OpenAI’s new security team include Aleksander Madry, head of preparedness; Lilian Weng, head of safety systems; John Schulman, head of alignment science; Matt Knight, head of security; and Jakub Pachocki, chief scientist. But with two board members — as well as Altman himself — leading the new safety committee, OpenAI doesn’t appear to have actually addressed the concerns of former employees.