OpenAI CEO Sam Altman has decided to step down from the internal committee the company created to oversee critical safety decisions related to its projects and operations. The committee, established in May 2024, was designed to ensure that OpenAI’s developments are carried out in a safe and responsible manner.
In a blog post, OpenAI announced that the group, known as the Safety and Security Committee, will become an independent oversight body. It will be chaired by Carnegie Mellon professor Zico Kolter and will include Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony Executive Vice President Nicole Seligman. All of them are already members of OpenAI’s board of directors.
Changes in Safety Oversight
OpenAI said that, following Altman’s departure, the committee reviewed the safety of its latest model, o1. The committee will continue to receive regular reports from OpenAI’s safety team and will have the authority to delay releases if safety concerns are not adequately addressed.
According to the company’s statement, the committee will continue to oversee the technical aspects of current and future models, as well as manage post-launch monitoring. Additionally, OpenAI indicated that it is working to integrate a more robust safety framework into its model launch processes, with clearly defined success criteria.
Altman’s exit from the safety committee comes after five U.S. senators questioned OpenAI’s policies in a letter addressed to him. Furthermore, nearly half of the staff focusing on long-term AI risks have left the company. Some former researchers have accused Altman of opposing real AI regulation in favor of advancing OpenAI’s commercial interests.
Growing Commercial Interests
Despite the creation of this independent committee, some critics argue that the group is unlikely to make decisions that significantly affect OpenAI’s commercial roadmap. The company, which is reportedly seeking to raise over $6.5 billion in a new funding round, might even abandon its hybrid nonprofit structure, raising concerns about whether its objectives remain aligned with its original mission of benefiting humanity through artificial intelligence.