OpenAI is transforming its internal safety committee into an “independent” oversight board, the company said in a blog post on Monday. 

The ChatGPT maker also said that the Safety and Security Committee will be chaired by Carnegie Mellon professor Zico Kolter.

The committee, unveiled in May, originally included CEO Sam Altman but will now operate with “independent governance”.

OpenAI has come under criticism over its safety culture. In June, a group of current and former OpenAI employees published an open letter warning about “the serious risks posed by these technologies”.

Several high-profile employees, including co-founder Ilya Sutskever, have resigned from the company, citing safety concerns.

A month later, five US senators raised questions about how OpenAI is addressing emerging safety concerns in a letter to Altman.

OpenAI said the “independent” committee will be briefed on new models and, together with the full board, will have the authority to delay a release.

The company also said that the committee had reviewed its new o1 model, code-named Strawberry, and rated it “medium risk”.

“As part of its work, the Safety and Security Committee … will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring,” OpenAI wrote in its blog post. 

“We are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches,” it added. 

Other committee members include Quora CEO Adam D’Angelo, retired US Army General and former NSA chief Paul Nakasone, and former Sony general counsel Nicole Seligman.
