ChatGPT maker OpenAI has created a safety and security committee made up of its board members, with plans to evaluate its existing safeguards and practices over the next 90 days, as it begins training a new advanced model.

In a brief statement, the company explained that the committee will be led by board members, including CEO Sam Altman. OpenAI’s technical and policy experts will also join the advisory body, and it will continue working with external security specialists including Rob Joyce, a US official who served as a security adviser under the Donald Trump administration.

OpenAI’s head of alignment, Jan Leike, who led the company’s safety research, recently resigned.

The committee will spend the next three months reviewing OpenAI’s current safety and security practices and will then “share their recommendations with the full board”.

The committee’s creation comes as OpenAI has started training its “next frontier model”, a system it expects will bring the company closer to artificial general intelligence (AGI).

Earlier this month, OpenAI unveiled a new flagship model, GPT-4o, which it claims is faster than GPT-4 Turbo.

The move also comes on the heels of a recent controversy over the company’s alleged replication of film star Scarlett Johansson’s voice for its Sky AI voice assistant, which has since been pulled.