The administration of US President Joe Biden outlined a plan to require developers of the most powerful AI systems to report their safety test results to the government, one of the outcomes of an executive order issued in 2023.

As part of the order, the Defence Production Act is being invoked to require AI developers to share information about their systems with the Department of Commerce and to report the large computing clusters capable of training them.

The executive order also tasked the National Institute of Standards and Technology with creating standards for the extensive testing of AI systems to ensure they are safe before they are publicly released.

The Department of Commerce has proposed a draft rule requiring US cloud companies that provide servers to overseas AI developers to tell the government when those developers are training “the most powerful models, which could be used for malign activity”.

A total of nine government agencies, including the departments of defence, transportation, treasury, and health and human services, have submitted their AI risk assessments to the Department of Homeland Security.

Those assessments will be the basis for continued government action to ensure the US “is ahead of the curve in integrating AI safely into vital aspects of society, such as the electric grid”.

The government also launched a programme to accelerate the hiring of AI professionals, including a large-scale hiring action for data scientists.

A committee was scheduled to meet yesterday (29 January) to discuss progress in implementing the executive order, which aims to provide guidelines covering some of the biggest perceived threats AI poses to safety and security.