The UK and US penned a Memorandum of Understanding (MoU) establishing a partnership to enhance the safe development and use of AI, with plans that include testing powerful models.

In a joint statement, the governments explained that the work and research will be conducted through newly created institutes charged with evaluating AI safety, as well as developing guidance to mitigate risks associated with the technology.

Under the partnership, the countries aim to conduct “at least one joint testing exercise on a publicly accessible model” and “tap into a collective pool of expertise” from the two institutes.

The UK and US further noted that plans for a common approach to AI safety testing have been laid out, and the two countries reiterated a commitment to establish “similar partnerships with other countries to promote AI safety across the globe”.

The MoU was signed by US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan, with the partnership building on commitments made at the 2023 AI Safety Summit.

“By working together, we are furthering the long-lasting special relationship between the US and UK and laying the groundwork to ensure that we’re keeping AI safe both now and in the future”, Raimondo commented.

The partnership will take effect immediately.

Last September, the cybersecurity arms of the two countries also inked a cooperation deal to establish a set of global safety guidelines for AI development.