Cybersecurity authorities from the UK and US teamed up to establish a set of global safety guidelines for the development of AI, measures endorsed by more than a dozen international agencies and aimed at curbing threats linked to the technology.

In a press release, the UK’s National Cyber Security Centre and the US’s Cybersecurity and Infrastructure Security Agency stated the protocol, led by the UK, is the first of its kind to be agreed at a global level.

In total, 18 countries have endorsed the guidelines, which were developed in cooperation with 21 international ministries and agencies.

At its core, the AI safety protocol is intended to help developers create systems that are secure by design, assessing the end-to-end security of AI from the development stage through deployment and updates.

The government said this will help developers “ensure that cyber security is both an essential pre-condition of AI system safety and integral to the development process from the outset and throughout”.

The guidelines are split into four key areas covering the design, development, deployment, and operation and maintenance stages. The UK’s cyber arm said the guidelines prioritise transparency and accountability to secure AI infrastructure and, in turn, make the tools safer for customers.

“When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system,” the statement read.