Nations including the UK, the US and China agreed to establish a shared understanding of the risks posed by frontier AI, pledging to ensure the technology is developed and deployed safely.

Announced at the UK government's two-day AI Safety Summit running this week, the agreement, dubbed the Bletchley Declaration on AI, was signed by 28 countries, including Brazil, India, Nigeria and Saudi Arabia, along with the European Union.

The UK government stated the declaration fulfils key objectives of the summit by establishing shared agreement and responsibility on the risks, opportunities and a forward process for international collaboration on AI safety and research, particularly through greater scientific collaboration.

Countries involved in the pact agreed that substantial risks may arise from potential intentional misuse, and highlighted concerns around cybersecurity, biotechnology, disinformation, bias and privacy.

The declaration cites the “potential for serious, even catastrophic harm, either deliberate or unintentional, stemming from the most significant capabilities” of AI models.

To that end, the countries involved have agreed to encourage transparency and accountability among frontier AI developers, covering the measuring, monitoring and mitigation of harmful capabilities.

UK Prime Minister Rishi Sunak described the accord as a landmark which “sees the world’s greatest powers agree on the urgency behind understanding the risks of AI”.