AI companies including Anthropic and Google DeepMind agreed to a fresh round of commitments to ensure the safe development of the technology, a move that builds on a declaration announced in 2023.

Signatories of the so-called “Frontier AI Safety Commitments” comprise 16 technology companies from across North America, Europe, Asia and the Middle East, including Meta Platforms, Mistral AI, Microsoft, OpenAI and Samsung Electronics.

The UK government said the move expands on the Bletchley agreement announced at the AI Safety Summit it hosted last year.

As part of the new agreement, the companies are expected to publish a report or “safety framework” detailing how they will assess the risks of advanced AI models being misused “by bad actors”, and how they will measure and mitigate these threats.

The UK authorities added the companies will take feedback from “trusted actors”, including their home governments as appropriate, to identify these AI thresholds or safety standards, which will be published ahead of the AI Action Summit in France next year.

UK Prime Minister Rishi Sunak said the new commitments set “a precedent for global standards on AI safety that will unlock the benefits of this transformative technology”, adding it was a world first to have so many leading companies “agreeing to the same commitments on AI safety”.

Tom Lue, general counsel and head of governance at Google DeepMind, stated the agreement “will help establish important best practices on frontier AI safety among leading developers”.

Earlier this year, the UK and US governments struck a deal to jointly evaluate AI safety.