OpenAI, Microsoft, Google, Anthropic and various philanthropic organisations pledged an initial $10 million to a fund designed to help raise AI security standards and identify the controls authorities need to keep the technology in check.

The announcement of the AI Safety Fund formed part of an update on the Frontier Model Forum, an industry association established by the quartet in July to promote the safe and responsible use of future advanced, large-scale machine-learning models.

The cash is set to back academic research into techniques for evaluating AI models for what the four companies described as “potentially dangerous capabilities of frontier systems”.

The companies said: “We believe that increased funding in this area will help raise safety and security standards, and provide insights into the mitigations and controls industry, governments and civil society need to respond to the challenges presented by AI systems.”

Funding will be administered by the Meridian Institute, supported by an advisory committee comprising external experts, representatives from AI companies and grant specialists.

Alongside the research cash, the partners unveiled Chris Meserole as the first executive director of the Frontier Model Forum. He was formerly director of the AI and Emerging Technology Initiative at US think tank the Brookings Institution.

The group aims to appoint an advisory board in the “coming months”.

The Frontier Model Forum was founded as a growing number of authorities and major industry names expressed concerns about the pace of AI development and the technology's ethics.