Singapore’s Infocomm Media Development Authority (IMDA) and the AI Verify Foundation jointly developed a draft governance framework for generative AI, expanding on an existing AI governance model to address emerging issues.

IMDA stated that while AI holds significant transformative potential, it also carries risks.

It explained “there is growing global consensus that consistent principles are needed to create a trusted environment”.

The regulator noted the impact of AI extends beyond individual countries, so the proposed framework aims to spur international conversations to “enable trusted development globally”.

IMDA argued building international consensus is key, pointing to work on the mapping and interoperability of national AI governance frameworks between the IMDA and the US National Institute of Standards and Technology (NIST).

The framework covers nine dimensions, including accountability, incident reporting and security, with the goal of fostering a comprehensive and trusted AI ecosystem.

Its core elements are “based on the principles that decisions made by AI should be explainable, transparent and fair”.

“Beyond principles, it offers practical suggestions that model developers and policymakers can apply,” IMDA added.

The AI Verify Foundation, an IMDA subsidiary set up in 2023, counts Google and Microsoft among its members and works to support responsible governance and standards for AI.