Sam Altman, CEO of Microsoft-backed OpenAI, called for the US to regulate deployment of advanced large language models, warning of the dangers of releasing generative AI without solid policy frameworks in place.

In a hearing before the US Senate on the governance of generative AI, Altman argued regulation will be critical to mitigating the risks of increasingly powerful models, as fears grow about threats to society.

Threats discussed during his testimony centred on the spread of misinformation and violations of data privacy laws associated with how OpenAI trains its models.

Senator Richard Blumenthal told the hearing the prospect of inadequately trained AI is “more than a little scary” and argued new technologies must be “held accountable”.

The Senator pointed to the ability of OpenAI’s ChatGPT to mimic and simulate authentic human interactions, after opening his remarks with a speech generated by the model and delivered in a cloned version of his own voice.

Altman suggested the formation of a government agency to regulate AI training and deployment would be helpful for the technology’s development, explaining the need for “a combination of licensing and testing requirements” for developers.

He also proposed revoking the licences of developers that launch AI tools exceeding certain capability “thresholds”, for example models capable of self-replication or generating harmful content.

Altman also argued for rules governing how machine learning tools gather data from the internet to generate responses.

“Users should be able to opt-out from having their data used by companies like ours, or other social media companies,” he added.