Brad Smith, president and vice chair at Microsoft, waded into the ethical AI debate, presenting a blueprint for governing the technology which argues that humans should remain in control of AI systems and that safeguards should be deployed in cases where the technology is used for critical infrastructure.
Smith wrote in a blog post that AI, in many ways, offers more potential for the good of humanity than any invention that has preceded it, but added there is a need to “think early on and in a clear-eyed way about the problems that could lie ahead”, pointing to how social media had become both a weapon and a tool.
“As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits.”
Microsoft is establishing itself as one of the pacesetters in the generative AI arms race, after it invested a major sum in developer OpenAI to integrate the ChatGPT platform into a number of its products.
Smith said the company is committed to developing and deploying AI in a safe and responsible way, but added that the guardrails needed require a “broadly-shared sense of responsibility and should not be left to technology companies alone”.
Smith presented a five-point plan, the first element of which calls for companies to implement and build on new government-led AI safety frameworks, pointing to recent work completed in the US.
Secondly, there should be “safety brakes” for AI systems that control critical infrastructure — specifically where the technology could be used to manage electrical grids, water systems and city traffic flows — allowing for human intervention if required.
The remaining points call for developing a broader legal and regulatory framework based on the technology architecture of AI; promoting transparency and ensuring academic and public access to AI; and pursuing new public-private partnerships.
“We must always ensure that AI remains under human control. This must be a first-order priority for technology companies and governments alike,” he added.