Microsoft announced it is restricting access to certain facial recognition tools and scrapping the sale of services that guess a person’s emotions from images, as part of an effort to reduce the risks AI systems pose to society.

In a blog post, chief responsible AI officer Natasha Crampton unveiled the company’s Responsible AI Standard, a framework designed to guide how Microsoft builds AI systems.

The framework details how the company thinks AI systems should be built and operated, putting values including fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability at the forefront.

As part of its push, the company has decided it will retire “emotion recognition” capabilities, which infer “emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup”, she said.

Misidentified
Crampton explained that facial recognition had become a growing civil rights and privacy concern, and that studies had shown the technology misidentified female subjects and people with darker skin tones at higher rates, with serious implications when the technology is used for surveillance.

Existing customers will have one year before they lose access to the tool.

The company is also adding limits to its Azure Face recognition service: users must now apply to use the tool and tell the company exactly how and where they plan to use it.

Microsoft is also adding restrictions to its Custom Neural Voice feature, which lets customers create synthetic voices based on recordings of real speakers.

“We recognise that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve,” said Crampton.