Mobile World Live spoke to Carla Echevarria, design lead for Google Assistant, about how bias can creep into AI systems, why that matters and how to combat the problem.

Why is it so important to be conscious of bias in a world which is increasingly driven by algorithms? How can bias reinforce existing social inequality?
Algorithms operate invisibly in nearly every aspect of our lives. They determine which books or movies are recommended to us and which friends we should connect with on social media, but they also affect far more critical things, such as the ability to get a job, a mortgage or a credit card.

When bias enters those algorithms, the scales tip in favour of one group of people to the detriment of others. Because that bias tends to reflect existing inequalities, it also tends to perpetuate them.

Can you share an example or two of how bias can creep in and show up in an algorithm?
Amazon recently discontinued use of a candidate recruiting system after it was discovered to be biased against female candidates. The algorithm had been trained on a decade of historical data from applicants to Amazon, who had been predominantly white males.

An algorithm widely used by judges in the US to determine the likelihood of criminal recidivism was found to be biased against African Americans. This resulted in longer detention times for that group.

Do we know how widespread of a problem this is?
Because every aspect of our lives is now shaped to some degree by algorithms, from our shopping habits to our job opportunities to our potential mates, the problem is as widespread as it can possibly be.

What can be done to combat bias in AI systems and ensure they behave in an ethical way? How do you detect and mitigate such things? Are there any steps the technology industry more broadly can take to help address the problem?
As a user experience designer, my approach is to enable the end user to train the algorithms directly. If we leave the training of machine learning systems to a small group of engineers and technologists, then their biases will inevitably be reflected in those systems. But if we enable all users of these systems to have input, to reject and correct the output of a system when it does not reflect their own experience, we will build systems that are reflective of all users, not just a few.
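To make that idea concrete, the sketch below shows one simple way such user feedback could flow back into a model: corrections from end users are logged and then given extra weight when the model is retrained. This is purely an illustrative assumption about how the approach Echevarria describes might be implemented, not a description of any Google system; the data, function names and weighting scheme are all hypothetical.

```python
# Hypothetical sketch: folding user corrections back into a model.
# Assumes scikit-learn and numpy are available; all names and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Original training data (e.g. historical decisions that may carry bias).
X_train = np.array([[0.2, 1.0], [0.4, 0.8], [0.9, 0.1], [0.7, 0.3]])
y_train = np.array([0, 0, 1, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

# Users reject or correct individual outputs; each correction is stored
# as (features, label_the_user_says_is_right).
user_corrections = [
    (np.array([0.3, 0.9]), 1),  # user says this case was misclassified
]

def retrain_with_feedback(model, X, y, corrections, feedback_weight=5.0):
    """Retrain, giving user-supplied corrections extra weight so they
    can override patterns learned from the original (possibly biased) data."""
    if not corrections:
        return model
    X_fb = np.array([c[0] for c in corrections])
    y_fb = np.array([c[1] for c in corrections])
    X_all = np.vstack([X, X_fb])
    y_all = np.concatenate([y, y_fb])
    weights = np.concatenate([np.ones(len(y)), np.full(len(y_fb), feedback_weight)])
    model.fit(X_all, y_all, sample_weight=weights)
    return model

model = retrain_with_feedback(model, X_train, y_train, user_corrections)
```

In practice the weighting, auditing and aggregation of feedback would be far more involved, but the core loop, letting users flag and correct outputs that do not match their experience, is what the approach described above relies on.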

This article was originally due to appear in the MWC20 Barcelona Show Daily newspapers as part of our conference speaker coverage. Due to the cancellation of the event, we are instead publishing it online.