Sparrho founder and CEO Vivian Chan (pictured) explained that the hyper-connected world is fuelling mistrust, as fears grow over new technologies and policies despite strong evidence that the anxiety is largely unfounded.

Speaking in a keynote session today (28 February), Chan suggested the world is facing a massive class divide in terms of trust inequality, with two groups emerging: the informed public, where trust has been increasing; and the mass population, where trust continues to stagnate.

This stems from fear of the unknown around topics such as immigration, artificial intelligence (AI) and automation, which has driven an increase in news and media engagement.

She said this isn’t just about consumption but amplification, with people sharing news more than ever and discussing topics relevant to them.

Trust used to be top down, but now it’s established peer-to-peer, horizontally.

Losing control
“We find ourselves at a critical juncture in human history. More than half the world has access to the internet, and in theory should have access to the vast and ever-growing body of knowledge. But in reality, we’re just at the tip of the iceberg. Technology has dramatically improved how we communicate with each other and access information. But at what cost? Have we lost accountability and control?” she said.

New technologies including AI are often judged through confirmation bias, she explained, meaning people tend to remember things that support their existing beliefs or assumptions. For example, if someone is already suspicious of AI, they are less likely to remember a positive AI story; but if bad news is reported, they will use it as evidence to back up their beliefs.

Chan said AI is still very much a mystery: “We don’t necessarily know how the algorithms are classified and how they determine outputs. People want to understand how the decisions are made.”

Sparrho, which uses AI to help organisations and individuals stay up to date with new scientific publications and patents, is working to counter this human bias by helping people build a psychological safety net around AI. This can be done by combining a person’s experience of how others think and act on a situation or technology with an assessment of how reliable the information is likely to be, she said.