A European Union (EU) watchdog has urged the bloc to clarify its position on the use of AI across a range of fields, warning of pitfalls and threats to fundamental rights and data protection rules.

In a report released ahead of an EU review planned for 2021, the European Union Agency for Fundamental Rights (FRA) highlighted the risks of using AI in predictive policing, medical diagnoses, social services and targeted advertising.

The report is based on interviews with more than 100 public and private organisations using AI.

Based on its findings, FRA called on the bloc and EU member states to take a number of steps to ensure the technology benefits the continent without undermining people’s rights, including: ensuring AI respects all fundamental rights; guaranteeing that people can challenge decisions taken by the technology; assessing the technology before and during its use to reduce negative impacts; providing more guidance on data protection rules; assessing potential discrimination; and creating an effective oversight system.

Human error
FRA director Michael O’Flaherty warned that AI is not “infallible”: it is made by people, and people can make mistakes.

“This is why people need to be aware when AI is used, how it works and how to challenge automated decisions.”

He added that the EU needs to clarify how existing rules apply to AI, and that organisations must assess how their technologies can interfere with people’s rights.

“We have an opportunity to shape AI that not only respects our human and fundamental rights, but that also protects and promotes them,” O’Flaherty said.