From Brussels and Davos respectively, the CEOs of Google and Microsoft recently voiced support for regulating AI. Over recent months we have seen an acceleration of AI regulation efforts coming out of the European Union (EU), with the European Commission (EC) vowing to deliver a legislative framework for AI within its five-year term to 2024.

Whether this will mean tweaks to existing frameworks, principally GDPR (the EU data regulation that set the global standard for data privacy), or a more comprehensive set of additional measures remains unknown. However, leaked documents give an indication of where the EC is heading with AI regulation, highlighting three trends and their associated implications.

  1. AI applications versus risk levels
    A prospective AI regulatory framework might require companies to self-assess the potential risk attributable to their own AI-based services. Such a categorisation would sit along a continuum based on the sector and the application. For example, healthcare would be a high-risk sector when combined with AI, while autonomous cars would be a high-risk application, because an AI failure could cause fatalities.
    In this scenario, companies would have to put new procedures in place to understand and justify the levels of risk their AI-based services involve; a minimal sketch of such a tiered self-assessment follows this list. Assessments would likely include the potential impact on consumers and would seek to quantify the risk from algorithmic bias and from data sourced through third-party developers that contribute to a given company’s services. Regulators would have to craft legislation carefully so that it does not impose undue operational burdens or re-engineering costs, particularly when GDPR is already in force. Against this backdrop, and given that AI is a fast-moving field, it may be more feasible and instructive to adopt a phased approach, prioritising high-risk areas in the first tranche.
  2. Tech innovations, privacy risks and empowering individuals
    Another option the Commission is reportedly mulling involves giving people an enhanced personal data portability right. In GSMA Intelligence’s recent Consumer Insights survey, we found a relatively large proportion of respondents (22 per cent, the second most popular choice among options that also included tech companies, operators, regulators and state authorities) believed they themselves should be responsible for their own data safety. In this spirit, regulation should encourage AI service vendors to create tools which help users manage their privacy risks. This could include some or all of the following (each is sketched briefly after this list):
  • Explainable AI, an emerging branch of AI aiming to explain the reasoning behind every decision a system makes, in a manner easily understandable by the humans affected by or involved in it.
  • Digital sovereign identity, based on blockchain, enabling individual users to fully own and control their digital history, along with any data they have shared.
  • Zero-knowledge proofs, cryptographic methods by which a user can prove to another party, say an AI service vendor, that they know something to be true without conveying any additional information.
  3. Data and data hubs – the role of public sector and industrial data
    The EU announced it will spend €1 billion on creating “common European data spaces”, expanding its existing public-sector data hub to also include industrial data sets. The EC’s strategy is to help European companies capitalise on the data they generate and eventually catch up in the AI race against the US and China. These data spaces could be a prime instrument for innovation, benefiting smaller companies and individuals through more open, cheaper, better-quality and more diverse data access. All of this is particularly relevant for accelerating AI innovation. However, interoperability across existing data hubs is far from a done deal, and scaling them to an EU-wide level would raise intense debate about their governance.
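To make the first trend concrete, below is a minimal Python sketch of the kind of tiered self-assessment a framework might require. Everything here is an assumption for illustration: the leaked documents do not specify tiers, lists or combination rules, and the sector and application names are invented.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"

# Assumed examples only; real lists would come from the regulation itself.
HIGH_RISK_SECTORS = {"healthcare", "transport"}
HIGH_RISK_APPLICATIONS = {"autonomous_driving", "medical_triage"}

@dataclass
class AIService:
    name: str
    sector: str
    application: str

def self_assess(service: AIService) -> RiskTier:
    """Flag a service as high risk if its sector or application appears
    on a high-risk list (a hypothetical rule, not the EC's)."""
    if (service.sector in HIGH_RISK_SECTORS
            or service.application in HIGH_RISK_APPLICATIONS):
        return RiskTier.HIGH
    return RiskTier.LOW

print(self_assess(AIService("DiagnosisAssist", "healthcare", "medical_triage")))
# RiskTier.HIGH -- would trigger the extra assessment procedures described above
```

A phased approach of the kind suggested above could then simply begin with services landing in the high tier.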
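On explainable AI, one simple flavour of the idea is to decompose a model’s score into per-feature contributions, so the person affected can see what drove a decision. The toy linear model, feature names and weights below are invented for illustration; real systems use richer techniques, but the principle is the same.

```python
# Invented weights for a toy linear credit-scoring model.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.3}

def score_with_explanation(applicant: dict) -> tuple[float, dict]:
    # Each feature's contribution is weight * value, so the score is
    # exactly the sum of the explanations -- nothing is hidden.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

score, why = score_with_explanation({"income": 5.0, "debt": 2.0, "years_employed": 3.0})
print(f"score = {score:.1f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")  # biggest drivers first
```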
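On digital sovereign identity, the core mechanic is that the user, not a platform, holds the key pair and signs the claims they choose to share, and anyone can verify them without a central registry. The sketch below (using the widely available `cryptography` package) omits the blockchain anchoring layer, and the claim content is invented.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

user_key = Ed25519PrivateKey.generate()   # held only by the user
claim = b'{"over_18": true}'              # a claim the user chooses to share
signature = user_key.sign(claim)

# A verifier (e.g. an AI service vendor) checks the claim against the
# user's public key; the user shares nothing beyond the claim itself.
public_key = user_key.public_key()
try:
    public_key.verify(signature, claim)
    print("claim verified")
except InvalidSignature:
    print("claim rejected")
```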
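And on zero-knowledge proofs, the toy Schnorr identification protocol below is the classic example of proving knowledge of a secret (here, a discrete logarithm) without revealing it. The parameters are deliberately tiny and insecure so the arithmetic is easy to follow; a real deployment would use standardised parameters and a vetted library.

```python
import secrets

p, q, g = 23, 11, 2        # g generates a subgroup of prime order q mod p

x = 7                      # prover's secret ("something they know")
y = pow(g, x, p)           # public value; reveals nothing usable about x

# Prover commits to a fresh random nonce...
r = secrets.randbelow(q)
t = pow(g, r, p)

# ...the verifier (say, an AI service vendor) issues a random challenge...
c = secrets.randbelow(q)

# ...and the prover responds without ever revealing x itself.
s = (r + c * x) % q

# The verifier learns only that the prover knows x, nothing more.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted: prover knows x without disclosing it")
```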

To sum up, crafting a mix of light and smart regulation won’t be an easy task, as it will have to balance a number of trade-offs, such as stricter rules versus space for companies to innovate. An emphasis on technologies which empower individuals and companies to manage AI-related risks is probably the most prudent course for policymakers and regulators, but it is also up to the tech companies to deliver such tools. Finally, the EU clearly perceives AI as a core element of its digital sovereignty strategy. However, the EC’s intentions regarding common European data spaces remain unclear for now, risking delays to progress. What exactly does ‘European’ data mean, and how could only European companies profit from it? Politico reported that Internal Market Commissioner Thierry Breton raised this point at a tech conference in December 2019, when he outlined a goal for European companies to have access to domestic data to “create value” in the bloc.

What difference would that make to current requirements for data centres to reside in the countries they operate in? And, how is that conducive to AI innovation?

The EU is setting the bar very high. It needs to give fast and bold answers before criticism starts to gather.

– Christina Patsioura – senior analyst, Emerging Technologies, GSMA Intelligence

The editorial views expressed in this article are solely those of the author and do not necessarily reflect the views of the GSMA, its Members or Associate Members.