Overview of the European Commission’s proposed AI regulation
26/04/21 – The European Commission aims to turn the EU into ‘the global hub for trustworthy Artificial Intelligence (AI)’. With that objective in mind, on 21st April 2021 the Commission published its Proposal for a Regulation on a European approach for Artificial Intelligence.
Very interesting, I’m sure. But presumably not relevant to those of us who are no longer in the EU? Or to those of us who aren’t building robots to conquer the human race, haha?
On the EU point, the regulation applies to both EU and non-EU providers who market or deploy AI systems in the EU, to all users of AI systems in the EU, and to providers and users of AI systems located outside the EU where the outputs of those systems are used in the EU. In other words, the regulation potentially extends far beyond the EU’s borders.
And for the Asimov fans out there, the regulation’s definition of ‘AI system’ is perhaps a little disappointing: ‘software that is developed with one or more of the techniques and approaches listed in Annex I and [which] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing environments they interact with’.
Annex I in full:
‘(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.’
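That last limb casts the net very wide. By way of illustration, even a completely mundane supervised-learning model appears to meet the definition: it is software developed with a ‘machine learning approach’ (Annex I(a)) that generates ‘predictions’ for a ‘human-defined objective’. The sketch below is purely illustrative – the spam-filter scenario, the data and the library choice (Python with scikit-learn) are all assumptions of the example, not anything taken from the regulation:

```python
# A deliberately mundane example: a few lines of supervised machine
# learning (Annex I(a)) that generate "predictions" for a
# "human-defined objective" -- and so appear to meet the proposal's
# definition of an 'AI system'. Scenario and data are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: messages labelled spam (1) or not (0).
messages = [
    "win a free prize now",
    "meeting moved to 3pm",
    "claim your free reward today",
    "quarterly report attached",
]
labels = [1, 0, 1, 0]

# Fit an ordinary text classifier -- a 'machine learning approach'.
vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

# Its output is a prediction 'influencing environments [it] interacts
# with' (e.g. routing a message to a spam folder).
print(model.predict(vectorizer.transform(["claim your free prize"])))
```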
Ah I see what you mean. So what do I need to know?
Well, the proposed regulation runs to 107 pages (not including the Annexes), so there’s quite a bit to digest. But by way of an overview:
- Timing. The proposal will now be reviewed and debated by the European Parliament and then by the Council of the European Union. Given the subject matter, it is also likely to generate extensive comments from AI providers and other interested parties. Once adopted, the regulation is subject to a 24-month grace period before it applies fully (Article 85(2)). Realistically, we’re looking at go-live in 2023, and very possibly 2024.
- Risk-based approach. The regulation takes a risk-based approach, with AI systems falling into one of three categories: prohibited AI practices, high-risk systems, and lower-risk systems.
- Prohibited AI practices. The regulation prohibits four specific practices involving AI (Article 5):
- Marketing or deploying AI systems that ‘deploy subliminal techniques beyond a person’s consciousness’ in order to distort their behaviour in a way that causes or may cause harm.
- Marketing or deploying AI systems that exploit vulnerabilities due to age or physical or mental disability in order to distort someone’s behaviour in a manner that causes or may cause harm.
- Marketing or deploying AI systems for use by public authorities to evaluate or classify the trustworthiness of individuals by assigning them a social score (‘social scoring’).
- Use of ‘real-time’ remote biometric identification systems (e.g. facial recognition systems) in publicly accessible spaces for law enforcement purposes, subject to broad exemptions for certain criminal justice-related purposes. Biometric identification is likely to be one of the more controversial aspects of the regulation; the European Data Protection Supervisor (EDPS) has already issued a press release criticising the Commission for not adopting a stricter approach.
- High-risk systems. The regulation specifies two categories of high-risk AI systems:
- The first category consists of AI systems used as safety components of products, or AI systems which are themselves products, that are regulated under the ‘New Legislative Framework’ legislation listed in Annex II to the regulation, e.g. toys, medical devices, motor vehicles, gas appliances etc. Checking that these AI safety components, or AI systems, comply with the regulation (‘conformity assessments’) will be incorporated into the existing third-party compliance and enforcement mechanisms for the relevant products.
- The second category consists of stand-alone AI systems that the Commission considers have ‘fundamental rights implications’. These are listed in Annex III to the regulation, and include AI systems used for:
- biometric identification (to the extent not a prohibited AI practice);
- management of road traffic and other critical infrastructure (water, gas, heating and electricity);
- education and vocational training;
- recruitment and hiring of candidates;
- making decisions in connection with the management and termination of workers;
- determining access to essential private and public services and benefits (including credit scoring);
- law enforcement;
- migration, asylum and border control management; and
- the administration of justice and democratic processes.
Stand-alone high-risk systems will be subject to conformity assessments, and their providers must put in place quality and risk management systems and post-market monitoring. Following the conformity assessment, each AI system must then be registered in a European Commission-managed database, to ensure public transparency and assist ongoing supervision.
- Lower-risk systems. AI systems which are neither prohibited nor high-risk are subject to relatively light-touch regulation. There are no conformity assessments for lower-risk systems. And although all providers must inform individual users that they are interacting with an AI system (unless this is ‘obvious from the circumstances and the context of use’), providers of lower-risk AI systems are not obliged to provide information about the system’s algorithm or how it operates, as providers of high-risk systems are.
- Data governance. Providers of high-risk systems are required to adopt rigorous data governance and management practices in relation to training, validation and testing datasets to reduce the risk of potential biases and other inaccuracies (Article 10; a sketch of the kind of dataset check this points towards appears after this list).
- Sandboxes. The regulation encourages EU member states to establish sandboxes (i.e. controlled environments) to enable providers to test innovative technologies on the basis of an agreed testing plan, and to reduce the regulatory burden (including conformity assessment fees) for SMEs and start-ups.
- Penalties. For corporate providers of AI systems there are three levels of fines (a worked example of the ‘whichever is the higher’ mechanic follows this list):
- Non-compliance with Article 5 (prohibited AI practices, see para 3 above) or Article 10 (data governance, see para 6 above) is subject to a fine of up to €30,000,000 or 6% of total annual worldwide turnover, whichever is the higher.
- For non-compliance with any other provision of the regulation, up to €20,000,000 or 4% of total annual worldwide turnover, whichever is the higher.
- For the supply of incorrect, incomplete or misleading information to regulatory bodies, up to €10,000,000 or 2% of total annual worldwide turnover, whichever is the higher.
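To make the ‘whichever is the higher’ mechanic concrete, here is a minimal sketch. The tier amounts are taken from the fine levels above; the turnover figure is an illustrative assumption:

```python
# The three fine tiers from the proposal, with the 'whichever is the
# higher' comparison made explicit. Tier amounts come from the text
# above; the turnover figure is an illustrative assumption.
TIERS = {
    "prohibited_practice_or_data_governance": (30_000_000, 0.06),  # Art. 5 / Art. 10
    "other_non_compliance": (20_000_000, 0.04),
    "incorrect_information_to_regulators": (10_000_000, 0.02),
}

def maximum_fine(tier: str, worldwide_turnover: float) -> float:
    """Cap on the fine: fixed amount or % of turnover, whichever is higher."""
    fixed_amount, turnover_share = TIERS[tier]
    return max(fixed_amount, turnover_share * worldwide_turnover)

# A provider with EUR 1.5bn worldwide turnover: 6% is EUR 90m, which
# exceeds the EUR 30m fixed amount, so EUR 90m is the applicable cap.
print(maximum_fine("prohibited_practice_or_data_governance", 1_500_000_000))
```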
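And on the data governance point flagged above: the regulation prescribes outcomes rather than tooling, but one first step many providers will recognise is checking that training, validation and testing splits do not under-represent particular groups. The sketch below illustrates that idea only; the attribute name, tolerance threshold and records are all invented for the example, not anything mandated by Article 10:

```python
# A minimal sketch of one kind of check Article 10 points towards:
# comparing how a protected attribute is represented across training,
# validation and test splits. Attribute, tolerance and records are
# all illustrative assumptions.
from collections import Counter

def group_shares(records, attribute):
    """Each group's share of the records, for the given attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def imbalance_warnings(splits, attribute, tolerance=0.10):
    """Flag splits where a group's share drifts from its overall share."""
    overall = group_shares([r for s in splits.values() for r in s], attribute)
    warnings = []
    for split_name, records in splits.items():
        for group, share in group_shares(records, attribute).items():
            if abs(share - overall[group]) > tolerance:
                warnings.append(
                    f"{split_name}: group '{group}' is {share:.0%} "
                    f"of the split vs {overall[group]:.0%} overall"
                )
    return warnings

# Illustrative records only.
splits = {
    "train": [{"sex": "F"}, {"sex": "F"}, {"sex": "M"}, {"sex": "M"}],
    "validation": [{"sex": "F"}, {"sex": "M"}],
    "test": [{"sex": "M"}, {"sex": "M"}, {"sex": "M"}],
}
for warning in imbalance_warnings(splits, "sex"):
    print(warning)
```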
I see what you mean about quite a bit to digest. Anything I need to do now?
Although the regulation is likely to be subject to various changes over the next few months – particularly in the areas of biometric identification and social scoring – the fundamental principles are unlikely to change. So if you’re involved with the development, marketing, sale or distribution of software that constitutes a high-risk AI system, then you may want to start thinking about how the regulation will impact areas such as the accuracy of your datasets, the risk of bias, and algorithmic transparency.