Posts Tagged ‘brexit’


Overview of the European Commission’s proposed AI regulation

26/04/21 – The European Commission aims to turn the EU into ‘the global hub for trustworthy Artificial Intelligence (AI)’.  With that objective in mind, on 21st April 2021 the Commission published its Proposal for a Regulation on a European approach for Artificial Intelligence.

Very interesting, I’m sure.  But presumably not relevant to those of us who are no longer in the EU?  Or to those of us who aren’t building robots to conquer the human race, haha?

On the EU point, the regulation applies to EU and non-EU providers who market or deploy AI systems in the EU, to all users of AI systems in the EU, and to providers and users of AI systems located outside the EU where the outputs of those systems are used in the EU.  In other words, the regulation potentially extends far beyond the EU’s borders.

And for the Asimov fans out there, the regulation’s definition of ‘AI system’ is perhaps a little disappointing: ‘software that is developed with one or more of the techniques and approaches listed in Annex I and [which] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing environments they interact with’.

Annex I in full:

‘(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.’
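
To get a feel for how broad that is, consider the following minimal sketch (the dataset and use case are invented purely for illustration).  A few lines of entirely routine supervised machine learning would appear to fall squarely within Annex I(a): software developed with a machine learning approach that generates predictions influencing the environment it interacts with.

```python
# A deliberately mundane 'AI system' under the proposed definition:
# supervised machine learning (Annex I(a)) generating predictions that
# influence an environment (here, which customers are offered a discount).
# Toy data and use case -- illustrative only.
from sklearn.linear_model import LogisticRegression

X = [[23, 1], [45, 0], [31, 1], [52, 0]]  # features: age, prior purchase
y = [1, 0, 1, 0]                          # label: was a discount offered?

model = LogisticRegression().fit(X, y)

# A 'prediction... influencing [the] environment' it interacts with.
print(model.predict([[29, 1]]))
```

No robots conquering the human race required.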

Ah I see what you mean.  So what do I need to know?

Well, the proposed regulation runs to 107 pages (not including the Annexes), so there’s quite a bit to digest.  But by way of an overview:

  1. Timing. The proposal will now be reviewed and debated by the European Parliament and the Council of the European Union.  Given the subject matter, the regulation is also likely to generate extensive comments from AI providers and other interested parties.  Once adopted, the regulation is subject to a 24-month grace period before it applies fully (Article 85(2)).  Being realistic, we’re looking at go-live in 2023, and very possibly 2024.
  2. Risk-based approach. The regulation takes a risk-based approach, with AI systems falling into one of three categories: prohibited AI practices, high-risk systems, and lower-risk systems.
  3. Prohibited AI practices. The regulation prohibits four specific practices involving AI (Article 5):
    1. Marketing or deploying AI systems that ‘deploy subliminal techniques beyond a person’s consciousness’ in order to distort their behaviour in a way that causes or may cause harm.
    2. Marketing or deploying AI systems that exploit vulnerabilities due to age, physical or mental disability in order to distort someone’s behaviour in a manner that causes or may cause harm.
    3. The marketing or deployment by public authorities of AI systems that evaluate or classify the trustworthiness of people by means of a social score (‘social scoring’).
    4. Use of ‘real-time’ remote biometric identification systems (e.g. facial recognition systems) in publicly accessible spaces for law enforcement purposes, subject to broad exemptions for certain criminal justice-related purposes. Biometric identification is likely to be one of the more controversial aspects of the regulation; the European Data Protection Supervisor (EDPS) has already issued a press release criticising the Commission for not adopting a stricter approach.
  4. High-risk systems. The regulation specifies two categories of high-risk AI systems:
    1. The first category consists of AI systems used as safety components of products, or AI systems which are themselves products, that are regulated under the ‘New Legislative Framework’ legislation listed in Annex II to the regulation, e.g. toys, medical devices, motor vehicles, gas appliances etc. Checking that these AI safety components, or AI systems, comply with the regulation (‘conformity assessments’) will be incorporated into the existing third-party compliance and enforcement mechanisms for the relevant products.
    2. The second category consists of stand-alone AI systems that the Commission considers have ‘fundamental rights implications’. These are listed in Annex III to the regulation, and include AI systems used for biometric identification, the management of critical infrastructure, education and vocational training, employment and workers management, access to essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice.

Stand-alone systems will be subject to conformity assessments, as well as quality and risk management systems and post-market monitoring. Following the conformity assessments, the AI systems must then be registered in a European Commission-managed database, to ensure public transparency and assist ongoing supervision.

  5. Lower-risk systems. AI systems which are not prohibited or high-risk are subject to relatively light-touch regulation.  There are no conformity assessments for lower-risk systems.  And although all providers must inform individual users that they are interacting with an AI system (unless it is ‘obvious from the circumstances and the context of use’), there is no obligation for providers of lower-risk AI systems to provide information about the system’s algorithm or how it operates, as is the case for providers of high-risk systems.
  6. Data governance. Providers of high-risk systems are required to adopt rigorous data governance and management practices in relation to training, validation and testing datasets to reduce the risk of potential biases and other inaccuracies.
  7. Sandboxes. The regulation encourages EU member states to establish sandboxes (i.e. controlled environments) to enable providers to test innovative technologies on the basis of an agreed testing plan, and to reduce the regulatory burden (including conformity assessment fees) for SMEs and start-ups.
  8. Penalties. For corporate providers of AI systems there are three levels of fines (see the worked example after this list):
    1. Non-compliance with Article 5 (prohibited AI practices, see para 3 above) or Article 10 (data governance, see para 6 above) is subject to a fine of up to €30,000,000 or 6% of total annual worldwide turnover, whichever is the higher.
    2. For non-compliance with any other provision of the regulation, up to €20,000,000 or 4% of total annual worldwide turnover, whichever is the higher.
    3. For the supply of incorrect, incomplete or misleading information to regulatory bodies, up to €10,000,000 or 2% of total annual worldwide turnover, whichever is the higher.
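
To make the ‘whichever is the higher’ mechanics concrete, here is a minimal sketch (the tier names and turnover figures are invented for illustration) of how the three levels play out in practice:

```python
# Illustrative only: applies the 'greater of the fixed cap or the
# percentage of total annual worldwide turnover' rule described above.
# Tier names and turnover figures are invented for this sketch.

TIERS = {
    "prohibited_practices_or_data_governance": (30_000_000, 0.06),
    "other_non_compliance": (20_000_000, 0.04),
    "misleading_information_to_regulators": (10_000_000, 0.02),
}

def max_fine(annual_worldwide_turnover_eur: float, tier: str) -> float:
    """Return the higher of the fixed cap and the turnover percentage."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * annual_worldwide_turnover_eur)

# EUR 1bn turnover: 6% = EUR 60m, which exceeds the EUR 30m fixed cap.
print(max_fine(1_000_000_000, "prohibited_practices_or_data_governance"))  # 60000000.0
# EUR 100m turnover: 6% = EUR 6m, so the EUR 30m fixed cap applies instead.
print(max_fine(100_000_000, "prohibited_practices_or_data_governance"))    # 30000000
```

In other words, the fixed amounts act as a floor: for companies with substantial worldwide turnover, it is the percentage figure that bites.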

I see what you mean about quite a bit to digest.  Anything I need to do now?

Although the regulation is likely to be subject to various changes over the next few months – particularly in the areas of biometric identification and social scoring – the fundamental principles are unlikely to change.  So if you’re involved with the development, marketing, sale or distribution of software that constitutes a high-risk AI system, then you may want to start thinking about how the regulation will impact areas such as the accuracy of your datasets, the risk of bias, and algorithmic transparency.


UK adequacy decisions – lukewarm thumbs-up from the EDPB

15/04/21 – If you’ve been following the progress of the UK adequacy decisions (see updates from December 2020 and March 2021), you will know that we have been waiting for the European Data Protection Board’s opinions on the draft UK adequacy decisions.  As per the EDPB’s press release yesterday, these opinions have now been adopted.

Although the full texts are not yet available, the press release suggests that the EDPB’s opinions broadly support the adequacy decisions, noting that the UK has “for the most part” mirrored the GDPR and the Law Enforcement Directive in its data protection framework, and that as a result many aspects of the UK’s law and practice are “essentially equivalent”.

However, the EDPB also emphasises that the alignment of the EU and UK data protection frameworks must be maintained going forward, and welcomes the European Commission’s decision to limit the duration of the adequacy decisions (to 4 years).  The EDPB also urges the Commission to closely monitor how the UK applies restrictions to onward transfers of EEA personal data, including transfers pursuant to adequacy decisions adopted by the UK, international agreements concluded between the UK and third countries, or derogations.

The next step is for the adequacy decisions to be approved by representatives of all 27 EU member states via the so-called ‘comitology procedure’, following which they can be adopted by the Commission.  I will keep you posted.


EU-UK data transfers – update

30/03/21 – As part of the Trade and Cooperation Agreement announced just before Christmas, the EU and the UK agreed a six-month ‘bridging period’ allowing transfers of personal data from the EEA to the UK to continue freely until 30th June 2021 – more detail here.  Half-way through the bridging period is probably a good time for an update.

Update?  Didn’t I read a few weeks ago that the EU issued the UK adequacy decision, and it’s now all done and dusted?

No, not really.  What happened is that on 19th February 2021 the European Commission issued two UK adequacy decisions (one for transfers under the GDPR, and the other for transfers under the Law Enforcement Directive), but only in draft form.  The drafts have now been passed to the European Data Protection Board (EDPB) for them to review and issue their non-binding (but influential) ‘advisory opinions’.  After the advisory opinions have been issued, and any EDPB-recommended changes have been incorporated into the text of the adequacy decisions, the drafts will then need to be approved by representatives of all 27 EU member states via the so-called ‘comitology procedure’.  Once approved, the adequacy decisions can be formally adopted by the Commission, and become legally effective.

Ah, so not quite done and dusted.  Will this all be wrapped up by 30th June?

Probably.  The good news is that the draft adequacy decisions were issued by the European Commission without any material conditions attached to them, i.e. the Commission considers that the UK’s data protection laws and systems are adequate.  Also positive was the prediction of the EU Head of International Data Flows, Bruno Gencarelli, who said in a LinkedIn webinar on 27th January 2021 that he was confident the UK adequacy decisions would be adopted “by the end of the bridging period”.  Ditto the prediction of the EU Commissioner for Justice, Didier Reynders, who, according to Vincent Manancourt of politico.eu, said on 16th February 2021 that the EDPB’s “opinion on UK data flows decision [is] expected mid-April […] Whole process to be wrapped up by Brussels by end of May/early June”.

Less positive were the widely-publicised comments of the UK culture secretary Oliver Dowden, who in his FT article on 27th February said: “we do not need to copy and paste the EU’s rule book, the General Data Protection Regulation, word-for-word”; and that the UK can now be more “agile” when it comes to “[striking] our own international data partnerships with some of the world’s fastest growing economies. […] The EU has been slow to act on this, declaring only 12 countries ’adequate’ in the past few decades”.  Announcing the UK’s intention to diverge from the GDPR and criticising the EU’s historic approach to adopting adequacy decisions, all while the EDPB is busy considering the UK’s application, may not have been Mr Dowden’s best idea.

All very interesting, but I’ve got data flows with EU customers and other data partners which need to continue after 30th June.  What do I need to do?

You’ve got a number of options, including:

  1. Do nothing. If the GDPR adequacy decision isn’t adopted by 30th June 2021 (and the bridging period isn’t extended), then deal with the situation on 1st July.  If this option appeals, then bear in mind that although you may be willing to take a risk-based view on the legality of your post-30th June data flows, your EEA data partner may not.
  2. Put in place a valid transfer mechanism or safeguard (most likely Standard Contractual Clauses (SCCs)) ASAP, even though they may end up not being needed. This is clearly ‘best practice’, and consistent with the ICO’s recommendation:  “If you receive personal data from the EEA, we recommend you put alternative safeguards in place before the end of April”.
  3. Contact each of your EEA data partners, and suggest to them that if the GDPR adequacy decision has not been adopted by say end of May, or even mid-June, then you will both work together with a view to putting in place SCCs by 30th June.

