The EU’s proposed AI Regulation
High-risk requirements
High-risk AI systems must comply with several mandatory requirements before the system can be placed on the market or put into service, or before its output can be used in the EU.
Conformity assessment (as described above) is intended to certify that the system in question meets these requirements:
- Risk management. A risk management system must be established, implemented, documented, maintained and regularly updated. It must identify and analyse the foreseeable risks associated with the AI system, eliminate or reduce those risks so far as possible and, where risks cannot be eliminated, implement appropriate control measures.
- Data and data governance. High-risk AI systems which involve training models with data must use training, validation and testing data sets which are subject to appropriate data governance and management practices; are relevant, representative, free of errors and complete; and take into account the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the AI system is intended to be used.
- Technical documentation, containing as a minimum the information detailed in Annex IV, including a detailed description of the elements of the AI system and the process of its development, must be drawn up before the AI system is placed on the market or put into service, and must be kept up to date.
- Record keeping. High-risk AI systems must have logging capabilities ensuring traceability of the AI system’s functioning throughout its lifecycle, at a level appropriate to its intended purpose (see the logging sketch after this list).
- Transparency and provision of information to users. The operation of high-risk AI systems must be (i) sufficiently transparent to enable users to interpret the AI system’s output and use it appropriately; and (ii) accompanied by instructions for use, including any known and foreseeable circumstances that may lead to risks to health and safety or fundamental rights, human oversight measures, and the expected lifetime of the high-risk AI system. The information must be concise, complete, correct and clear, and must be relevant, accessible and comprehensible to users.
- Human oversight. High-risk AI systems must be capable of being overseen by natural persons, with the aim of preventing or minimising risks to health, safety or fundamental rights. The provider must identify and, where possible, build oversight measures into the AI system. The designated individual should fully understand the capacities and limitations of the AI system and be able to monitor its operation and output for signs of anomalies, dysfunctions and unexpected performance. Humans should be able to intervene and stop the system (see the oversight sketch after this list).
- Accuracy, robustness and cybersecurity. High-risk AI systems must, in light of their intended purpose, be appropriately accurate, and the relevant accuracy metrics must be declared in the accompanying instructions for use. The systems must also be appropriately robust and resilient to errors, faults or inconsistencies, and resilient to attempts by third parties to exploit system vulnerabilities, including through data poisoning and adversarial examples.
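By way of illustration only, the record-keeping requirement points towards structured, timestamped logging around each model inference, so that any output can later be traced back to the model version and input that produced it. The Python sketch below is a minimal assumed design, not a prescribed implementation; the record fields and the log_inference helper are hypothetical.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical traceability log: one JSON record per inference, appended
# to a file. Field names are illustrative, not taken from the Regulation.
logger = logging.getLogger("ai_audit_trail")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("audit_trail.jsonl"))

def log_inference(model_version: str, input_summary: dict, output: dict) -> str:
    """Append one traceability record and return its unique event id."""
    event_id = str(uuid.uuid4())
    record = {
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,  # e.g. a feature hash, not raw personal data
        "output": output,
    }
    logger.info(json.dumps(record))
    return event_id

# Example: record a single automated decision.
log_inference("scoring-model-1.4.2", {"input_hash": "ab12"}, {"score": 0.73})
```

A production system would add retention periods, access controls and tamper-evidence, but even this minimal structure shows how traceability “throughout the lifecycle” translates into a concrete engineering obligation.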
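In the same illustrative spirit, the human oversight requirement implies an architecture in which borderline outputs are referred to a natural person and an overseer can halt the system entirely. The sketch below is an assumed design: the OversightGate class, its confidence threshold and its stop switch are hypothetical, not terms from the Regulation.

```python
import threading

class OversightGate:
    """Hypothetical human-in-the-loop gate for a high-risk AI system."""

    def __init__(self, confidence_threshold: float = 0.9):
        self.confidence_threshold = confidence_threshold
        self._stopped = threading.Event()  # set by the human overseer

    def stop(self) -> None:
        """Called by the overseer to halt all automated decisions."""
        self._stopped.set()

    def decide(self, prediction: dict) -> dict:
        if self._stopped.is_set():
            raise RuntimeError("System stopped by human overseer")
        if prediction["confidence"] < self.confidence_threshold:
            # Low confidence: defer to a human rather than acting automatically.
            return {"action": "refer_to_human", "prediction": prediction}
        return {"action": "proceed", "prediction": prediction}

gate = OversightGate()
print(gate.decide({"label": "approve", "confidence": 0.95}))  # proceeds
print(gate.decide({"label": "approve", "confidence": 0.60}))  # referred to a human
```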
These high-risk requirements will be onerous to comply with. Potential issues include:
- The requirement that data sets be “representative” does not sit easily with GDPR restrictions on processing sensitive personal data. The AI Regulation does permit a provider of a high-risk AI system to process the GDPR special categories of personal data for the purposes of bias monitoring, detection and correction in those systems, subject to certain safeguards, but minimal detail is provided as to what those safeguards would encompass.
- The requirement that data used to train AI systems be “free of errors and complete” may well be unachievable in practice, given the scale of the data sets used in machine learning (see the data-quality sketch after this list).
- The requirement, in certain circumstances, for a designated individual to be able to “fully understand” the operation of a complex AI system sets a very high bar; this is unlikely to be an attractive role for anyone to take on.
- Traceability requirements may pose problems for certain deep learning AI systems, where it is difficult to clearly explain and trace how the system is functioning.
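To make the data-quality concerns above concrete, the sketch below shows minimal automated checks for completeness and rough representativeness of a training set. It is an assumption-laden illustration: the column names, reference population shares and tolerance are invented for the example, and real data governance would go considerably further.

```python
import pandas as pd

def check_training_data(df: pd.DataFrame, attribute: str,
                        reference_shares: dict, tolerance: float = 0.05) -> list:
    """Return findings on missing values and group-share deviations."""
    findings = []

    # “Free of errors and complete”: flag missing values per column.
    missing = df.isna().sum()
    for column, count in missing[missing > 0].items():
        findings.append(f"column '{column}' has {count} missing value(s)")

    # “Representative”: compare observed group shares with reference shares.
    observed = df[attribute].value_counts(normalize=True)
    for group, expected in reference_shares.items():
        gap = abs(observed.get(group, 0.0) - expected)
        if gap > tolerance:
            findings.append(f"group '{group}' deviates by {gap:.2%} from its reference share")
    return findings

# Invented example data: one missing income value, skewed age distribution.
data = pd.DataFrame({"age_band": ["18-30", "18-30", "31-50", "51+"],
                     "income": [25_000, None, 48_000, 39_000]})
print(check_training_data(data, "age_band",
                          {"18-30": 0.25, "31-50": 0.45, "51+": 0.30}))
```

Checks of this kind can detect gaps and skew, but they also illustrate the objection above: at the scale of modern training sets, “free of errors and complete” is a standard that automated validation can approximate, not guarantee.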
On whom do these obligations fall?
A Provider is anyone who develops an AI system, or has it developed with a view to putting it on the market or into service under its own name or trademark. Providers of high-risk AI systems have primary responsibility for ensuring compliance with the AI Regulation. They must:
- ensure that the AI system complies with the high-risk requirements
- manage the conformity assessment procedures and inform national competent authorities of any non-compliance
- put in place a post-market monitoring system to collect, document and analyse data throughout the lifetime of the AI system and to evaluate the AI system’s continuous compliance
A third party will also be treated as a provider if, for example, it places on the market or puts into service an AI-enabled product under its own name or brand, or makes a substantial modification to an existing high-risk AI system.
A User is anyone deploying an AI system under its authority, excluding personal, non-professional use (e.g. by everyday consumers). Users have more limited obligations than providers, but still have various monitoring and information obligations. They must:
- use the AI system in accordance with its instructions for use
- ensure that input data is relevant in view of the intended purpose of the AI system
- monitor the operation of the AI system on the basis of the instructions for use
- inform the provider or distributor of any risk to health and safety or fundamental rights
- keep automatically generated logs, to the extent that such logs are under their control
Manufacturers of products which are already regulated under EU sectoral legislation (cars, medical devices, etc.) and which use high-risk AI are subject to the same obligations as providers. There are also obligations on importers and distributors of high-risk AI systems.
Authors
- Giles Pratt, Partner (London)
- Dr. Christoph Werkmeister, Partner (Düsseldorf)
- Sascha Schubert, Partner (Brussels)
- Satya Staes Polet, Partner (Brussels)
- Ben Cohen, Senior Associate (London)