CHAPTER III – High-risk AI systems (Art. 6-49)

Art. 6 AI Act – Classification rules for high-risk AI systems

Art. 7 AI Act – Amendments to Annex III

Art. 8 AI Act – Compliance with the requirements

Art. 9 AI Act – Risk management system

Art. 10 AI Act – Data and data governance

Art. 11 AI Act – Technical documentation

Art. 12 AI Act – Record-keeping

Art. 13 AI Act – Transparency and provision of information to deployers

Art. 14 AI Act – Human oversight

Art. 15 AI Act – Accuracy, robustness and cybersecurity

  1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.
  2. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
  3. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
  4. High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems. Technical and organisational measures shall be taken in this regard. The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

    High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.

  5. High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

    The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.
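Editorial note (illustration, not part of the legal text): the feedback-loop risk named in paragraph 4 can be made concrete with a toy simulation. All names and numbers below are hypothetical. A model retrained purely on its own hard labels amplifies an initial bias; mixing in independently audited ground truth, one possible mitigation measure, damps the loop.

```python
# Hypothetical sketch of the feedback-loop risk in Art. 15(4): a model's
# bias score drifts when it is retrained on its own outputs, versus being
# anchored by independently audited data. Not prescribed by the Act.

def retrain(score, self_label_weight):
    """One retraining round: the score moves toward the model's own hard
    label (weight w) and toward an unbiased ground truth of 0.5 (weight 1-w)."""
    w = self_label_weight
    return w * round(score) + (1 - w) * 0.5

def simulate(rounds, self_label_weight, score=0.6):
    """Start from a slightly biased score of 0.6 and retrain repeatedly."""
    for _ in range(rounds):
        score = retrain(score, self_label_weight)
    return score

unmitigated = simulate(10, self_label_weight=1.0)  # pure feedback loop
mitigated = simulate(10, self_label_weight=0.3)    # audited data mixed in

print(unmitigated)  # the initial 0.6 bias is amplified all the way to 1.0
print(mitigated)    # settles near 0.65: the feedback loop is damped
```

The fixed weights stand in for the "appropriate mitigation measures" the provision requires; in practice the mitigation would be a documented retraining and audit process, not a constant.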

Related

Recital 66

Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights. As no other less trade restrictive measures are reasonably available, those requirements are not unjustified restrictions to trade.

Recital 74

High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the generally acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take due consideration of the mitigation of risks and the negative impacts of the AI system. The expected level of performance metrics should be declared in the accompanying instructions of use. Providers are urged to communicate that information to deployers in a clear and easily understandable way, free of misunderstandings or misleading statements. Union law on legal metrology, including Directives 2014/31/EU (35) and 2014/32/EU (36) of the European Parliament and of the Council, aims to ensure the accuracy of measurements and to help the transparency and fairness of commercial transactions. In that context, in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission should take note and collaborate with international partners working on metrology and relevant measurement indicators relating to AI.


(35) Directive 2014/31/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of non-automatic weighing instruments (OJ L 96, 29.3.2014, p. 107).
(36) Directive 2014/32/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of measuring instruments (OJ L 96, 29.3.2014, p. 149).

Recital 75

Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or otherwise undesirable behaviour that may result from limitations within the systems or the environment in which the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and organisational measures should be taken to ensure robustness of high-risk AI systems, for example by designing and developing appropriate technical solutions to prevent or minimise harmful or otherwise undesirable behaviour. Those technical solutions may include for instance mechanisms enabling the system to safely interrupt its operation (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
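Editorial note (illustration, not part of the legal text): the fail-safe mechanism Recital 75 describes, safely interrupting operation outside predetermined boundaries, can be sketched as a guard around a prediction function. All names below are hypothetical.

```python
# Hypothetical sketch of a fail-safe plan in the sense of Recital 75:
# a wrapper that interrupts operation when an input falls outside the
# system's predetermined operating boundaries, instead of producing an
# unvalidated output. Not prescribed by the Regulation.

class OutOfBoundsError(Exception):
    """Raised when the system operates outside its declared boundaries."""

def fail_safe(predict, lower, upper):
    """Wrap a prediction function so it refuses inputs outside [lower, upper]."""
    def guarded(x):
        if not (lower <= x <= upper):
            raise OutOfBoundsError(f"input {x} outside operating range")
        return predict(x)
    return guarded

model = fail_safe(lambda x: 2 * x, lower=0.0, upper=100.0)
print(model(12.0))   # inside the boundary: normal operation
try:
    model(250.0)     # outside: the system safely interrupts
except OutOfBoundsError as e:
    print("interrupted:", e)
```

Raising an exception here stands in for whatever interrupt path the real system uses (handover to a human, fallback mode, shutdown); the point is that out-of-boundary inputs never reach the untrusted prediction path.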

Recital 76

Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
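Editorial note (illustration, not part of the legal text): one minimal security control against the data poisoning attack named in Recital 76 is to screen incoming training data for statistical outliers before it enters the training set. The filter below is a hypothetical sketch, not a measure the Regulation prescribes.

```python
# Hypothetical sketch of a data-poisoning screen (Recital 76): reject
# candidate training points whose values lie far outside the distribution
# of previously vetted data, using a simple z-score filter.
from statistics import mean, stdev

def screen(vetted, candidates, z_max=3.0):
    """Keep candidate values within z_max standard deviations of the
    vetted data's mean; flagged points go to manual review instead."""
    mu, sigma = mean(vetted), stdev(vetted)
    kept, flagged = [], []
    for x in candidates:
        (kept if abs(x - mu) <= z_max * sigma else flagged).append(x)
    return kept, flagged

vetted = [9.8, 10.1, 10.0, 9.9, 10.2]     # previously audited samples
candidates = [10.05, 9.95, 42.0]          # 42.0 models a poisoned sample
kept, flagged = screen(vetted, candidates)
print(kept, flagged)  # → [10.05, 9.95] [42.0]
```

A z-score filter only catches crude poisoning; subtler attacks (clean-label poisoning, adversarial examples) need the broader controls the recital points to, such as hardening the underlying ICT infrastructure.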

Recital 77

Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI systems which fall within the scope of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, in accordance with that regulation may demonstrate compliance with the cybersecurity requirements of this Regulation by fulfilling the essential cybersecurity requirements set out in that regulation. When high-risk AI systems fulfil the essential requirements of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof issued under that regulation. To that end, the assessment of the cybersecurity risks, associated to a product with digital elements classified as high-risk AI system according to this Regulation, carried out under a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation.

Recital 78

The conformity assessment procedure provided by this Regulation should apply in relation to the essential cybersecurity requirements of a product with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and classified as a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of assurance for critical products with digital elements covered by a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements. Therefore, by way of derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as important and critical products with digital elements pursuant to a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and to which the conformity assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements insofar as the essential cybersecurity requirements of that regulation are concerned. In this case, for all the other aspects covered by this Regulation the respective provisions on conformity assessment based on internal control set out in an annex to this Regulation should apply. Building on the knowledge and expertise of ENISA on the cybersecurity policy and tasks assigned to ENISA under the Regulation (EU) 2019/881 of the European Parliament and of the Council (37), the Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.


(37) Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 15).

Art. 16 AI Act – Obligations of providers of high-risk AI systems

Art. 17 AI Act – Quality management system

Art. 18 AI Act – Documentation keeping

Art. 19 AI Act – Automatically generated logs

Art. 20 AI Act – Corrective actions and duty of information

Art. 21 AI Act – Cooperation with competent authorities

Art. 22 AI Act – Authorised representatives of providers of high-risk AI systems

Art. 23 AI Act – Obligations of importers

Art. 24 AI Act – Obligations of distributors

Art. 25 AI Act – Responsibilities along the AI value chain

Art. 26 AI Act – Obligations of deployers of high-risk AI systems

Art. 27 AI Act – Fundamental rights impact assessment for high-risk AI systems

Art. 28 AI Act – Notifying authorities

Art. 29 AI Act – Application of a conformity assessment body for notification

Art. 30 AI Act – Notification procedure

Art. 31 AI Act – Requirements relating to notified bodies

Art. 32 AI Act – Presumption of conformity with requirements relating to notified bodies

Art. 33 AI Act – Subsidiaries of notified bodies and subcontracting

Art. 34 AI Act – Operational obligations of notified bodies

Art. 35 AI Act – Identification numbers and lists of notified bodies

Art. 36 AI Act – Changes to notifications

Art. 37 AI Act – Challenge to the competence of notified bodies

Art. 38 AI Act – Coordination of notified bodies

Art. 39 AI Act – Conformity assessment bodies of third countries

Art. 40 AI Act – Harmonised standards and standardisation deliverables

Art. 41 AI Act – Common specifications

Art. 42 AI Act – Presumption of conformity with certain requirements

Art. 43 AI Act – Conformity assessment

Art. 44 AI Act – Certificates

Art. 45 AI Act – Information obligations of notified bodies

Art. 46 AI Act – Derogation from conformity assessment procedure

Art. 47 AI Act – EU declaration of conformity

Art. 48 AI Act – CE marking

Art. 49 AI Act – Registration