CHAPTER III – High-risk AI systems (Art. 6-49)

Art. 6 AI Act – Classification rules for high-risk AI systems

Art. 7 AI Act – Amendments to Annex III

Art. 8 AI Act – Compliance with the requirements

Art. 9 AI Act – Risk management system

Art. 10 AI Act – Data and data governance

Art. 11 AI Act – Technical documentation

Art. 12 AI Act – Record-keeping

Art. 13 AI Act – Transparency and provision of information to deployers

Art. 14 AI Act – Human oversight

  1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
  2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section.
  3. The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk AI system, and shall be ensured through either one or both of the following types of measures:
    (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
    (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer.
  4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:
    (a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;
    (b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
    (c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available;
    (d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system;
    (e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
  5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority.

    The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the application of this requirement to be disproportionate.
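The two-person verification rule in paragraph 5 can be pictured as a simple gate in software. The sketch below is purely illustrative and not prescribed by the Act; all class and function names are hypothetical, and a real deployment would additionally need the competence, training and authority checks the Article requires of the reviewers.

```python
from dataclasses import dataclass, field

@dataclass
class Verification:
    """One natural person's separate confirmation of an identification."""
    reviewer_id: str
    confirmed: bool

@dataclass
class IdentificationResult:
    """Output of a hypothetical biometric identification system (Annex III, point 1(a))."""
    subject_id: str
    verifications: list[Verification] = field(default_factory=list)

def may_act_on(result: IdentificationResult, min_reviewers: int = 2) -> bool:
    """Art. 14(5)-style gate: no action or decision on the basis of the
    identification unless it has been separately verified and confirmed
    by at least two distinct natural persons."""
    confirming = {v.reviewer_id for v in result.verifications if v.confirmed}
    return len(confirming) >= min_reviewers

# A single confirmation is not enough; two distinct reviewers are required.
r = IdentificationResult("subject-42")
r.verifications.append(Verification("officer-a", confirmed=True))
assert not may_act_on(r)
r.verifications.append(Verification("officer-b", confirmed=True))
assert may_act_on(r)
```

Note that the gate counts distinct reviewer identities, not raw confirmations, so the same person confirming twice does not satisfy the threshold.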

Related

Recital 66

Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights. As no other less trade restrictive measures are reasonably available, those requirements are not unjustified restrictions to trade.

Recital 73

High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle. To that end, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. It is also essential, as appropriate, to ensure that high-risk AI systems include mechanisms to guide and inform a natural person to whom human oversight has been assigned to make informed decisions if, when and how to intervene in order to avoid negative consequences or risks, or stop the system if it does not perform as intended. Considering the significant consequences for persons in the case of an incorrect match by certain biometric identification systems, it is appropriate to provide for an enhanced human oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons. Those persons could be from one or more entities and include the person operating or using the system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate verifications by the different persons are automatically recorded in the logs generated by the system. Given the specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not apply where Union or national law considers the application of that requirement to be disproportionate.
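Recital 73 suggests that automatically recording the separate verifications in the system's logs can be sufficient to evidence the two-person rule without adding undue burden. A minimal sketch of such automatic recording, assuming a hypothetical in-memory audit log (the Act does not prescribe any log format):

```python
import json
import time

def record_verification(log: list[str], system_id: str,
                        reviewer_id: str, confirmed: bool) -> None:
    """Append one reviewer's separate verification as a structured log
    entry, so the confirmations required by Art. 14(5) are captured
    automatically in the logs generated by the system (cf. Recital 73)."""
    entry = {
        "ts": time.time(),          # when the verification was made
        "system": system_id,        # which high-risk AI system produced the match
        "reviewer": reviewer_id,    # the natural person verifying
        "confirmed": confirmed,     # outcome of that person's review
    }
    log.append(json.dumps(entry))

# Two distinct reviewers confirm; each confirmation becomes its own entry.
audit_log: list[str] = []
record_verification(audit_log, "rbi-01", "officer-a", True)
record_verification(audit_log, "rbi-01", "officer-b", True)
assert len(audit_log) == 2
```

In practice such entries would be written to the record-keeping facilities required by Art. 12 rather than a Python list; the list here only keeps the example self-contained.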

Art. 15 AI Act – Accuracy, robustness and cybersecurity

Art. 16 AI Act – Obligations of providers of high-risk AI systems

Art. 17 AI Act – Quality management system

Art. 18 AI Act – Documentation keeping

Art. 19 AI Act – Automatically generated logs

Art. 20 AI Act – Corrective actions and duty of information

Art. 21 AI Act – Cooperation with competent authorities

Art. 22 AI Act – Authorised representatives of providers of high-risk AI systems

Art. 23 AI Act – Obligations of importers

Art. 24 AI Act – Obligations of distributors

Art. 25 AI Act – Responsibilities along the AI value chain

Art. 26 AI Act – Obligations of deployers of high-risk AI systems

Art. 27 AI Act – Fundamental rights impact assessment for high-risk AI systems

Art. 28 AI Act – Notifying authorities

Art. 29 AI Act – Application of a conformity assessment body for notification

Art. 30 AI Act – Notification procedure

Art. 31 AI Act – Requirements relating to notified bodies

Art. 32 AI Act – Presumption of conformity with requirements relating to notified bodies

Art. 33 AI Act – Subsidiaries of notified bodies and subcontracting

Art. 34 AI Act – Operational obligations of notified bodies

Art. 35 AI Act – Identification numbers and lists of notified bodies

Art. 36 AI Act – Changes to notifications

Art. 37 AI Act – Challenge to the competence of notified bodies

Art. 38 AI Act – Coordination of notified bodies

Art. 39 AI Act – Conformity assessment bodies of third countries

Art. 40 AI Act – Harmonised standards and standardisation deliverables

Art. 41 AI Act – Common specifications

Art. 42 AI Act – Presumption of conformity with certain requirements

Art. 43 AI Act – Conformity assessment

Art. 44 AI Act – Certificates

Art. 45 AI Act – Information obligations of notified bodies

Art. 46 AI Act – Derogation from conformity assessment procedure

Art. 47 AI Act – EU declaration of conformity

Art. 48 AI Act – CE marking

Art. 49 AI Act – Registration