
CHAPTER IX – Post-market monitoring, information sharing and market surveillance (Art. 72-94)

Art. 72 AI Act – Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

Art. 73 AI Act – Reporting of serious incidents

Art. 74 AI Act – Market surveillance and control of AI systems in the Union market

Art. 75 AI Act – Mutual assistance, market surveillance and control of general-purpose AI systems

Art. 76 AI Act – Supervision of testing in real world conditions by market surveillance authorities

Art. 77 AI Act – Powers of authorities protecting fundamental rights

Art. 78 AI Act – Confidentiality

Art. 79 AI Act – Procedure at national level for dealing with AI systems presenting a risk

Art. 80 AI Act – Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III

Art. 81 AI Act – Union safeguard procedure

Art. 82 AI Act – Compliant AI systems which present a risk

Art. 83 AI Act – Formal non-compliance

Art. 84 AI Act – Union AI testing support structures

Art. 85 AI Act – Right to lodge a complaint with a market surveillance authority

Art. 86 AI Act – Right to explanation of individual decision-making

Art. 87 AI Act – Reporting of infringements and protection of reporting persons

Art. 88 AI Act – Enforcement of the obligations of providers of general-purpose AI models

Art. 89 AI Act – Monitoring actions

Art. 90 AI Act – Alerts of systemic risks by the scientific panel

Art. 91 AI Act – Power to request documentation and information

Art. 92 AI Act – Power to conduct evaluations

  1. The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned:
    (a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant to Article 91 is insufficient; or
    (b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following a qualified alert from the scientific panel in accordance with Article 90(1), point (a).
  2. The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria outlined in Article 68(2).
  3. For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model concerned through APIs or further appropriate technical means and tools, including source code.
  4. The request for access shall state the legal basis, the purpose and reasons of the request and set the period within which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.
  5. The providers of the general-purpose AI model concerned or its representative shall supply the information requested. In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the general-purpose AI model concerned.
  6. The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions for the evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection thereof. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).
  7. Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate such risks.
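Note on paragraph 3: the Regulation does not prescribe what access "through APIs or further appropriate technical means and tools" looks like in practice. As a minimal, purely illustrative sketch, the Python snippet below shows one way an appointed evaluator might query a provider-hosted model endpoint and collect outputs for review. The endpoint URL, access token, and request/response fields are hypothetical assumptions, not terms of the Regulation or of any real provider's API.

# Illustrative sketch only. The endpoint, authentication scheme and
# response format below are hypothetical; Article 92(3) leaves the
# concrete technical means of access to the individual request.
import json
import urllib.request

API_URL = "https://provider.example/v1/generate"  # hypothetical endpoint
API_TOKEN = "evaluation-access-token"             # access granted further to an Art. 92(4) request

def query_model(prompt: str) -> str:
    """Send one evaluation prompt to the model and return its output."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 256}).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["text"]  # assumed response field

def run_evaluation(prompts: list[str]) -> list[dict]:
    """Run a batch of probes, recording prompt/output pairs for the evaluation file."""
    return [{"prompt": p, "output": query_model(p)} for p in prompts]

if __name__ == "__main__":
    # Toy probe; a real evaluation would use a vetted benchmark suite.
    print(json.dumps(run_evaluation(["Describe your safety limitations."]), indent=2))

This sketch covers only black-box querying; a source-code-level review, which paragraph 3 also contemplates, would go beyond such API access.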
Related

Recital 164

The AI Office should be able to take the necessary actions to monitor the effective implementation of and compliance with the obligations for providers of general-purpose AI models laid down in this Regulation. The AI Office should be able to investigate possible infringements in accordance with the powers provided for in this Regulation, including by requesting documentation and information, by conducting evaluations, as well as by requesting measures from providers of general-purpose AI models. When conducting evaluations, in order to make use of independent expertise, the AI Office should be able to involve independent experts to carry out the evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take appropriate measures, including risk mitigation measures in the case of identified systemic risks as well as restricting the making available on the market, withdrawing or recalling the model. As a safeguard, where needed beyond the procedural rights provided for in this Regulation, providers of general-purpose AI models should have the procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply mutatis mutandis, without prejudice to more specific procedural rights provided for by this Regulation.

Art. 93 AI Act – Power to request measures

Art. 94 AI Act – Procedural rights of economic operators of the general-purpose AI model