
Which AI systems are considered high-risk?

Applications of AI that may pose a serious risk to health, safety or fundamental rights are classified as high-risk. These high-risk use cases include:

  • AI systems used on their own or as safety components in critical infrastructure (e.g. transport), whose failure could endanger the life and health of citizens
  • AI systems used in education that can determine access to education and the course of a person's professional life (e.g. exam scoring)
  • AI systems used as safety components of products (e.g. AI applications in robot-assisted surgery)
  • AI tools for recruitment, workforce management and access to self-employment (e.g. CV-ranking software for recruitment)
  • AI systems that determine access to essential private and public services (e.g. credit scoring that can prevent citizens from obtaining a loan)
  • AI systems used for remote biometric identification, emotion recognition and biometric categorisation (e.g. an AI system for the retrospective identification of a thief)
  • AI systems used in law enforcement that may interfere with people's fundamental rights (e.g. assessing the reliability of evidence)
  • AI systems used in migration, asylum and border control management (e.g. automated examination of visa applications)
  • AI systems used in the administration of justice and democratic processes (e.g. AI solutions for preparing judicial decisions)

Relevant Articles: Article 6(1), Annex I and Annex III

 

Are there any cases where the AI systems listed in Annex III are not considered as high risk?

An AI system listed in Annex III is not classified as high-risk if it does not pose a significant risk of harm to health, safety or fundamental rights, for example because it does not materially influence the outcome of decision-making. For this exception to apply, at least one of the conditions set out in Article 6(3) of the AI Act must be fulfilled.

If an AI system listed in Annex III profiles individuals, it will always be considered high-risk; such AI systems are not covered by the above exception.

Relevant Articles: Article 6(3), Recital 53

 

We have already developed and marketed a high-risk AI system. Do we need to ensure compliance with the AI Act even though the provisions of the Act were not yet applicable when the AI system was marketed?

The AI Act applies to high-risk AI systems placed on the market or put into service before 2 August 2026 only if those systems undergo a significant change in their design after that date. Irrespective of any such change, providers and deployers of high-risk AI systems intended to be used by public authorities must take the necessary steps to comply with the requirements and obligations of the AI Act by 2 August 2030.

The European Commission will provide guidance on the practical implementation of the provisions relating to substantial modification; once adopted, this guidance will be published on this website.

Relevant Articles: Articles 3(23), 43(4) and 111