The four risk classes of the EU AI Act

Reading time: 6 minutes

The EU AI Act follows a risk-based approach under which AI systems are divided into four risk classes. An overview of the most important requirements and consequences.

Knowing the four risk classes of the EU AI Act is one of the most important basics. A distinction is made between prohibited, high-, medium- and low-risk AI systems.

Articles of the EU AI Act mentioned in this post:

  • Article 3 No. 1 EU AI Act
  • Article 5 EU AI Act
  • Article 6 EU AI Act
  • Article 10 EU AI Act
  • Article 50 EU AI Act
  • Annexes I, II and III EU AI Act

Four risk classes

The following four risk classes are distinguished within the meaning of the EU AI Act:

  • unacceptable risk (prohibited AI practices),
  • high risk (high-risk AI systems),
  • medium risk (AI systems that interact with humans) and
  • minimal or no risk (all other AI systems).

All four risk classes refer to AI systems within the meaning of Article 3 No. 1 EU AI Act. AI models are subject to different criteria; more on this in another article.
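How the tiering works can be summarized in a few lines of code. A minimal sketch, with the class names and summaries as our own paraphrase of the Act:

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes of the EU AI Act."""
    UNACCEPTABLE = "prohibited AI practices (Article 5)"
    HIGH = "high-risk AI systems (Article 6, Annexes I and III)"
    MEDIUM = "AI systems subject to transparency obligations (Article 50)"
    MINIMAL = "all other AI systems"

def obligations(risk: RiskClass) -> str:
    """Rough summary of what each class means for providers and operators."""
    return {
        RiskClass.UNACCEPTABLE: "May not be placed on the EU market at all.",
        RiskClass.HIGH: "Conformity assessment plus risk/quality management and documentation.",
        RiskClass.MEDIUM: "Users must be informed that they are dealing with AI.",
        RiskClass.MINIMAL: "No binding obligations; voluntary codes of conduct recommended.",
    }[risk]

print(obligations(RiskClass.HIGH))
```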

The four risk classes of the EU AI Act as a color-coded overview:

[Figure: the four risk classes of the EU AI Act as a color-coded pyramid]

The four-level risk pyramid has become an iconic symbol of the EU AI Act and is worth internalizing. Even though most AI applications ultimately involve medium and, above all, low risks (which entail comparatively few requirements for providers and operators), it is important for the general public to know that it is protected in a tiered manner. At each level, the balance between the interests of the general public and those of AI players is weighted individually.

1. Prohibited AI Practices

According to Article 5 EU AI Act, certain AI practices are prohibited outright in the EU because they endanger European values in an unacceptable manner.

This applies to the following AI systems, among others:

  1. AI systems for the purpose of social scoring,
  2. AI systems for the purpose of cognitive behavioral manipulation,
  3. Real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,
  4. AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet,
  5. AI systems for emotion recognition in workplaces and educational institutions.

Exceptions apply with regard to biometric identification (point 3), e.g. for the targeted search for specific potential victims of crime; see also Annex II.

Here again as a graphical overview:

[Figure: overview of prohibited AI practices]

In business, the topics of education as well as workplace and employee protection will play a particularly important role. Everyone should be familiar with the case of HR software that, due to an unconscious bias, favors certain applicants and disadvantages others (by gender, skin color, age, social background, etc.) during recruiting. The EU AI Act aims to reduce these and similar risks through targeted bans.

2. High-Risk AI

Most of the provisions of the EU AI Act relate to AI systems that pose a significant risk to the health, safety or fundamental rights of individuals. The most important standards in this regard are Articles 6 et seq. EU AI Act (including Annexes I and III).

2.1 Two categories

High-risk AI systems are divided into two main categories:

  1. The first category includes AI systems that are either used as safety components in products or operated autonomously and are listed in Annex I. Products containing these AI systems must be subject to third-party conformity assessment. Typical examples include AI systems used as safety components in medical devices, elevators, certain vehicles and aircraft.
  2. The second category includes stand-alone AI systems that have a direct impact on fundamental rights. These are listed in Annex III and include, for example:
    • AI systems used to secure critical infrastructure,
    • systems in the context of educational institutions, e.g. for assessing students and conducting admission tests,
    • AI systems in the field of employment, including recruitment, promotions and performance monitoring of employees, and
    • systems for assessing the creditworthiness of individuals, excluding those used to combat fraud.

Here again as a graphical overview:

[Figure: the two categories of high-risk AI systems]

High-risk AI applications are a central topic of the EU AI Act. The entire Chapter III (= Articles 6-49) is dedicated to them – more than a third of all provisions of the EU AI Act – plus Annexes I and III. In practice, Chapter III will affect comparatively few economic operators as providers of an AI system. However, it is highly relevant in the context of the “AI value chain” regulated in Article 25 EU AI Act. This important topic is dealt with in a separate article.

2.2 Exceptions

Exemptions from the high-risk classification under Annex III are possible, e.g. if the AI system only performs supporting or preparatory tasks or improves the outcome of a human activity. However, these exemptions require a documented assessment and registration of the AI system in the EU database for high-risk AI systems.
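The Act prescribes that the exemption assessment be documented, but not in what form. A minimal sketch, assuming a simple record kept by the provider (all field names are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExemptionAssessment:
    """Illustrative record of a documented Annex III exemption assessment."""
    system_name: str
    annex_iii_area: str                      # e.g. "employment" or "education"
    exemption_ground: str                    # e.g. "performs only a preparatory task"
    justification: str
    assessed_on: date
    registered_in_eu_database: bool = False  # registration is still required

# Hypothetical example for a CV pre-sorting tool:
assessment = ExemptionAssessment(
    system_name="cv-pre-sorter",
    annex_iii_area="employment",
    exemption_ground="performs only a preparatory task",
    justification="Output is a keyword index; ranking decisions remain with humans.",
    assessed_on=date(2025, 1, 15),
)
print(assessment)
```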

In line with the risk-based approach, high-risk AI systems may be offered on the European market provided that they meet certain mandatory requirements and undergo a prior conformity assessment.

2.3 Requirements

Providers must:

  • establish a quality management system within the meaning of Article 17 EU AI Act that ensures compliance with the EU AI Act,
  • set up a risk management system within the meaning of Article 9 EU AI Act that covers the entire life cycle of the AI system,
  • prepare technical documentation within the meaning of Article 11 EU AI Act, and
  • meet the stricter cybersecurity requirements of Article 15 EU AI Act.

The use of data for training AI models must also comply with the requirements of Article 10 EU AI Act. Technical requirements for high-risk AI systems include, for example, logging during operation to ensure traceability and mechanisms that allow users to monitor and control the system.
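The EU AI Act mandates logging for traceability but does not prescribe a log format. A minimal sketch of structured operational logging, with an entirely assumed schema:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger for an AI system in operation. The EU AI Act
# requires traceability through logging; this schema is our own assumption.
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_inference(system_id: str, model_version: str,
                  input_summary: str, output_summary: str) -> None:
    """Record one inference event as a structured, machine-readable log line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        "input_summary": input_summary,    # summarize; avoid logging raw personal data
        "output_summary": output_summary,
    }
    logger.info(json.dumps(event))

# Hypothetical call for a credit-scoring system (an Annex III area):
log_inference("credit-scoring-v2", "2.4.1", "applicant features (hashed)", "score=0.73")
```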

[Figure: provider requirements for high-risk AI systems]

In many cases, the requirements for the affected providers are considerable. Medical technology in particular faces no easy task due to the interplay between the EU AI Act and the Medical Device Regulation (MDR). Not least for this reason, a particularly long transition period of 36 months after the EU AI Act enters into force applies to the relevant requirements (see Article 113 EU AI Act).

2.4 Representatives and instructions

Providers outside the EU who offer AI systems on the EU market must appoint an authorized representative within the EU. Manufacturers of products containing high-risk AI systems, as well as importers and distributors of these systems, also have specific obligations.

Users must operate high-risk AI systems in accordance with the instructions for use, carefully select input data, monitor operation and keep logs. Some users, such as public bodies and private operators of public services, must carry out a fundamental rights impact assessment before using such systems.

[Figure: obligations of representatives, importers, distributors and users]

Perhaps the most important standard in practice is that of operator obligations under Article 26 EU AI Act; these are the obligations of users of high-risk AI. This affects, for example, doctors who use high-risk AI on patients. In this respect, the instructions and operating manuals are of great importance, as is the way in which information can be fed back from the user to the provider.

3. Medium Risk

The EU AI Act defines transparency requirements for certain systems that interact with humans. In particular, Articles 50 et seq. EU AI Act regulate these requirements. This primarily concerns three types of medium-risk AI systems:

Providers of AI systems that enable direct interaction with people, such as AI-based chatbots, must ensure that users are clearly informed that they are interacting with an AI system – unless this is obvious from the context.

Providers of AI systems that generate synthetic audio, image, video or text content must ensure that the output is labeled as artificially generated or manipulated in a machine-readable format, unless an exception applies (e.g. if the AI system only performs an assistive function in standard editing or does not substantially alter the input data). A sketch of what such a label could look like follows below.

Operators of emotion recognition or biometric categorization systems must inform the persons concerned about the operation of the AI system.
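The Act requires the label to be machine-readable but leaves the technical encoding open (provenance standards such as C2PA point in one direction). A minimal sketch that writes an assumed JSON sidecar file next to a piece of generated content – the format is our own illustration, not a prescribed one:

```python
import json
from pathlib import Path

def label_ai_content(content_path: str, generator: str) -> Path:
    """Write a machine-readable sidecar marking a file as AI-generated.

    The sidecar format is illustrative; the EU AI Act mandates the label,
    not this specific encoding.
    """
    marker = {
        "ai_generated": True,
        "generator": generator,
        "disclosure": "This content was artificially generated or manipulated.",
    }
    sidecar = Path(content_path).with_suffix(".ai-label.json")
    sidecar.write_text(json.dumps(marker, indent=2), encoding="utf-8")
    return sidecar

# Hypothetical usage for a generated image:
print(label_ai_content("press_photo.png", "example-image-model"))
```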

[Figure: transparency obligations for medium-risk AI systems]

The “classic” of this risk class is certainly the AI chatbot, which already plays an important role in the practice of many companies. Due to the transition periods, there is still some time to make all AI chatbots “safe”. However, companies should start clarifying this topic in detail at an early stage, for example together with an AI service provider – keyword: sanctions.

In addition, providers of AI systems that can create so-called deep fakes must disclose that the content has been artificially generated or manipulated. Media professionals and bloggers should read the following article on this topic (German only).

4. Low Risk

The vast majority of AI systems will probably be classified as “low risk”. To date, there are no binding requirements for such applications. However, there is a recommendation to voluntarily create or comply with codes of conduct, see Article 95 EU AI Act, among others.

