The AI model in the sense of the EU AI Act – Part 2/3

Reading time: 16 minutes

There is no general definition of AI models within the meaning of the EU AI Act. Part 1 of this article outlined the challenges arising from terminological ambiguities. This second part situates the 5-layer model and systematically interprets the EU AI Act with regard to AI models.

Part 1/3 of this post

  • The second part of this article shows why the 5-layer model is particularly important for high-risk AI.
  • To interpret the term “AI model”, the context in which the term is mentioned in the EU AI Act is then examined.
  • The references are interpreted using the recognized methods (grammatical, historical, teleological and systematic).
  • One result, among others: the first version of the EU AI Act contained an important annex with definitions. It was deleted from the final version without replacement.

Articles of the EU AI Act mentioned in this post (German)

  • Article 2 EU AI Act
  • Article 3 EU AI Act
  • Article 6 EU AI Act
  • Article 10 EU AI Act
  • Article 15 EU AI Act
  • Article 25 EU AI Act
  • Article 53 EU AI Act
  • Article 72 EU AI Act
  • Article 111 EU AI Act
  • Annex I (old version)

Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.

The AI Model in the Sense of the EU AI Act – Part 2

Points 1–3 can be found in the first part of this article. It continues here with point 4.

4. Various challenges

Although the EU AI Act defines the term “AI system” (see Article 3 No. 1 EU AI Act), it does not define the term “AI model”. This leads to operational challenges for AI experts in research and development. But lawyers are also challenged: they must regulate and interpret a subject matter that, in many respects, they do not fully understand.

4.1 Importance of the 5-layer model

In order to get closer to the interaction of AI and regulation in practice, the 5-layer model presented in Part 1 appears to be a well-suited instrument. It helps to shed light on the operational interaction of legal requirements and relevant AI components.

The 5-layer model goes back to the publication by Akshat Dubey, Zewen Yang and Georges Hattab: “A Nested Model for AI Design and Validation”, iScience (2024).


The publication is worth reading in the original. Like countless other current AI publications, it makes clear that the AI model is a component of great importance in AI research and practice. The fact that the EU AI Act offers no general definition for this important asset can be irritating – especially since it uses the term more than 100 times in different contexts.

The 5-layer model structures layer-specific risks and validation approaches, as two illustrations show:

  1. The basic 5-layer model for (medical) AI
  2. An exemplary use in the case of a diabetes diagnosis

4.2 Matching with the EU AI Act

With the help of the 5-layer model, lawyers and AI experts can learn a lot from each other! However, it must also be clear whether and to what extent the specific terms of the EU AI Act and the 5-layer model correspond to each other.

But in which specific AI use cases is matching relevant at all? And what role does the (missing) definition of AI models play in this?

4.2.1 Requirements for high-risk AI systems

Let’s start with the first question:

Matching is particularly relevant for AI models that are integrated into high-risk AI systems. The 5-layer model is important both before the system is placed on the market (Article 3 No. 9 EU AI Act) and in the course of monitoring (Article 72 EU AI Act and recital 74).

Regarding the details:

AI models are explicitly mentioned as components of high-risk AI systems:

  • Article 10 (1) and (6) EU AI Act (Data Governance)
    • A distinction is made between two variants:
      • AI models trained by the AI system (paragraph 1)
      • AI models that are not trained by the AI system (paragraph 6)
  • Article 15 (5) sentence 3 EU AI Act (Resilience) speaks of errors in the AI model as well as model poisoning

According to the opinion represented here, the 5-layer model is a good blueprint for identifying and validating threats in a high-risk AI system! It helps to define what needs to be checked or documented for a single AI model along the life cycle: before it is placed on the market as well as during monitoring. However, it must also be ensured that the understanding of AI models within the meaning of the EU AI Act actually corresponds to that of the 5-layer model.
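To make this concrete: below is a minimal Python sketch of such a lifecycle blueprint. The layer names, the check texts and their mapping to individual articles are illustrative assumptions of this article, not wording taken from the EU AI Act or from the 5-layer model itself.

```python
from dataclasses import dataclass

@dataclass
class LayerCheck:
    layer: str   # layer of the 5-layer model (names here are placeholders)
    check: str   # what must be validated or documented
    stage: str   # lifecycle stage: "pre-market" or "post-market monitoring"
    done: bool = False

# Hypothetical checklist for a single AI model along its life cycle,
# loosely mapped to Articles 10, 15 and 72 EU AI Act.
CHECKLIST = [
    LayerCheck("data", "training, validation and test datasets documented (Art. 10)", "pre-market"),
    LayerCheck("model", "model errors and robustness tested (Art. 15(5))", "pre-market"),
    LayerCheck("model", "resilience against model poisoning assessed (Art. 15(5))", "pre-market"),
    LayerCheck("model", "performance drift observed in operation (Art. 72)", "post-market monitoring"),
]

def open_items(stage: str) -> list:
    """Return all checks of a lifecycle stage that are still open."""
    return [c for c in CHECKLIST if c.stage == stage and not c.done]

for item in open_items("pre-market"):
    print(f"[{item.layer}] {item.check}")
```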

4.2.2 Overarching criteria

AI models are regulated in at least three central areas of the AI Regulation, independently of one another: with regard to research and testing purposes before placing on the market, with regard to GPAI models with a general purpose, and, last but not least, with regard to high-risk AI systems with a specific purpose.

For this reason alone, there is a need for uniform basic criteria across the board:

  • For AI models in high-risk AI systems – regardless of whether they are specific or GPAI models, or whether the models are trained by the AI system or not.
  • As well as for GPAI models – regardless of whether they are generative or not, and regardless of the risk class of an AI system in which they are used.
  • In both cases, the topic of open source plays an important role:
    • It must be comprehensible which criteria an AI model must fulfil in order to satisfy all aspects of open source – for high-risk AI as well as for GPAI.
    • In particular, the question of whether and to what extent partially opened AI models are to be counted as open source is controversial – even beyond the EU AI Act!
  • Irrespective of this, the EU AI Act does not apply to research, testing and development activities on AI models, Article 2 (8) EU AI Act.

Of course, a uniform understanding would also be important with regard to the content of codes of conduct.


Since the EU AI Act lacks an overarching definition of AI models and also lacks guidelines so far, the criteria for AI models must be evaluated by way of legal interpretation.

4.3 Life cycle

The criteria sought should also apply over the entire life cycle of an AI model. This is already evident from the fact that one and the same AI model can pass from (free) research and testing to (regulated) placing on the market and monitoring.

The life cycle will once again play an important role in the third part of this article in connection with the 5-layer model.

5. Methodological interpretation

In the following interpretation, it is important to examine, among other things:

  • In what context the term “AI model” or synonyms are used in the text of the EU AI Act, and
  • how the mentions are to be understood or whether (sub-)criteria can be found in specifications for AI models that can be generalized.

Point 5 deals with many legal details that need to be considered in an interpretation. If you want to see the results of the interpretation directly, you can jump ahead to point 6.

5.1 Many interpretations possible

A special feature of legal interpretation is that courts can interpret the content of the EU AI Act largely autonomously. The background is the principle of judicial independence anchored in the EU. Different judicial instances can therefore arrive at different interpretations of the same issue.

Of course, this also applies to the question of what is to be regarded as an AI model within the meaning of the EU AI Act in an individual case and what is not. The situation is similar for the components of AI models required with regard to monitoring: as long as there are no guidelines, they too are purely a matter of interpretation!


Although the regulations for high-risk AI systems do not have to be complied with until August 2027, it is already important in many cases to consider the future requirements for placing on the market and monitoring. In the design, conception and development phase, foundations are laid today that cannot easily be changed later, despite the grandfathering under Article 111 EU AI Act (note the restriction in paragraph 2 for significant design changes). Without clear guidelines, interpretation alone provides hardly any legal certainty. And even with guidelines, it is not always certain how they will be interpreted.

A hotly debated case in Germany regarding independent judicial interpretation concerns the Medical Device Regulation (MDR), in particular the so-called Rule 11. The case presented in the link is quite revealing in the AI context, as the MDR (like the EU AI Act) is an EU regulation and therefore applies directly in every EU member state.

5.2 Legal disputes likely – especially with AI

The AI business should be under no illusions: AI will lead to litigation – also with regard to AI models:

  • AI involves many risks, which makes critics take action.
  • A lot of money is at stake, which promotes competition disputes.
  • In addition, there are potential disputes in the supply chain.

In the end, many of these disputes end up in court. And as a German saying goes: “In court and on the high seas, one is in God’s hands!”


This makes it all the more important to know the recognized legal methods of interpretation in order to be able to estimate what could – or could not – be understood as an AI model within the meaning of the EU AI Act. The technical AI perspective is not always decisive!

5.3 Legal “attack surface”

With regard to disputes in court, the following aspects must be considered:

  • Article 85 EU AI Act grants natural and legal persons the right to lodge a complaint (in this context, there is an interesting study on complaints to BaFin from consumer protection organizations).
  • In addition, whistleblowers are specially protected via Article 87 EU AI Act. Sensitive internals from AI projects can thus leak out and end up with supervisory authorities, competitors or consumer advocates.

In view of these tools, disputes on AI should not be ruled out:

In all this, it should not be forgotten that a new AI Liability Directive (AILD) is also on the horizon. It refers several times to the EU AI Act. In particular, Article 4 AILD contains a presumption rule that in effect reverses the burden of proof.

5.4 Various demarcations necessary

With regard to the interpretation, it is not only a matter of distinguishing between AI system and AI model, but also, for example, of the not always easy demarcation between AI models and databases. In addition, there are distinctions from a number of similar assets such as powerful conventional algorithms: AI is not always at work here!

A real example of the challenge of clear demarcation is this medical imaging AI (see link). It appears to be universally applicable – is it perhaps even a GPAI model or system? In any case, the terms “AI system” and “AI model” are mixed up several times there. This similar-looking service, on the other hand, tends more towards a database.


Another example is the question of whether and to what extent OCR is to be regarded as AI: it can be based on AI, but it does not have to be. In the field of medicine, too, demarcation questions arise again and again.

5.4.1 AI systems with multiple components

It could be particularly challenging if differently specialized AI models are integrated into one and the same high-risk AI system.

As shown in the following graphic:

  • On the left, a (potential) GPAI model,
  • on the right, a specific AI model for diagnosis.

From a legal point of view, both cases are to be assessed differently in several respects:

  • For the GPAI model on the left, there can be a provider of its own within the meaning of Article 3 No. 3 EU AI Act. The EU AI Act would then be directly applicable to this AI model in accordance with Article 2 (1) EU AI Act.
  • On the right, by contrast, the provider of the AI system is always responsible, because for AI models that are not GPAI models there is no separate provider, Article 2 (1) EU AI Act.
  • The AI model on the right is thus the service of a “third party”. It is only indirectly subject to the EU AI Act, cf. the wording of Article 25 (4) EU AI Act.

From the judges’ point of view, the first question must therefore be whether the parties use terms such as “AI system”, “AI model” or “database” correctly at all. Lawyers like to say “falsa demonstratio non nocet” – a wrong designation does no harm. The same applies in public law: even in administrative proceedings, it must be conscientiously checked whether technical terms are actually used in the way the EU AI Act envisages. There, the principle of official investigation applies!

5.4.2 The AI Value Chain

The interpretation should also take into account the following:

  • The EU AI Act not only regulates the requirements for high-risk AI systems and various AI models, but also the relationship between different actors – in the case of high-risk AI, for example, in Article 25 EU AI Act. In addition, 11 recitals deal with the AI value chain.
  • All of these references are content in need of interpretation. They must be interpreted in order to estimate how the responsibilities of the various parties are intertwined.
  • It is not only the wording that needs to be examined, but also the actual goal that the legislature intends to achieve with the provisions.

The following graphic gives an idea of which parties are to be considered here:


The interaction of the lack of a definition for AI models with the sometimes quite detailed regulations on the AI value chain often raises the question: “Has the legislator really thought about this or that consequence? If he had thought about it, what would he have wanted? What sense and purpose must be taken into account in the context of interpretation?”

5.4.3 Different roles within the meaning of the EU AI Act

Along the previously outlined chain of high-risk medical AI, a distinction should be made:

  • The provider of a medical “AI system” basically bears full responsibility for every individual AI model used in it – regardless of whether it is a general-purpose model or a specific one, and even if he did not produce it himself.
  • Only the “GPAI model” (left) can have an additional provider of its own within the meaning of Article 3 No. 3 EU AI Act. The manufacturer of the medical AI model (second from left), on the other hand, is always considered merely a “third party” within the meaning of Article 25 (4) EU AI Act (provided the AI model does not come directly from the provider of the AI system itself).

Attention! For both AI models, the requirements for civil-law contracts under Article 25 (4) EU AI Act must be observed equally, despite the different roles. The specifications, in turn, would have to have comparable content with regard to the criteria for AI models.


In order to comply with the requirements, however, the parties must first know that these requirements exist! Will the guidelines mentioned in Article 25 (4) sentence 3 EU AI Act ever come? They are only an “optional” provision!

5.4.4 Different AI models within the meaning of the EU AI Act

If GPAI models are integrated into high-risk AI systems, additional rules apply compared to “other” AI models – also for the provider of the AI system:

  • On the one hand, the provider of the GPAI model is a “third party” within the meaning of Article 25 (4) EU AI Act (see above).
  • In this particular case, however, the provider of the AI system becomes a “downstream provider” of the GPAI model (cf. Article 3 No. 68 EU AI Act).
  • The provider of a GPAI model is subject to a variety of transparency requirements in the value chain, in particular those of Chapter V. In many respects, they also apply to the provider of the AI system.

As the provider of a high-risk AI system, one has to be aware of this: in the ongoing monitoring of the high-risk AI system within the meaning of Article 72 EU AI Act, a possibly integrated GPAI model would have to be monitored in the same way as one’s own specific AI model. According to the wording of the above-mentioned norms, the provider of the high-risk AI system would therefore be responsible for ensuring that models from Google, Microsoft or OpenAI do not contain any model errors.

5.4.5 Hybrid Forms such as Transfer Learning Models

Imagine the quite realistic case that the manufacturer of the “other” medical AI model uses the technique of transfer learning with multiple models (TLM), among them a GPAI model. If the manufacturer uses the resulting overall TLM model for only a single medical purpose (e.g. detection of skin cancer), then strictly speaking there is no GPAI, although a GPAI model is embedded (see the sketch after the following list). The manufacturer of the TLM would then be only a “third party” within the meaning of the Regulation, Article 25 (4) EU AI Act:

  • As a manufacturer of an AI model with a specific purpose, he would not have the same transparency rights vis-à-vis the provider of the GPAI model as the provider of a high-risk AI system.
  • The wording of Article 53 (1) b) i) EU AI Act explicitly refers only (!) to providers of AI systems – not to the combination of AI models with GPAI models.
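What such a TLM construction looks like technically can be outlined with a minimal sketch, assuming PyTorch and torchvision are available: a pretrained general-purpose backbone is frozen and receives a new task-specific head for a single medical purpose. The dataset and the class labels are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Minimal transfer-learning sketch: a general-purpose pretrained backbone
# (an ImageNet ResNet-18) is repurposed for one specific medical purpose
# (binary skin-cancer detection). Whether the result still counts as GPAI
# is exactly the legal question raised above; technically it is an
# ordinary fine-tuning setup.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the general-purpose layers ...
for param in backbone.parameters():
    param.requires_grad = False

# ... and attach a task-specific head for the narrow medical purpose.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # 2 classes: benign / malignant

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
# A training loop over a (hypothetical) labeled dermoscopy dataset would follow here.
```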

The roles already matter in the design and development of an AI system! Which specifications the regulation layer requires depends largely on the role of the respective components in the AI value chain. For a GPAI model that is integrated into a high-risk AI system, the 5-layer model yields different requirements than for the integration of a specific AI model.

6. Method and results of interpretation

Against this background, the key question is: What criteria for AI models can be derived from the mentions in the EU AI Act? The four recognised legal methods of interpretation are used to find the answer:

  • grammatical
  • historical
  • teleological and
  • systematic.

6.1 General information on interpretation

There is an article on Wikipedia that describes legal interpretation well:

  • The term encompasses the concretization and interpretation of an indeterminate legal concept.
  • However, it also includes application-oriented interpretation.
  • The aim is to determine the “correct” meaning of the words of the law.

The last point makes it clear: one should first find all the references to “AI model” in the EU AI Act in order to then be able to interpret the “word of the law”.


In many cases, interpretation is “hard work”! Against this background, CAIR4 was designed from the beginning as a “search engine” that allows statistical analyses of the EU AI Act by means of filters: when, where and how often is an (in)determinate legal term mentioned in the recitals, the articles or the annexes? In part, it is also a matter of finding references to other norms.

6.1.1 Mentions in the EU AI Act

So first of all, let’s get to the pure statistics! The term “AI model” is mentioned several times in each of the three broad areas of the EU AI Act:

  • The 113 articles,
  • the 13 annexes,
  • and the 180 recitals.

In addition to the term “AI model”, there are also hits in the text that speak only of the “model”. Strictly speaking, it would have to be clarified whether and to what extent the term “model” is always identical with “AI model”; this is assumed here for reasons of simplification. In total, there are well over 100 mentions of (AI) models in different contexts.
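A minimal sketch of this kind of frequency analysis in Python. The file name and the section marker used for splitting are simplifying assumptions; in practice, the official text would need to be segmented more carefully.

```python
import re

# Count how often "AI model" (or the bare "model") appears in the EU AI Act.
# "HAVE ADOPTED THIS REGULATION" separates the recitals from the enacting
# terms in EU regulations; the file name is a placeholder.
with open("eu_ai_act_en.txt", encoding="utf-8") as f:
    text = f.read()

MARKER = "HAVE ADOPTED THIS REGULATION"
sections = {
    "recitals": text.split(MARKER)[0],
    "articles_and_annexes": text.split(MARKER)[-1],
}

pattern = re.compile(r"\bAI model\b|\bmodel\b", re.IGNORECASE)
for name, body in sections.items():
    print(f"{name}: {len(pattern.findall(body))} mention(s)")
```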


In addition to the term “AI model”, there is another term directly related to it: “AI technique(s)”. This formulation is mainly found in the first version of the EU AI Act of 2021, and it is still used in some places in the final version. This is important because, for these “AI techniques”, Annex I (old version) of the first version contained comprehensive definitions that essentially described AI models and correspond in several respects to the OECD’s definition of AI models (see Part 1).

A large number of references have been listed in a table including the context, which can be accessed here as a PDF. Context here means in particular the mention with reference to GPAI, high-risk AI, but also other general mentions, e.g. in the context of codes of conduct.


6.1.2 Miscellaneous criteria

If one summarizes the most important aspects of the table shown above, a three-part overall picture emerges with regard to the references:

6.1.2.1 Relevant mentions

  • AI models have general and special features (related to impact and components)
  • AI models require consideration of the entire life cycle
  • AI models use training, validation and test datasets
  • There are models that can be pre-trained and subsequently retrained
  • Model reliability, model fairness and model safety are important
  • Elements of AI models are: parameters, weights and model architecture
  • Some AI models can replicate themselves
  • The model usage must be described
  • A distinction is made between generative and non-generative AI models
  • Generative AI models are to be regarded as GPAI models from a certain size onwards
  • AI models can have different modalities
  • The distinction from AI systems is ultimately made primarily via the user interface
  • A distinction is made between AI systems that train AI models and AI systems that do not
  • There are also open-source and closed-source models

6.1.2.2 Special AI Models

  • AI models in high-risk AI systems
  • General-purpose AI models (GPAI)
  • GPAI models with systemic risk
  • All variants exist with and without open source

6.1.2.3 Annex I (old version) on AI techniques and approaches

  • Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
  • Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
  • Statistical approaches, Bayesian estimation, search and optimization methods.
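How broad these categories are can be illustrated with a minimal sketch, assuming scikit-learn is installed: even a textbook logistic regression would fall both under supervised “machine learning” and under “statistical approaches” – one reason why demarcating AI models from plain statistics is so difficult in practice.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A textbook logistic regression on synthetic data: supervised machine
# learning and a statistical approach at the same time - and thus covered
# by the (very broad) wording of Annex I (old version).
X, y = make_classification(n_samples=200, n_features=5, random_state=42)
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```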

6.1.3 Dimensions of AI Models

According to the differentiation recommended here, there are five different dimensions of criteria for AI models within the meaning of the EU AI Act:

  1. The purpose or usage variance of an AI model
  2. The techniques and concepts of AI models
  3. The components of an AI model
  4. The way AI models evolve
  5. The interfaces of AI models

The references listed under point 6.1.2 can largely be subsumed under these five dimensions.
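As a thought experiment, these five dimensions can also be expressed as a structured record. The following sketch is purely illustrative; the field names and example values are assumptions of this article, not terms of the EU AI Act.

```python
from dataclasses import dataclass

@dataclass
class AIModelProfile:
    """Sketch: the five proposed dimensions as a structured record."""
    purpose: list        # 1. purpose or usage variance (single, multi, general)
    techniques: list     # 2. techniques and concepts (cf. Annex I old version)
    components: list     # 3. parameters, weights, model architecture, ...
    evolution: str       # 4. how the model evolves (updates, retraining, learning)
    interfaces: list     # 5. interfaces to the surrounding AI system

diagnosis_model = AIModelProfile(
    purpose=["skin cancer detection"],
    techniques=["supervised machine learning", "deep learning"],
    components=["parameters", "weights", "model architecture"],
    evolution="retrained on new clinical data",
    interfaces=["prediction API consumed by the AI system"],
)
print(diagnosis_model)
```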

6.1.4 Other characteristics

According to the view represented here, the life cycle is not a dimension in its own right, but the state or the degree of development of the individual dimensions.

According to the view represented here, open source is also not a dimension in its own right, but rather an aspect of licensing law that helps determine the relevance of certain provisions.

The situation is similar with systemic risks, which, according to the view taken here, also do not represent an overarching criterion for AI models.

  • Rather, they are part of the concept or components of an AI model in the sense of effectiveness.
  • With regard to reach or the number of users, these are factual properties that are not inherent in the model itself and therefore appear irrelevant for an overarching definition.

6.1.5 Graphical results overviews

According to the view advocated here, the dimensions outlined above are all criteria that should be taken into account in an overarching definition of AI models. They can be sketched graphically together with the conclusions and a historical consideration as follows:

The previously listed references also allow some conclusions, e.g.:

  • The explicit naming of specific “GPAI models” leads to the conclusion that all other uses represent “other” AI models (single- or multi-purpose).
  • The statement that “large” generative AI models are an example of “GPAI models” allows the conclusion that:
    • “small” generative AI models and
    • all other “non-generative” AI models are to be classified as “other” AI models.
  • Since a provider within the meaning of the EU AI Act exists only for GPAI models, the manufacturers of all “other” AI models can, conversely, at best be described as “third parties”.

A further conclusion lies in the derivation that the “AI techniques and approaches” listed in Annex I (old version) are very closely related to the AI models used in high-risk AI systems. They are therefore classified as the second of the five dimensions.

7. Recommendation of a definition for AI models

A separate post explains how the results outlined above were converted into a recommendation for a definition of AI models within the meaning of the EU AI Act:

The following short version of an overarching definition of AI models within the meaning of the EU AI Act is modelled, in language and scope, on the definition of AI systems in Article 3 No. 1 EU AI Act:

An “AI model” is a computational entity designed to process data in order to generate targeted predictions, decisions, classifications or content. It has one or more purposes, uses algorithms, data and parameters to achieve these results and can evolve in real time through updates, retraining and learning. As a building block of AI systems, AI models only interact indirectly with users.

The interaction between AI model and AI system can be outlined as follows:
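As a complement to the graphic, the indirect interaction can also be outlined in a minimal code sketch: the AI system alone faces the user, while the AI model works as an embedded building block. Class names and the toy inference logic are hypothetical.

```python
class AIModel:
    """The computational building block: processes data, no user interface."""

    def predict(self, features: list) -> str:
        # Toy inference logic standing in for learned parameters and weights.
        return "malignant" if sum(features) > 1.0 else "benign"


class AISystem:
    """The AI system wraps the model and alone faces the user (interface,
    input handling, human oversight); the model acts only indirectly."""

    def __init__(self, model: AIModel):
        self.model = model

    def handle_user_request(self, raw_input: str) -> str:
        features = [float(v) for v in raw_input.split(",")]  # user-facing input handling
        result = self.model.predict(features)                # indirect call to the model
        return f"Diagnosis suggestion: {result} (to be reviewed by a physician)"


print(AISystem(AIModel()).handle_user_request("0.4,0.9"))
```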

8. Transfer to the 5-layer model

Using recognized methods, the interpretation has brought to light five overarching dimensions for AI models within the meaning of the EU AI Act. The next task is to compare the results with the 5-layer model.

This is the core content of the third part of this article.
