The EU AI Act contains no definition explaining what an AI model in the sense of the Act actually is or which components it comprises. Despite this vagueness, the EU AI Act imposes a number of requirements on different variants of AI models, including general-purpose models and models integrated into high-risk AI systems. How can this definition gap be filled? Part 1 of 3 of an analysis with recommendations for action.
- The first of three parts of this article shows the effects of the missing definition from the perspective of AI research and AI practice.
- A 5-layer model is presented with which professional, technical, legal and ethical requirements can be structured.
- In addition, existing definitions of AI models from two international AI companies and the OECD are examined.
- Finally, the thesis is put forward that the upcoming development of the Code of Practice for GPAI models could solve the outlined problem in a timely manner.
Articles of the EU AI Act mentioned in this post (German)
- Article 3 EU AI Act
- Article 6 EU AI Act
- Article 15 EU AI Act
- Article 54 EU AI Act
- Article 56 EU AI Act
- Annex III EU AI Act
Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.
The AI model in the sense of the EU AI Act – part 1
The EU AI Act differentiates between AI systems and AI models. AI models are components (assets) of AI systems. Various nestings are possible (cf. e.g. Article 3 No. 63, 66 EU AI Act):
- Multiple AI models in one AI system,
- an AI model in multiple AI systems, and
- AI systems as a component of other AI systems.
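The nesting variants listed above can be sketched as a simple data structure. This is purely illustrative: the class names and fields are invented for this sketch and are not taken from the EU AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIModel:
    # Illustrative stand-in for an AI model (hypothetical class, not from the Act)
    name: str

@dataclass
class AISystem:
    # Illustrative stand-in for an AI system in the sense of Article 3 No. 1
    name: str
    models: list = field(default_factory=list)      # multiple AI models in one AI system
    subsystems: list = field(default_factory=list)  # AI systems as components of other AI systems

# One shared AI model embedded in two different AI systems
shared_model = AIModel("general-purpose model")
chatbot = AISystem("chatbot", models=[shared_model])
search = AISystem("semantic search", models=[shared_model, AIModel("ranking model")])
suite = AISystem("assistant suite", subsystems=[chatbot, search])
```

The sketch makes the regulatory point tangible: the same model object can sit in several systems at once, and systems can in turn contain systems.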
So far, the picture seems deceptively clear.
However, it is striking that an overarching definition has been published only for AI systems, in Article 3 No. 1 EU AI Act. A corresponding definition for AI models is missing.
First of all, six examples of the relevance of the definitional vacuum:
- The four risk classes of the EU AI Act basically apply only to AI systems, not to the AI models they contain. Nevertheless, there are “tough” requirements for the usually quite specific AI models of high-risk AI systems (cf. Article 15 (5) sentence 3 EU AI Act). For this reason alone, it should be clarified what, generally speaking, is an AI model in the sense of the EU AI Act and what is not!
- The situation is similar with (voluntary) codes of conduct for AI systems of medium and low risk. Corresponding codes should be able to explain, among other things, why the AI model used is trustworthy, regardless of whether it has a “general purpose” or not. A basic definition of AI models would be useful here as well.
- Finally, Chapter V contains comprehensive requirements for “general-purpose” AI models. Within the supply chain, it is important that all participants have a common idea of what constitutes an AI model: where it begins and where it ends. Once again, a clear definition would help!
- GPAI models with a free and open source license, as described in Article 53 (2) and Article 54 (6) EU AI Act and recital (102), also require clear criteria. All the more so because these GPAI models can be modified as open source by downstream actors: But when does a change in the AI model occur, and when is it a change in other components and assets of an AI system?
- With regard to “downstream providers”, Article 3 No. 68 EU AI Act refers to “AI models” in general terms. A downstream provider therefore arises whenever “any” AI model is integrated into “any” AI system.
- Finally, an AI system can also use several (modified) AI models, which can lead to a quite complex interaction of the individual models. Only a clear definition for AI models of all kinds makes it possible to determine the limits and transitions of the individual AI models and thus maintain a regulatory overview.
1. AI model without “general purpose” – what is it?
Let’s be honest: Which provider of any AI model will be interested in their “product” falling under the extensive requirements of Chapter V?
- But how do AI providers, developers and domain experts know whether and to what extent their “product” is to be considered an AI model in the sense of the EU AI Act at all – is this based on self-assessment or are there clear criteria?
- So what about the countless AI models that lack the “general purpose” at the beginning, but develop into one over time? Which new purposes are relevant and which are not?
- How should the “evolution” of an AI model be documented? Which part of the documentation concerns “only” the AI model? What documentation concerns other assets such as data or infrastructure? Where are the boundaries?
Only when these and similar questions have been clarified does the question even arise as to when an AI model is “with general purpose” in an individual case. Only when this basic question has been clarified will it be necessary to examine whether there are also “systemic risks” in the sense of Article 3 No. 65 EU AI Act.
A clear and overarching definition of an AI model in the sense of the EU AI Act would help with all this. But this definition is missing – at least so far.
Article 3 No. 63 EU AI Act does contain a definition of AI models with a general purpose. However, it remains unclear what an AI model actually is. The “other” AI models “without general purpose” most likely make up the vast majority of all existing AI models – but what are their characteristics, and what falls outside them? The lack of a definition would probably be less critical if specifications and recommendations did not also exist for these “other” AI models. In addition, smaller AI models can also pose risks, which requires a clear regulatory understanding of common criteria for AI models of all kinds.
2. High-speed evolution of AI models
In addition, the innovation of AI models is increasingly becoming the actual driver of the growing performance of AI systems. They are the turbo of the applications in which they are embedded.
- ChatGPT illustrates how changing the AI model can create a much more powerful AI chatbot from one moment to the next: by activating the multimodal AI model GPT-4o, which, unlike its predecessors, can generate not only text but also images and even interpret emotions.
- The technique of “model merging” described in this CAIR4 article is also interesting. It refers to AI models that can virtually create or continuously develop themselves – without it being clear exactly where the limits of the individual AI models lie.
The challenge posed by “self-consuming” generative AI models should also not be underestimated. This phenomenon complicates, among other things, the regulatory distinction between generative AI models and AI systems, because the AI model presumably acts autonomously if it uses its own (virtual) outputs as (virtual) input while the intended purpose is not recognizable.
The last example is reminiscent of the legal “self-dealing” in the sense of § 181 BGB: In this case, the same person concludes a transaction with himself within the framework of different legal roles (e.g. the managing director of a GmbH with himself as a private individual).
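The “self-consuming” loop described above can be sketched in a few lines. The generator below is a deliberately trivial stand-in for a generative model; the point is only the feedback structure, in which each output becomes the next input.

```python
def generate(prompt: str) -> str:
    # Toy stand-in for a generative AI model: appends one "token"
    return prompt + " *"

# Self-consuming loop: each output is fed straight back in as input,
# blurring the regulatory line between the model and the surrounding AI system.
text = "seed"
history = []
for _ in range(3):
    text = generate(text)
    history.append(text)
```

After three rounds, the “model” has consumed only its own outputs – no external input, and no recognizable intended purpose, remains in the loop.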
2.1 Most AI models have specific purposes
In other words: AI models are highly dynamic “products”! And the majority of them have not a “general” but a “specific” purpose. This applies, among other things, to medicine, the financial sector and critical infrastructures. Sometimes AI models only grow over time to a level of development that allows them to be used for “multiple purposes”.
However, this article is not so much about the question of when any “other” AI model could become an AI model with “general purpose” in the sense of Article 3 No. 63 EU AI Act. It is about the much more fundamental question of what is to be considered an AI model in the sense of the EU AI Act and what is not!
The challenge can be illustrated with a “model tree” bearing various “model apples”. The apples stand for the different mentions of the term AI model in the AI Regulation. There are quite a few of them, as Part 2 of this article will show.
The EU AI Act defines and describes various “apple varieties” in many places, including their “size”, “color” and “taste”. However, the AI Regulation leaves open which criteria must be met for any given “apple” to count as an “apple in the sense of the EU AI Act”. The missing definition makes it difficult to distinguish it from a symbolic “orange” or “pear”.
So what is an AI model in the sense of the EU AI Act?
2.2 Differentiation between AI model and data
The need for clear criteria is all the more pressing because, among other things, Article 3 No. 63 EU AI Act and Article 15 (5) sentence 3 EU AI Act also bring data into play in the context of AI models: When and how are data to be separated from the AI model? When are they considered part of an AI model in the sense of the EU AI Act?
- In order to estimate regulatory effort, investors and developers already depend on clear basic criteria for AI models of all kinds in the design and development phase. Once in place, an overall architecture is difficult to change or adapt to regulatory requirements retrospectively.
- This is all the more true because the regulatory requirements are also developing dynamically – requirements that affect not only the AI model, but also data protection and other areas of law such as cyber security, in some cases with a smooth transition.
- In the case of medicine or finance, the whole thing is flanked by additional regulatory requirements, due to which it may be necessary to clarify whether a potential error lies in the AI model, the training data and/or the input data. A definition or structural model for AI models of all kinds should take this into account.
3. Relevance of clear definitions
The importance of the most precise regulatory definitions possible from the point of view of AI research and AI practice is illustrated by a recent document:
- The scientific publication: Dubey, Akshat, Zewen Yang, and Georges Hattab: “A Nested Model for AI Design and Validation.” iScience (2024), https://doi.org/10.1016/j.isci.2024.110603
- The authors, Akshat Dubey, Dr. Zewen Yang and Dr. Georges Hattab, work at the Center of Artificial Intelligence in Public Health Research at the Robert Koch Institute (ZKI-PH).
The “5-layer model” for the design and validation of AI presented in it differentiates the aspects:
- “Regulation”,
- “Domain”,
- “Data”,
- “Model” and
- “Prediction”.
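As a rough illustration of how the five layers can structure requirements, consider the following sketch. It is not taken from the paper itself; the layer names follow the list above, while the example requirement labels are invented for this illustration.

```python
# The five layers of the nested model, from outermost to innermost
LAYERS = ["Regulation", "Domain", "Data", "Model", "Prediction"]

def group_by_layer(requirements):
    """Sort free-form requirements into the five layers.

    `requirements` maps a requirement label to the layer it concerns;
    the labels used below are invented examples, not quotes from the paper.
    """
    grouped = {layer: [] for layer in LAYERS}
    for label, layer in requirements.items():
        grouped[layer].append(label)
    return grouped

example = group_by_layer({
    "robustness of the AI model (Art. 15(5))": "Model",
    "training data governance": "Data",
    "clinical domain validation": "Domain",
})
```

Even this toy grouping shows the practical appeal: each legal, professional or technical requirement gets exactly one home layer, which makes gaps and overlaps visible.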
3.1 Terminological clarity
From the authors’ point of view, terminological clarity is indispensable in order to be able to implement trustworthy and even more explainable AI in practice. The delimitation of AI models and various forms of data plays a very important role in this. The same applies to the question of which elements within the AI model decide on the so-called “prediction” or make it possible to make decisions of the AI model transparent and comprehensible.
The 5-layer model succeeds in several respects:
- Its logic can be grasped quickly and easily. The clear graphics and terminology help to sort all kinds of relevant information.
- This helps developers as well as clients and supervisory authorities to record specifications, find and eliminate sources of error in AI, and thus make it more trustworthy and as legally secure as possible.
- How exactly to work with the 5-layer model is demonstrated in the paper on the basis of various use cases. Filling the layers with different content conveys even complex challenges in a way that is easy to understand.
3.2 A match with the AI model in the sense of the EU AI Act?
Despite this, or perhaps precisely because of this, the question arises for the 5-layer model, too, to what extent it harmonizes systematically and terminologically with the existing (and missing) definitions of the EU AI Act, and whether and how the two approaches can complement each other. The fact that the 5-layer model comes from the field of medical AI plays a role here: such AI is in many cases to be classified as medical software of class IIa under the MDR (Rule 11) and thus as high-risk AI in the sense of Article 6 (2) in conjunction with Annex I/III EU AI Act – it must therefore comply with the AI model requirements of Article 15 (5) EU AI Act!
This leads to exciting questions: Can the 5-layer model also be applied to an AI model with general purpose (with and without) systemic risks? What about high-risk medical AI? What about other AI models?
Conversely, what does the EU AI Act itself offer to answer these practical challenges? One thing the AI Regulation does not offer: an overarching, clear definition of AI models! What is meant is a definition that, in terms of the 5-layer model, would most likely be assigned to the fourth layer (marked in green) and its sub-layers, or would concretize that layer and distinguish it from the other layers as clearly as possible.
3.3 Cooperation between lawyers and AI experts
Who is now obliged to solve corresponding tasks? Is it the lawyers who all too often leave important operational details unanswered? Is it the AI experts who like to focus on a technical language that no one else understands?
Either way, the paper of the Robert Koch Institute is to be understood as an offer for constructive dialogue between both sides! At the beginning of the paper, the authors rightly point out that AI regulation and AI research still hardly harmonize with regard to definitions and terminologies, among other things – and that this must change (on both sides).
Quote:
- … Regulatory concepts are necessary for AI researchers because these concepts allow risk and safety concerns to be considered and understood in the regulations proposed by the United States and Europe.
- … the fields of regulatory science and AI diverge and there is no major overlap in sight.
- … When it comes to regulating AI, many regulators step in and make compliance with laws mandatory, requiring explanations or interpretations for users when faced with algorithmic outcomes.
In other words, more cooperation and common understanding are needed – especially when regulatory issues become so specific that technicians need to know exactly what is required of them legally, ethically and/or technically in which areas of their work! Undefined legal terms and a lack of definitions for core components of an AI make this much more difficult!
3.3.1 Iterative concretization
And so we come full circle on the topic of AI models:
- AI models are not “any” part of AI systems! They are their heart – and one that must be clearly distinguishable from other “internal organs” of an AI with regard to the evaluation and treatment of “cardiac arrhythmias”.
- This clarity must be made possible as quickly as possible so that the EU AI Act does not become an investment trap and a brake on innovation due to its vagueness in this regard!
- In view of the lack of a definition for an AI model in the sense of the EU AI Act, an iterative, constructive, interdisciplinary cooperation seems not only possible, but also urgently necessary: For lawyers as well as for data, AI and domain experts as well as many other stakeholders.
Against this background, an attempt will be made in the following to build a bridge
- between the existing mentions and requirements for AI models of the EU AI Act
- and the 5-layer model of application-oriented AI research.
To anticipate the result: according to the view taken here, the 5-layer model can provide valuable help in bridging the gap left by the missing definition of an AI model in the sense of the EU AI Act in the most practical way possible. How exactly will be explained below; reading the original paper of the Robert Koch Institute is recommended. The 5-layer model even works for the premier class of “explainable” AI (XAI). Following the legal principle “a maiore ad minus”, it therefore seems well suited for simpler AI models as well, and thus an ideal basis for filling an important part of the definition vacuum, at least to some extent!
3.3.2 Methodological approach
In order to approach the content of a general definition of AI models, several paths are taken below that, in the legal sense (at least in German construction and planning law), are referred to as the “countercurrent principle” of mutual influence.
Accordingly, from the 5-layer model in the direction of the EU AI Act and vice versa from the EU AI Act in the direction of the 5-layer model, common but also different aspects must be sought in order to subsequently outline an integrated understanding that connects both worlds in the best possible way.
The 5-layer model appears to be a suitable instrument to answer the question “What is an AI model in the sense of the EU AI Acts”. For this purpose, however, criteria for terminological matching must be determined by way of the “counter-current principle”.
The matching process consists of several steps:
1. First of all, existing definitions for AI models are checked (hereinafter Part 1).
2. Then the references in the EU AI Act are listed (part 2 of the article).
3. These are compared with the 5-layer model and the other definitions (part 2 of the article).
4. Finally, the interaction between the AI model and the AI system is examined (part 3 of the article).
3.4 Existing definitions
In the following, several internationally common definitions and explanations of AI models are presented and examined with regard to their fit with the EU AI Act:
- two definitions from international AI businesses (IBM and HP),
- an explanation of AI models on YouTube,
- and the existing OECD definition.
3.4.1 Definitions from the AI business
What seems relevant first of all is what experienced global players such as IBM and HP have to say about the “AI model” – some of their definitions existed long before the EU AI Act. The following overview of the evolution of AI shows quite well that services such as IBM Watson have contributed significantly to shaping the nature and understanding of today’s AI models.
IBM writes on the subject of AI Model (English version, as of August 2024):
- An AI model is a program that has been trained on a set of data to recognize certain patterns or make certain decisions without further human intervention.
- Simply put, an AI model is defined by its ability to autonomously make decisions or predictions, rather than simulate human intelligence.
- In simple terms, an AI model is used to make predictions or decisions and an algorithm is the logic by which that AI model operates.
HP writes about the AI Model (German version, as of August 2024):
- AI models or artificial intelligence models are programs that recognize certain patterns based on a collection of data sets.
- It is the representation of a system that can receive data input and draw conclusions or act based on these conclusions.
Both definitions are absolutely legitimate! From a regulatory point of view, however, they have a catch that should not be underestimated: by using terms such as “program”, “autonomous decision” or “acting”, they come very close to the definition of AI systems in the sense of Article 3 No. 1 EU AI Act. They are therefore only of limited help, because what is sought is a definition that not only explains what an AI model in the sense of the EU AI Act is, but is also clearly differentiated from AI systems in the sense of the EU AI Act.
3.4.2 YouTube Explanation of AI Models
It is notable that there are a number of quite striking explanatory videos on AI, especially on YouTube, which use graphics and animations in addition to words to illustrate the dynamics of AI models across their life cycle.
One of several examples is a video from the channel “The CISO Perspective” – i.e. the perspective of IT security officers. This is a topic that also plays an important role in the context of the EU AI Act and the Cyber Resilience Act (CRA). Both acts are interwoven in the context of high-risk AI systems via Article 8 CRA and Article 15 EU AI Act – the latter provision, in turn, forms the bridge to AI models with paragraph 5 sentence 3.
In the following video, an AI model is described in general terms with the following – comparatively simple – equation:
AI Model = Training (Algorithms + Data)
The advantage of this definition is that it is “short and sweet”! And its disadvantage is that it is “short and sweet” – simply too short to capture the diversity and challenges of AI models in a regulatory sense.
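Read literally, the video’s equation can even be sketched in a few lines of code. This is a deliberately naive illustration of “training turns algorithm plus data into a model”, not a regulatory definition; the toy “algorithm” is invented for this sketch.

```python
def train(algorithm, data):
    # "AI Model = Training (Algorithms + Data)": fitting the algorithm's
    # parameters to the data yields the model artifact
    return {"parameters": algorithm(data)}

# Toy "algorithm": estimate the mean of the data as the model's only parameter
mean_estimator = lambda data: sum(data) / len(data)
model = train(mean_estimator, [1.0, 2.0, 3.0])
```

Even this trivial example shows what the equation hides: the model artifact, the training data, the algorithm and the resulting parameters are distinct things – precisely the distinctions a regulatory definition would have to draw.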
Nevertheless, striking videos, including graphic elements, can and should serve as a stylistic device to enable more transparency in an operational sense. In other words, (legal) texts alone rarely help to solve (regulatory) challenges that are as abstract as they are complex. However, many lawyers find it difficult to visualise regulatory requirements. This also applies to many EU guidelines, which in case of doubt only generate more text – even if, as with many guidelines, that text is designed to be more appealing than mere legal prose.
An example that already goes in a somewhat more visual direction is the set of EU guidelines on the Medical Device Regulation (MDR). For AI guidelines, however, significantly more graphical overviews and, where appropriate, animations should be integrated in order to “symbolize”, in the truest sense of the word, the complex interplay of individual elements such as “AI system” and “AI model”.
The EU can and should take bold steps in this direction with regard to the concretization of the EU AI Act in guidelines! The 5-layer model examined in more detail in the second part shows what corresponding approaches could look like. Even if these are static within the actual text of the guidelines, they can still be vividly expanded through animations (such as those in the video presented here).
3.4.3 OECD approach
The last point leads almost inevitably to the OECD: in March 2024 (i.e. almost simultaneously with the final version of the AI Regulation), it published a policy paper on this very topic. The paper clearly distinguishes between AI systems and AI models. At least as important: there are not only definitions for both terms, but also overarching diagrams (albeit on a rather small scale).
Importantly, the OECD definition of AI systems has significantly influenced that of the AI Regulation!
For comparison: The two definitions for AI systems (as of March 2024)
OECD:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
EU AI Act:
‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
3.4.3.1 Interplay of AI system and AI model
As already mentioned, the OECD does not only have a definition for AI systems. The March 2024 document also includes a definition for AI models. The OECD deliberately points out that there are a large number of inconsistent ideas in this regard. Perhaps this is precisely why the OECD has revised its previously existing definition of AI models (see e.g. the OECD document of 2022).
Here is the final result:
“An AI model is a physical, mathematical or otherwise logical representation of a system, entity, phenomenon, process or data in accordance with the ISO/IEC 22989 standard.
AI models include, but are not limited to:
- statistical models and different types of input-output functions (such as decision trees and neural networks),
- an AI model can represent the transition dynamics of the environment and allows an AI system to select actions by examining their possible consequences against the model,
- AI models can be created manually by human programmers or can be created automatically, for example through unsupervised, supervised or reinforcement machine learning techniques.”
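The OECD’s point about “selecting actions by examining their possible consequences against the model” can be sketched as a minimal planning loop. The sketch is illustrative only and not taken from the OECD text: the transition function plays the role of the AI model, the selection function the role of the AI system using it.

```python
def transition_model(state: int, action: int) -> int:
    # The AI model: predicts the next state resulting from a given action
    return state + action

def select_action(state, actions, goal):
    # The AI system: examines each action's predicted consequence against
    # the model and picks the action whose outcome is closest to the goal
    return min(actions, key=lambda a: abs(transition_model(state, a) - goal))

chosen = select_action(state=0, actions=[-1, 1, 2], goal=3)
```

The division of labour is exactly the one the OECD describes: the model only represents dynamics; the surrounding system queries it in order to act.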
The interplay of the two OECD definitions has therefore not existed only since 2024! As early as 2022, the terms “AI system”, “people and environment”, “economic context”, “data and input”, “AI model” and “tasks and output” were explicitly defined – and subsequently modified somewhat. As of 2024, the OECD depicts the interaction between the AI system and the AI model in an updated diagram.
The EU could simply have adopted the OECD’s “combined model” outlined above – but it did not! At some point, therefore, the EU must have made the decision:
- to integrate an OECD-based definition of AI systems into the EU AI Act,
- and, in addition, to omit the OECD’s definition of AI models.
3.4.3.2 Not taken into account in EU AI Act
To emphasize it again in other words:
- There has been and still is a definition of both AI systems and AI models on the part of the OECD – and has been since 2022.
- On the one hand, the OECD’s definition of AI systems in the EU AI Act has been largely adopted.
- On the other hand, the definition of AI models was not even hinted at: As of today, there is no overarching definition for AI models in the EU AI Act!
Because of their orientation towards the OECD, the drafters of the AI Regulation must have come across the question of whether the EU AI Act should, in addition to the definition of AI systems, also include a definition of AI models – e.g. as an additional No. 69 of Article 3 EU AI Act (the article that contains all other “important” definitions). That did not happen! However, the omission of a definition for AI models is neither mentioned nor justified anywhere.
3.4.4 Opportunity to concretize AI model in the sense of the EU AI Act
With regard to the question of why this did not happen, one can only speculate. The explanatory memorandum of the EU AI Act provides no indication, and little has appeared in the media so far. This gives rise to a quite interesting question:
Could it be an advantage that the definition of AI models is missing in the EU AI Act?
There are various reasons that suggest that it is not yet a disadvantage (at least for the moment):
- First of all, one can argue at length about whether and to what extent the OECD’s definition of AI models is better or worse than the business definitions mentioned above. The latest version from 2024 is no less cryptic than the already difficult definition of AI systems. Would its use in the EU AI Act have led to more clarity? One may doubt it.
- Consequently, it cannot be ruled out that the EU deliberately adopted only the OECD’s definition of AI systems – and either forgot the definition for the rapidly developing AI models or quietly let it fall “under the table”.
In view of the short time before the adoption of the EU AI Act and the many controversial details surrounding AI models with general purpose, it is conceivable that the waiver was seen as an opportunity with regard to flexible and timely interpretation – in which AI models are primarily specified in guidelines. Not in the actual text of the law.
However, if the EU had pursued such a goal, it could have been highlighted in the recitals or even set out as a task for the future in an article (e.g. in Article 56 EU AI Act). This is not the case. Nevertheless, there is still an opportunity to take up the topic in the development of guidelines – or, even better, in the creation of the Code of Practice!
3.5 The Opportunity: The Development of the Code of Practice for GPAI Models
Perhaps the most important opportunity to concretize AI models across the board is the obligation of the European AI Office to develop the Code of Practice for GPAI models prescribed in Article 56 (9) EU AI Act by 2 May 2025 at the latest.
Details of this participatory process can be found here
The Code of Practice cannot and should not aim only to provide guidelines for GPAI models. It should also establish basic criteria for AI models of all kinds in a timely manner! Only then does the necessary clarity arise as to when and to what extent the terrain of GPAI models is entered in an individual case.
In addition, the criteria for open-source models, their modification and the interaction of several AI models could be specified. It would also be relevant to clarify how to proceed if one or more (GPAI) models are used in a high-risk AI system: then not only the specifications for GPAI models must be observed, but also the requirements of Article 15 (5) EU AI Act.
Valid criteria for AI models are advantageous for many reasons – for example, if a “single-purpose” AI model evolves into a GPAI model over time, or if exactly that is to be prevented.
3.6 Interim conclusion and outlook for part 2
First, it was shown that there is a need for clear definitions and the approximation of regulation and operational AI practice, especially in the field of AI research. This also applies to AI models. The 5-layer model is also an interesting instrument for building bridges between AI research and existing requirements of the EU AI Act.
Before this bridge is outlined, it was examined to what extent the existing definitions of AI models from the AI business and the OECD could close the gap. However, neither the definitional approaches of industry nor the OECD definition seem to help with regard to specific criteria for any AI model in the sense of the EU AI Act.
According to the view taken here, the lack of an overarching definition of AI models in the EU AI Act is both an opportunity and a risk:
3.6.1 The Opportunities: Guidelines!
- Guidelines are much better suited to refining a highly dynamic field such as that of AI models than comparatively static legal norms.
- In addition, the development of the code of practice of GPAI models is currently imminent – on the one hand, there would be enough time. On the other hand, the waiting time until May 2025 is not too long either.
- The process also provides for the participation of “other stakeholders”: according to the view represented here, these are in particular the providers of all kinds of “other” AI models, who are likely to have a great interest in not inadvertently falling under the requirements of Chapter V.
- Systematically, it also seems very possible to include the clarification of the basic question “What is an AI model in the sense of the EU AI Act” in each of the four working groups.
3.6.2 The risk: Nothing happens!
- If the Code of Practice for GPAI models is not embedded in an overarching specification for AI models of all kinds, the problem will continue to exist as before for a longer period of time.
- Then the EU AI Act would still be missing an important building block to promote the urgently needed legal certainty or to support AI research and AI practice in the best possible way so that the lack of a definition does not become a brake on innovation.
3.6.3 Outlook for the second part
Since seizing the opportunities seems not only possible but also sensible, the following assumes the scenario that the interdisciplinary development of the Code of Practice will proactively address the topic of “criteria for AI models”.
This is to be worked on in the following:
- One focus of the second part will be the listing of references that mention AI models of any kind in the text of the EU AI Act. Criteria are then extracted from the mentions, which presumably all types of AI models in the sense of the EU AI Act must meet.
- Based on this, the 5-layer model is analysed to see how it could meaningfully integrate or further develop the previously evaluated criteria of the EU AI Act – among other things, with regard to the interaction of the AI model and data or the addition or extension of the 5-layer model to include criteria such as the “general purpose” or “systemic risks” of an AI model with a general purpose.
- Finally, the third part examines the question of how the 5-layer model can be used in such a way that it can capture, map and operationally control regulatory requirements for AI systems of different risk classes as well as different AI models in the sense of the EU AI Act in the best possible way.