5 Plus 1 Recommendations on the Code of Practice for GPAI Models

Reading time: 12 minutes

By May 2, 2025, the European AI Office must specify a Code of Practice for GPAI models. Below are five plus one recommendations on what should not be missing from the guideline. The recommendations concern the important role of a glossary. In addition, there is a methodological recommendation for operational implementation.

EU AI Act - CAIR4.eu
  • The Code of Practice for GPAI models has high indirect legal relevance, under civil law as well as under administrative law.
  • The GPAI guide should therefore specifically address several previously undefined aspects of GPAI models – e.g. in a glossary.
  • Four content recommendations for the glossary are presented in the form of a matrix, covering missing definitions and indefinite terms.
  • Finally, reference is made to promoting operational feasibility by means of innovative tools such as the 5-layer model.

Articles of the EU AI Act mentioned in this post (German):

  • Article 3 EU AI Act
  • Article 25 EU AI Act
  • Article 53 EU AI Act
  • Article 55 EU AI Act
  • Article 56 EU AI Act
  • Annex XI

Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.

Code of Practice for GPAI Models until May 2, 2025

One of the shortest deadlines of the EU AI Act concerns the European AI Office in Brussels. It must create a Code of Practice for GPAI models in an iterative process together with key stakeholders. This is regulated in Article 56 (9) EU AI Act. The EU has already called for participation, see e.g. here and here.

The following schedule shows what is planned by when – an ambitious challenge!

The following overview outlines 5 plus 1 recommendations for the planned GPAI Code of Practice. They are substantiated in this article.

But first a few words on the legal classification of codes of practice in general – and that of the Code of Practice for GPAI models in particular.

1. Legal significance of the GPAI Code of Practice

Unlike laws or directives, a Code of Practice is usually not legally binding. It is intended as a guide for the implementation of legal requirements and can be used in the interpretation of laws and regulations, especially when it is unclear how a provision is to be applied.

1.1 Significance under civil law

First of all, the civil law significance should be mentioned. The GPAI Code of Practice can play an important role in the interaction between GPAI providers and downstream providers, among other things:

  • For the contractual definition of technical AI standards:
    • The practical guide can help ensure that all parties are on the same page and work according to the same criteria.
    • It can also help to avoid misunderstandings and provide clarity about which technical and ethical standards must be met.
  • As a guideline for general terms of business within the AI value chain:
    • A unified guide can ensure that all actors along the chain work according to similar principles.
    • This can increase the efficiency, safety, and reliability of the entire supply chain.
  • For the assessment of civil liability risks by providers of AI systems that integrate GPAI models:
    • The guideline can serve as a kind of safeguard, among other things by proving that one has proceeded according to recognized standards.
    • It can also help clarify responsibilities in a complex AI ecosystem involving multiple actors (e.g., developers, vendors, integrators).

The relevance of this aspect should not be underestimated, especially for SMEs: small companies or start-ups that integrate a GPAI model into their AI system usually have neither their own legal resources nor the means to purchase expensive external legal advice. In this sense, the guideline can also serve as the basis of GPAI model contracts. Article 25 (4) sentence 3 of the EU AI Act points in a similar direction for high-risk AI systems, and the GPAI guide is likely to play a role in this context as well.

1.2 Judicial relevance

In view of the much-criticised vagueness of the EU AI Act, the GPAI guide is also potentially of great importance in this respect. In the event of legal disputes, it can also be used as an aid to interpretation, among other things:

  • Courts could take it into account to decide how certain legal provisions should be understood or applied.
  • In this respect, a court could also use it to determine whether a market participant has acted in accordance with the requirements formulated by the supervisory authority.

1.3 Supervisory aspects

The Code of Practice for GPAI models can also gain importance in supervisory contexts. National supervisory authorities could hold companies that do not comply with it accountable – especially if non-compliance with the GPAI guide suggests a violation of the EU AI Act.


In summary, although the Code of Practice for GPAI models is not legally binding in itself, it can be taken into account when assessing compliance with the legal requirements. It is highly likely to play an important supporting role in the application and interpretation of the EU AI Act, including in the harmonisation of supervisory practices in the individual Member States. This is all the more true because the text of the law already provides concrete indications of the technical content to be clarified.

2. Contents of the GPAI Code of Practice

This brings us to the technical content that must or should be specified in the Code of Practice for GPAI models:

  • The law requires the concretization of the obligations under Articles 53 and 55 EU AI Act. In addition, specific topics relating to Annex XI must be specified.
  • As can be seen from the above-mentioned overview, systemic risks, risk assessment, and risk mitigation measures will also be covered in the GPAI guide.
  • Supplementary content is not mandatory but would be useful. It could help to close existing and already identified regulatory gaps or uncertainties in the GPAI context in a comparatively unbureaucratic manner.

2.1 Mandatory content according to Article 56 EU AI Act

According to Article 56 (1) EU AI Act, the following must be specified in this process (wording of the article):

The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in Articles 53 and 55, including the following aspects:

(a) means to ensure that the information referred to in points (a) and (b) of Article 53(1) is kept up to date in the light of market and technological developments;

(b) the appropriate level of detail in summarising the content used for training;

(c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;

(d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity and probability and take into account the specific challenges of tackling those risks in light of the possible ways in which such risks emerge and materialise along the AI value chain.

This content, which also concerns Annex XI, is without question the heart of the guide. It will only be discussed in passing below. This article refers primarily to the supplementary content that a GPAI Code of Practice should contain.


The development of the mandatory content is primarily the task of the various stakeholders in the AI value chain. It requires a high level of AI expertise, including on operational GPAI topics and technical details. From a legal point of view, however, there are some elements that a Code of Practice should include in any case: they concern the bridging of legal and technical topics. Which elements these could be, according to the opinion represented here, is the subject of this article.

2.2 Supplementary content to promote legal clarity

The focus of this article is on the supplementary content. It provides a kind of formal framework for the AI-technical content specified by the EU AI Act and listed above.

The following recommendations are therefore in many respects of a formal nature. This applies in particular with regard to the addition of the basic definition for GPAI models according to Article 3 No. 63 EU AI Act. It is more or less at the center of the five plus 1 recommendations.


Any ambiguity that already exists in the definition of GPAI models almost automatically propagates into the technical details of Articles 53 to 55 and Annex XI EU AI Act, which build on that definition. For pragmatic reasons, the necessary attention should therefore be paid to this easily overlooked aspect: clarifying the open basic questions around GPAI models!

3. Five recommendations for supplemental content

In the following, recommendations are given that concern the supplementary contents of the Code of Practice. This should integrate the following five elements that build on each other:

  1. A “GPAI glossary” that specifies important terms. It contains, among other things, the answers to four questions in the context of Article 3 No. 63 EU AI Act:
  2. What does the EU AI Act mean by an “AI model” in general?
  3. What constitutes “significant generality” in the GPAI context?
  4. What is a “wide range of distinct tasks” in the same context?
  5. When are these performed “competently”, and when not?

Ambiguities in points 2-5 automatically mean that the specific contents of Articles 53 ff. EU AI Act and Annex XI rest on shaky ground: for every specific norm, the basic legal question is always whether the scope of a Chapter V norm is triggered at all. First and foremost, one ends up with the definition in Article 3 No. 63 EU AI Act – and its associated terms, which are often hardly defined. From a legal point of view, their specification is of great practical importance for all the variants mentioned above under point 1.

3.1 The GPAI Glossary

The first recommendation concerns the simple inclusion of a GPAI glossary in the Code of Practice for GPAI models. This glossary would clarify a number of GPAI-related terms, especially those that have so far been left comparatively undefined in the EU AI Act.

The fact that the GPAI Code of Practice is a guideline allows the glossary to be adapted comparatively flexibly to new developments in the field of AI models – a clear advantage over the alternative: amending the text of the law itself.

In the EU, there are many examples of such glossaries in existing practice guidelines.

It can be assumed that the Code of Practice for GPAI models will include a glossary even without this recommendation. In that respect, consider this point addressed! The following is mainly about the question of which terms and topics should not be missing from the glossary.

3.2 Overarching definition of AI models

The second recommendation concerns the content of the glossary: the EU AI Act lacks an overarching definition of AI models. Several articles have been dedicated to this topic on CAIR4. According to the opinion represented here, the lack of a definition has high practical relevance – not least for the entire value chain with and without GPAI models!

The question is how this gap can be filled – according to the view taken here, in the previously recommended glossary of the GPAI Code of Practice. Suggestions on how the missing definition could be shaped in terms of content can be found in this article:

The technical hook for integrating a general definition of AI models into the GPAI guide is the explicit wording of Article 3 No. 63 EU AI Act:

  • There, clear general criteria for determining AI models are presupposed.
  • Unfortunately, these criteria are not (yet) to be found anywhere in the final text of the law.
  • The annex defining AI techniques, which was contained in the previous 2021 draft, has been deleted without replacement.

The recommended article provides suggestions on how an overarching definition of “AI models” could be included in the glossary. A short definition would be conceivable, for example. There is also a long version for defining AI models, created using GPAI tools. In the end, however, it does not matter what kind of definition is included in the glossary. The main thing is that this is done in some form!

3.3 Specification of “significant generality”

The definition of GPAI models in Article 3 No. 63 EU AI Act, which is in many respects vague, is also the hook for the third recommendation. This applies, among other things, to the formulation “significant generality”.


On the one hand, the criterion of “significant generality” makes it possible to capture GPAI models that can be used in a variety of contexts and for different purposes. This broad usability distinguishes GPAI models from AI systems that have been specially developed for a narrowly defined application area. On the other hand, there must also be rules that show where the boundary between a generally usable GPAI model and a specialized AI model lies. The principle of rule-based classification in the Medical Device Regulation (MDR) could be a model in this regard. Beyond the MDR model, however, it should then be emphasized more strongly that every rule also has exceptions!

3.3.1 Delimitation issues

According to the view represented here, the previous formulation complicates the distinction between GPAI models and universally applicable domain-related foundation models. This CAIR4 article explains why this is the case:

3.3.2 Recommendation

Below are some aspects that could be included in the Code of Practice – whether in the context of a glossary or in the context of a rule model:

  • It should be more clearly defined which specific characteristics a GPAI model must have in order to be considered “generally usable”.
  • A risk assessment for different (general) deployment contexts could also be included, e.g. for important domains with many sectors such as medicine, finance, or law. The formulation of “significance” in particular could be concretized in this way.
  • It could be determined to what extent modifications or fine-tuning are required before a model is considered “specific” and no longer “generally usable” (and vice versa).
  • Finally, there should be a clearer definition of who is responsible when a GPAI model is used in a new, unforeseen context:
    • In this respect, the formulation of “generality”, which can go beyond the intended purpose, is critical.
    • This could include requiring providers to supply clear usage guidelines and risk warnings, or requiring operators or providers of AI systems that use the GPAI model in new contexts to comply with certain due diligence obligations.
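The kind of MDR-style, rule-based delimitation suggested above can be sketched in code. All criteria, thresholds, and names below are purely illustrative assumptions – neither the EU AI Act nor the future Code of Practice defines them:

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Hypothetical attributes for a rule-based GPAI check (illustrative only)."""
    distinct_task_count: int      # tasks the model performs competently
    domain_count: int             # domains/sectors it is used in
    fine_tuned_for_domain: bool   # heavily adapted to one narrow domain?

def classify(profile: ModelProfile) -> str:
    """Toy rule cascade for 'GPAI model' vs. 'specialized AI model'."""
    # Rule 1: narrow fine-tuning to a single domain suggests "specific"
    if profile.fine_tuned_for_domain and profile.domain_count <= 1:
        return "specialized AI model"
    # Rule 2: breadth across tasks and domains suggests "significant generality"
    # (the thresholds 10 and 3 are invented for illustration)
    if profile.distinct_task_count >= 10 and profile.domain_count >= 3:
        return "GPAI model"
    # Rule 3: everything else needs case-by-case assessment -
    # every rule has exceptions
    return "case-by-case assessment"

print(classify(ModelProfile(50, 8, False)))  # → GPAI model
print(classify(ModelProfile(3, 1, True)))    # → specialized AI model
```

The point of the sketch is not the (made-up) thresholds, but the structure: explicit rules make the classification reproducible and contestable, while the fallback rule keeps room for the exceptions mentioned above.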

3.4 Specification of the “wide range of distinct tasks”

It is no different with regard to the elusive formulation of a “wide range of distinct tasks”. This requirement is a central characteristic for distinguishing GPAI models from more specialized AI systems. But once again, there are problems of differentiation from domain-related foundation models:

  • Do these tasks have to be able to be fulfilled across different industries?
  • Is it sufficient that they can be fulfilled across several sectors of a specialist domain?

As much as this formulation is necessary to distinguish it from highly specific AI models, it also needs further specification.

In the case of “tasks”, it could be explained, among other things, how this term relates to the term “purpose” otherwise used in the EU AI Act. Here, too, it is ultimately irrelevant whether the specification takes place within the framework of a glossary or, better, within the framework of a rule-based classification model.


To come back to domain-related foundation models: the “wide range of distinct tasks” of medical image recognition, for example, could consist of supporting countless specific medical AI systems – for the detection of individual types of cancer, for the detection of bone fractures, or as a basis for the individualization of therapies of all kinds. The same applies to use in clinical trials or in everyday medical practice. All of this can also be understood as a “wide range of distinct tasks”. According to the view taken here, this is to be assumed, inter alia, in light of the preceding point 3.3.

3.5 Specification of “competent” performance of tasks

With regard to the distinction from specialized AI models and domain-related foundation models, the concretization of the term “competent” is probably the most important.

3.5.1 The Competence Dilemma

  • On the one hand, specialized AI models within a certain domain are characterized by a particularly high level of competence. This is exactly what protects them from being classified as GPAI models in case of doubt. The focus here is on the specific purpose.
  • On the other hand, universal AI models are apparently not to be classified as GPAI if they perform a variety of tasks “incompetently”.
  • The fact that presumably every GPAI model performs some tasks “competently” and others “incompetently” raises many questions in this regard – again in distinction to domain-related foundation models.

3.5.2 Criteria and validation

ChatGPT is sometimes criticized for being mathematically “incompetent”:

“ChatGPT solves math problems to a certain extent, but only on the basis of huge amounts of data that are statistically recombined. A gifted student, by contrast, solves problems logically without having devoured all kinds of textbooks beforehand. ChatGPT only knows numbers if they can be extracted from trained texts. For example, the definition of a prime number could be reproduced if this text appears somewhere in ChatGPT’s memory. But ChatGPT can only draw conclusions from this and decide whether a given number is a prime number or not if appropriate prior knowledge has been trained. Arithmetic, logical, and causal thinking are alien to it.”

Conversely, Peter Thiel claims that mathematicians in particular are likely to be threatened by AI – and this refers not only to specific but also to general AI models. These can be combined with dedicated mathematical modules, among other things by extending the model architecture using hybrid approaches. The combination of machine learning and deterministic algorithms would also be an approach.
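The hybrid approach mentioned above – combining a statistical model with deterministic algorithms – can be sketched minimally. The `llm_answer` stub below is a hypothetical placeholder, not a real API; only the primality check is genuinely deterministic:

```python
import re

def is_prime(n: int) -> bool:
    """Deterministic primality check - no training data required."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def llm_answer(prompt: str) -> str:
    """Placeholder for a statistical language model (an assumption for this sketch)."""
    return f"[model-generated answer to: {prompt}]"

def answer(prompt: str) -> str:
    """Route well-defined math questions to the deterministic module,
    everything else to the (stubbed) general-purpose model."""
    m = re.match(r"is (\d+) prime\??", prompt.lower())
    if m:
        n = int(m.group(1))
        return f"{n} is {'a prime' if is_prime(n) else 'not a prime'} number."
    return llm_answer(prompt)

print(answer("Is 97 prime?"))            # deterministic branch
print(answer("Define a prime number."))  # falls back to the model
```

The design choice illustrated here is routing: the system is only as “competent” in arithmetic as its deterministic branch, which is precisely why the validation of “competence” per task type matters.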


The topic will not be pursued further at this point. However, if such a vague term is used in the EU AI Act, it should be “somehow” concretized by means of guidelines. The example of mathematics as a “hard science” also illustrates how difficult it will be to define or validate “competence” in the near future. This is likely to be even more difficult for comparatively “fuzzy” topics.

4. Overview of the five recommendations

Here are the five recommendations for the Code of Practice for GPAI models in a tabular overview:

5. Consider practice-oriented tools

Finally, one aspect should be pointed out: the word “practice guideline” contains the term “practice”, so it is particularly important that guidelines are designed to be as practice-oriented as possible. A recent publication by Dubey, Akshat, Zewen Yang, and Georges Hattab, “A Nested Model for AI Design and Validation”, illustrates what is important here. Among other things, it presents a 5-layer model.

5.1 The 5-layer model as an example

This is a tool that, from the point of view of AI science, recommends a practice-oriented methodology to answer the questions that also played a role in the previous five recommendations:

  • Dealing with sometimes unclear terminological requirements (reference to the glossary)
  • Need for clear definitions (e.g. for AI models)
  • The differentiation and validation of “general” and “specific” tasks (domain and data)
  • Validating the criteria of “competence” (Domain, Data, Model and Prediction)
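As a loose illustration, the four concerns above can be arranged as nested validation layers. The layer names follow the bullets in the text; the checklist questions and the blocking logic are illustrative assumptions, not taken from the cited paper:

```python
# Layer names follow the bullets above (Domain, Data, Model, Prediction);
# the questions attached to them are illustrative assumptions.
LAYERS = {
    "Domain":     "Is the deployment context 'general' or domain-specific?",
    "Data":       "Does the training data span a wide range of distinct tasks?",
    "Model":      "Which properties indicate 'significant generality'?",
    "Prediction": "Can 'competent' performance be validated per task?",
}

def validation_report(answers: dict) -> list:
    """Walk the layers from the outside in; an open question in an outer
    layer blocks validation of the inner ones (the 'nested' idea)."""
    report = []
    for layer, question in LAYERS.items():
        status = "OK" if answers.get(layer) else "OPEN"
        report.append(f"{layer}: {question} -> {status}")
        if not answers.get(layer):
            break  # stop at the first unresolved layer
    return report

for line in validation_report({"Domain": True, "Data": True, "Model": False}):
    print(line)
```

The sketch shows why such a tool promotes operational feasibility: each vague term from the recommendations above becomes a concrete, answerable checkpoint, and gaps surface in a fixed order instead of everywhere at once.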

5.2 Recommendation

The additional recommendation with regard to the Code of Practice for GPAI models is:

  • The Code of Practice for GPAI models can and should also be supported by innovative tools such as the 5-layer model!
  • In addition to even more text, it is therefore also important to make complex relationships methodologically comprehensible by means of graphic elements.
  • This already makes sense for a specific AI model. It is even more important when highly complex demarcations are required, as in the case of dynamically evolving GPAI models.

Click here for part 1 of the article, in which the 5-layer model is initially discussed:

Here is the second part of the article, which also sheds light on the topic of “guidelines”:

The third part, which will be published shortly, will focus in particular on the interaction of specific AI models and GPAI models within high-risk medical AI. The 5-layer model will be the central hub of the challenges to be solved.


