Everyone is talking about AI chatbots for companies. Kaercher, too, has developed an AI chatbot together with an IT service provider. The AI models used can solve complex tasks while providing a natural chat experience. What role does the EU AI Act play in this, today and in the future? A sobering assessment, especially for downstream providers within the meaning of the EU AI Act.
- This article takes a public case study as an opportunity to outline the impact of the EU AI Act on chatbots.
- The case study comes from the IT company Zoi, which has implemented an AI chatbot for Kaercher.
- The solution is based on the GPT 3.5/4 AI models, among others. The chatbot was implemented even before the final version of the EU AI Act was adopted.
- Below are assessments of which standards of the EU AI Act will be relevant for which actors in the value chain.
- At the end, there is an overview of possible obligations for all parties involved as well as a knowledge test.
Articles of the EU AI Act mentioned in this post (German):
- Article 3 No. 1, 3, 63, 65, 66, 68 EU AI Act
- Article 50 (1) EU AI Act
- Article 51 EU AI Act
- Article 53 EU AI Act
- Article 54 EU AI Act
- Article 55 EU AI Act
- Article 56 EU AI Act
- Article 89 EU AI Act
- Article 99 EU AI Act
- Article 101 EU AI Act
- Article 111 EU AI Act
- Article 113 EU AI Act
Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.
Copilot with GPT 3.5 / 4
First, the hook of this article: a public case study by the IT service provider Zoi, which implemented a chatbot together with Kaercher. The chatbot is based on the GPT 3.5 and 4 AI models and was implemented as a browser-based copilot. It can be used by approximately 15,000 employees and is intended to support the following goals, among others:
- Research: collecting data, conducting surveys, analysing trends in language use, …
- Brainstorming: generating and organizing ideas, giving feedback, enriching them with research data, …
- Language learning: practicing speaking, listening, reading and writing through conversations and feedback, …
- Educational purposes: answering general questions, giving feedback on assignments, …
- Personal productivity: setting reminders, taking notes, managing tasks, …
Kaercher also expects the chatbot to improve customer support as well as product recommendations, quality control, supply chain management and predictive maintenance.
This article uses the information from the case study as a technical hook, as it contains important keywords in the context of the EU AI Act. The following statements are therefore an assessment that is completely independent of the participants in the case study and based exclusively on the publicly available information in it. Since there are a number of comparable case studies with other participants, the following content is kept as general as possible. Additions, notes and differing assessments can be added as comments to this article. The final version of the EU AI Act is still new, so many of the following statements are to be understood as theses.
Screenshot of the case study by Kaercher and Zoi
The following sentence of the case study is noteworthy with regard to the EU AI Act:
“While using the new AI tools, Kaercher and Zoi remain attentive to AI regulation developments, especially the proposed EU AI regulation. This regulation aims to establish comprehensive guidelines for AI systems in the EU, categorizing AI use by risk level and setting rules for each level. Kaercher is committed to incorporating ongoing legislative processes early while exploring the potential of AI.”
The note shows that the EU AI Act and its risk classes were considered relevant even before the Act came into force and were incorporated into the development process. A good strategy, because a chatbot is a long-term investment that “grows” into the AI Act as the transition periods expire. Companies should therefore follow the example of this case and already have the four risk classes of the EU AI Act on their radar. For an overview of the four risk classes, this article is recommended:
1. Relevance of the EU AI Act for chatbots
In general, the question arises as to how the EU AI Act is likely to affect such a project – whether at Kaercher or a comparable company.
Some introductory thoughts:
- First of all, it should be noted that such a chatbot is a general-purpose AI system within the meaning of Article 3 No. 1, No. 66 EU AI Act (which in turn is based on a general-purpose AI model from openAI/GPT, see Article 3 No. 1, No. 63 EU AI Act and recital 100).
- AI-based chatbots enable interactions with humans. They are therefore usually subject to the medium risk class of the EU AI Act.
- Users within the meaning of Article 50 EU AI Act must therefore be explicitly informed that they are communicating with an AI (see the sketch after this list).
- As a general rule, both providers and deployers of AI chatbots are obliged to ensure AI competence, see Article 4 EU AI Act.
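To make this transparency obligation tangible, here is a minimal sketch of how a chatbot could surface the disclosure before any model output. All names and the wording of the notice are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch: disclosing the AI nature of the chat partner up front
# (cf. the transparency idea of Article 50 EU AI Act). All names and the
# wording of the notice are illustrative assumptions.

AI_DISCLOSURE = (
    "Please note: you are chatting with an AI assistant, "
    "not with a human employee."
)

def open_chat_session(user_name: str) -> list[dict]:
    """Start a chat history that leads with the AI disclosure."""
    return [
        {"role": "system", "content": "You are a helpful company assistant."},
        # The disclosure is shown to the user before any model output.
        {"role": "assistant", "content": f"Hello {user_name}! {AI_DISCLOSURE}"},
    ]

if __name__ == "__main__":
    history = open_chat_session("Alex")
    print(history[-1]["content"])
```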
More information on the obligations under Article 50 EU AI Act for providers and deployers of generative AI chatbots in this article:
1.1 Legal relationships
Regardless of this, the EU AI Act has an impact on the different actors in the use case:
- The provider of the AI model (here openAI).
- The IT service provider that creates the AI system (here Zoi).
- The company that uses the chatbot or AI system (here Kaercher).
- The users of the chatbot as an AI system (here employees of Kaercher).
This results in several (potential) horizontal and vertical legal relationships that are established or at least influenced by the EU AI Act:
- In vertical legal relationships, there is a superior/subordinate relationship between supervisory authorities and the individual actors in their respective roles. The EU AI Act establishes mutual rights and obligations in this regard.
- Legal relationships can also be established, influenced or clarified by the EU AI Act between the actors.
- Article 78 EU AI Act imposes a duty of confidentiality on all actors with regard to all information and data they come into possession of in the performance of their tasks.
- In particular, Article 53 (2) EU AI Act in conjunction with Annex XII contains information obligations of the AI model provider towards the (downstream) providers of an AI system if they integrate an AI model into their AI system.
- In this respect, a breach of certain obligations under the EU AI Act may give rise to claims for damages pursuant to § 823 (2) BGB (German Civil Code). The situation is similar with the GDPR, for example, which is also regarded as a protective law between private parties (see, for example, a ruling by the Hamm Higher Labor Court).
1.2 Relevance for AI systems and AI models
The case study illustrates what needs to be paid particular attention to in the EU AI Act:
- The distinction between obligations for the parties involved in the context of AI systems as well as AI models.
- This differentiation is important in many respects – not least because openAI is both a provider of AI systems and AI models.
- In individual cases, this dual role can lead to different, sometimes parallel obligations.
As will be shown later, this also applies to the participants in the AI value chain.
Many people are not really sure what openAI’s “GPT” actually is: an “AI system”, an “AI model”, or both? The ambiguity stems, among other things, from the fact that both AI systems and AI models are often simply referred to as “AI”, without differentiating between system and model. As the following screenshot shows, even ChatGPT 3.5 classifies itself as an AI model and not as an AI system, although the question about its self-assessment explicitly referred to “ChatGPT”, i.e. the chatbot, and not to “GPT 3.5” or “4”, i.e. the AI models behind it.
Question for ChatGPT: “Are you an AI system or an AI model?”
The request was made with ChatGPT 3.5 in May 2024 (only in German):
One reason for general conceptual ambiguities around the term AI system is, among other things, that the definition of Article 3 No. 1 EU AI Act has turned out to be somewhat “cryptic” (which has already caused some criticism in the run-up to the final version of the EU AI Act).
2. Definition of AI system (Art. 3 No. 1 EU AI Act)
Accordingly, the following is considered an AI system within the meaning of the EU AI Act:
“… a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;”
All clear? And in contrast, what is an AI model? A definition of AI models is so far missing from the EU AI Act, even though it is more or less presupposed in, among others, Article 3 No. 63 and Articles 51 et seq. EU AI Act. Therefore, more attention will be paid to this point below. A description of AI models can be found on the OECD website, among others: the AI model “maps the structure and/or dynamics of all or part of the environment of the system. It is based on expert knowledge and/or data provided by humans and/or automated tools (e.g. ML algorithms).” In short, it consists of specific data and algorithms without being an autonomous actor itself. The justified criticism of the EU AI Act’s definition of AI systems is dealt with in a separate article.
More on the missing definition of AI models in this post:
In the following, an overview using the example of openAI/ChatGPT/GPT illustrates what an AI model is (here, one with a general purpose within the meaning of Article 3 No. 63 EU AI Act) and what an AI system within the meaning of the EU AI Act is.
3. AI Models vs. AI System (openAI)
The special thing about software companies like openAI is that they offer several AI services that are to be evaluated differently within the meaning of the EU AI Act:
- There is the chat as an AI system for direct interaction, available at the URL chat.openai.com as well as via a desktop application.
- In addition, there are various AI models with different specialized data and algorithms.
- One and the same chatbot can be used with several AI models from openAI (see the sketch below).
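A brief sketch of this system/model distinction in code, assuming the official openai Python package (v1 API) and an API key in the environment; the model identifiers shown are the publicly documented ones and may change:

```python
# Sketch: the chatbot (AI system) stays the same, the AI model is swapped
# via a single parameter. Assumes the official "openai" Python package
# (v1 API) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(question: str, model: str = "gpt-3.5-turbo") -> str:
    """Send the same question to whichever AI model the system selects."""
    response = client.chat.completions.create(
        model=model,  # the AI *model* is just a parameter of the AI *system*
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# The surrounding chatbot (UI, routing, logging) is the AI system;
# switching the parameter swaps the underlying AI model:
# ask("What is the EU AI Act?", model="gpt-3.5-turbo")
# ask("What is the EU AI Act?", model="gpt-4")
```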
3.1 Large Language Model (LLM)
With regard to text, GPT 3.5/4 qualifies as a Large Language Model (LLM). DALL-E is an AI model for generative image creation from text input, and GPT-4o (“omni”) is a multimodal model for composing both text and images.
All variants are almost certainly AI models with a general purpose and systemic risks within the meaning of Article 3 No. 65 and Article 51 EU AI Act/Annex XIII. More on that later.
Note: If someone talks only about “GPT”, they usually mean the model. When people talk about “ChatGPT”, on the other hand, they mean openAI’s AI system. The case in which the provider of the chatbot as an AI system is also the provider of an AI model is common. In the case study, however, it is different: the provider of the general-purpose AI chatbot within the meaning of Article 3 No. 3, No. 66 EU AI Act is, with some probability, Kaercher itself or even Zoi. With regard to the AI model, Kaercher and Zoi could therefore be considered “downstream providers” within the meaning of Article 3 No. 68 EU AI Act. More on this in a moment in the context of the AI value chain.
If you transfer the comparison of the AI model and the AI system to the use case outlined here, then:
- Kaercher uses two LLM AI models from openAI, namely versions 3.5 and 4.
- openAI’s own chatbot, on the other hand, is not used;
- instead, a separate chatbot instance with its own interface was created. Its provider, and with regard to the AI model its “downstream provider”, is likely to be the IT service provider or its customer.
- At the same time, Microsoft Copilot is mentioned. This is (if not individualized) a separate AI system that is integrated into Office 365. It is even possible that several AI systems are operated in parallel and appear to be “a single AI chatbot”.
3.2 Customization
According to the case study, the individualized chatbot frontend is usually much more than “just” an improved interface design: it is rather about the customization or extension of the AI model as well as the AI system (a minimal sketch of one such mechanism follows after this list):
- The model is connected to the chatbot via a specially protected API, so that openAI’s original AI models are not further trained by internal company use. In addition, openAI cannot know how the instance is used, e.g. what is being searched for.
- Internal data is often connected to the AI system (e.g. for the purpose of intelligent search on the intranet or a website). Depending on the severity of the individual case, this step alone could contribute to changing the (downstream) provider of an AI system or AI model.
- For this purpose, the individualization is implemented in a container app, for example, so that, among other things, role-based access control via LDAP is possible, i.e. role-specific use of pre-selected content (similar to a company intranet).
- For example, a managing director, an HR employee or a plant manager can access different documents with varying levels of secrecy. In addition, specific technical terms of the respective company can be trained in this way, which the initial model does not know.
- Finally, company-typical language styles can also be “fine-tuned” in this way, and the context of the input can be controlled.
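How role-specific pre-selection of content could look in principle is shown in the following minimal sketch. The roles, document classes and helper names are illustrative assumptions, not the case study’s actual implementation (where roles would typically be resolved via LDAP groups):

```python
# Sketch: role-based pre-selection of internal content before it is passed
# to the AI model as context. Roles, documents and helper names are
# illustrative assumptions only.

# Which roles may see which document classes (e.g. resolved via LDAP groups)
ROLE_PERMISSIONS = {
    "managing_director": {"public", "internal", "confidential"},
    "hr_employee":       {"public", "internal", "hr"},
    "plant_manager":     {"public", "internal", "technical"},
}

DOCUMENTS = [
    {"title": "Press kit",          "class": "public",       "text": "..."},
    {"title": "Salary bands 2024",  "class": "hr",           "text": "..."},
    {"title": "Maintenance manual", "class": "technical",    "text": "..."},
    {"title": "Board minutes",      "class": "confidential", "text": "..."},
]

def context_for(role: str, query: str) -> list[dict]:
    """Return only the documents this role is allowed to see."""
    allowed = ROLE_PERMISSIONS.get(role, {"public"})
    # A real system would also rank the documents by relevance to the query.
    return [doc for doc in DOCUMENTS if doc["class"] in allowed]

if __name__ == "__main__":
    for doc in context_for("hr_employee", "vacation policy"):
        print(doc["title"])  # Press kit, Salary bands 2024
```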
The information in the case study on the many different use cases (see above) shows that many individualizations have taken place or will still take place. This almost automatically means that the chatbot appears as a general-purpose AI system and that downstream providers can exist with regard to the AI models used in it, with all the resulting obligations (see below).
More on the question of provider vs. deployer in the case of individualization in this article (so far only in German):
4. AI Value Chain
How the constellation is structured in the individual case can and should be left open here due to a lack of detailed information. However, it can be said that the IT infrastructure outlined below is used in similar constellations:
Technical overview (exemplary):
4.1 Variants of implementation
This highly simplified overview can be implemented in various variants:
- For example, an IT service provider can acquire its own GPT license, connect the model to additional services on its own servers via an API, and pass it on to customers as a customizable “package” in a protected environment.
- However, it can also be the case that the IT service provider does not offer AI services itself, but works solely on the end customer’s AI infrastructure, with the end customer acquiring the licenses. Some AI models (often open-source variants) can also be used on premise; in this case, the IT service provider is usually only a “tool”, i.e. neither a provider nor a deployer itself (see the sketch after this list).
- Each situation has a different impact on the AI value chain as well as the resulting obligations of the (downstream) providers of the AI system and AI model.
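The difference between the variants can be illustrated with a short sketch, assuming the openai Python package and an on-premise model server exposing an OpenAI-compatible API (as many open-source serving stacks do); the URL and model names are hypothetical:

```python
# Sketch of the two deployment variants described above. Assumes the
# "openai" Python package and an on-premise server that exposes an
# OpenAI-compatible API; URLs and model names are hypothetical.
from openai import OpenAI

# Variant 1: the IT service provider (or customer) licenses a hosted model.
hosted = OpenAI()  # talks to the vendor's cloud API

# Variant 2: an open-source model runs on premise; only the endpoint changes.
on_prem = OpenAI(
    base_url="https://llm.intranet.example/v1",  # hypothetical internal URL
    api_key="not-needed-on-prem",
)

def ask(client: OpenAI, model: str, question: str) -> str:
    """Same application code for both variants."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Same application code, very different AI value chain:
# ask(hosted,  "gpt-4",        "Summarize our warranty terms.")
# ask(on_prem, "local-llm-7b", "Summarize our warranty terms.")
```

The application code barely changes, but who holds the license and operates the endpoint determines who ends up in the (downstream) provider role.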
As shown in the section “Consequences”, the different constellations have a direct impact on the efforts associated with a chatbot on the part of the IT service provider! These, in turn, play an important role in the calculation of business models based on AI.
4.2 Nested providers
The provider of the original AI model on the left will therefore usually be a company like openAI or another provider of a general-purpose AI model (e.g. Meta, Aleph Alpha, Mistral, Google, AWS), unless it is a self-developed AI model (e.g. built with Python). Everything to the right of it (with the exception of the pure users) is a potential downstream provider within the meaning of Article 3 No. 68 EU AI Act with regard to the AI model.
The following overview is a strong simplification of the resulting AI value chain, in which several actors can cause nested (downstream) provider relationships. When which constellation is relevant must be examined on a case-by-case basis.
AI value chain and (downstream) provider
The contents of the overview have been marked with question marks in various places because many things can be the case in this value chain, but nothing has to be. The individual case decides, including on the obligations of the different actors. In addition, the EU AI Act does not actually define the role of a “downstream provider for AI systems”; legally, this role only exists for GPAI models. In fact, however, a comparable role can arise for multiply nested AI systems and AI models, which is why the term is still used here. The expression can also be found in a similar form for AI systems in recitals 101 and 133 (but not in the articles).
The following overview of the resulting consequences shows that this is “a lot of wood” (as the German saying goes) that the actors in the AI value chain are likely to face in the future. One reason is, among other things, the role of downstream providers, which in individual cases can relate to both the AI model and the AI system.
5. Obligations for the actors
On the basis of the previous outline, different obligations for the individual actors can now be derived (by way of example and not exhaustively):
- Providers and deployers must comply with the obligation to ensure AI competence within the meaning of Article 4 EU AI Act: providers with regard to both the AI model and the AI system, deployers only with regard to the AI system.
- As a provider of general-purpose AI models that involve a systemic risk within the meaning of Article 51 EU AI Act, openAI must comply with the following obligations:
- First of all, Article 53 (1) EU AI Act/Annex XII (technical documentation, disclosure to providers of AI systems, strategy for copyright compliance, documentation of training content) applies
- Since openAI’s models are not open source, the exception in paragraph 2 does not apply.
- Cooperation with authorities is required (paragraph 3).
- Attention should also be paid to the creation of codes of conduct within the meaning of paragraph 4 and, in addition, Article 56 (3) EU AI Act
- If one does not already exist, an authorized representative is required for the U.S. company (Article 54 EU AI Act).
- The implementation of Article 55 EU AI Act is also presumably necessary:
- This includes a model evaluation within the meaning of paragraph 1 a),
- documentation of serious incidents within the meaning of paragraph 1 c),
- the implementation of extended cyber security requirements in accordance with paragraph 1 d),
- the interim use of codes of conduct in accordance with paragraph 2.
- In the case of the IT service provider, both the role of the primary provider of an AI system and the role of the downstream provider of the AI model can arise (depending on the contract design with the model provider and the customer: who ultimately puts the AI on the market, and how?).
- If the chatbot is operated, for example, under its own name and a URL attributable to the IT service provider, the IT service provider is highly likely to be the primary provider of the AI system itself, i.e. it must comply, among other things, with the transparency obligations of Article 50 EU AI Act.
- As the provider of the AI system, it can also be a downstream provider of the AI model within the meaning of Article 3 No. 68 EU AI Act. In doing so, it “inherits” both the rights and, at least potentially, the obligations within the meaning of Annex XII that openAI or another AI model provider also has.
- This means that many of the obligations of the AI model provider towards customers, but also towards authorities, listed above under point 1, must be fulfilled.
- If the customer ultimately operates the AI system, this includes the comprehensive description of the AI model in accordance with paragraph 1 (including architecture, inputs and outputs as well as licenses).
- Since two models are used in the use case, this basically applies to the description of both models.
- In addition, there are obligations under paragraph 2, including the model description and the documentation of training data.
- Among other things, the evidence regarding cybersecurity could be particularly tricky if it is an AI model of a hyperscaler.
- Important: As a downstream provider, the IT service provider is also entitled to the rights of appeal against the AI model provider in accordance with Article 89 (2) EU AI Act.
- The important role of the downstream provider is clarified once again in detail in recital 101.
- The various confidentiality obligations of Articles 51 et seq. EU AI Act must also be observed overall.
- The role of provider of the chatbot as an AI system can also lie with the company using it. However, that company can also be merely the deployer.
- If the customer is itself the provider of the AI system, the obligations outlined above for the IT service provider as provider of an AI system apply to it in a comparable form.
- Conversely, it also receives the rights of a downstream provider of an AI model in accordance with Annex XII, whether directly from the provider of the AI model or (passed through in a staggered manner) from the IT service provider.
- If, on the other hand, the customer is only the deployer of the AI system, it must above all comply with the deployer obligations, including those of Article 50 (4) EU AI Act.
The “sandwich position” of the IT service provider will certainly raise many practical questions about the specific obligations of downstream providers within the meaning of the EU AI Act. Until there is clarity here, IT service providers should have a basic awareness of the problem before concluding long-term contracts with AI model providers or customers in which they themselves (unknowingly) assume many of the aforementioned obligations that arise once the EU AI Act takes effect. And: in addition to all the topics mentioned, there are of course a number of other legal aspects to consider on all sides, including liability rules, data protection, labor law issues, and copyright or usage rights. Especially with regard to education or recruiting, care should be taken that there are no violations of Article 5 EU AI Act.
6. Deadlines and sanctions
A chatbot is a medium-risk AI system. According to the transitional periods of Article 113 EU AI Act, the requirements of Article 50 EU AI Act for AI systems must be implemented within 24 months of the date of entry into force of the EU AI Act, i.e. by 2 August 2026.
The standards of Chapter V (Articles 51 et seq. EU AI Act) for general-purpose AI models with and without systemic risks, on the other hand, must be implemented as early as 12 months after entry into force, i.e. by 2 August 2025. In addition, there is the obligation to implement codes of conduct by 2 May 2025. With regard to sanctions, violations of Article 50 EU AI Act concerning AI systems can result in fines of up to €15 million or 3% of turnover, cf. Article 99 EU AI Act. For AI models, violations can lead to fines of corresponding amounts in accordance with Article 101 EU AI Act. A rough comparison of the two fine ceilings is sketched below.
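As a back-of-the-envelope illustration of the two ceilings (assuming, in line with comparable fine provisions, that the higher of the two amounts marks the maximum; the concrete application of Article 99 must be checked in each individual case):

```python
# Illustrative only: comparing the two fine ceilings mentioned above
# (EUR 15 million vs. 3% of turnover). Whether and how the higher amount
# applies in a concrete case must be checked against Article 99 EU AI Act.
def fine_ceiling(annual_turnover_eur: float) -> float:
    fixed_cap = 15_000_000                     # EUR 15 million
    turnover_cap = 0.03 * annual_turnover_eur  # 3% of annual turnover
    return max(fixed_cap, turnover_cap)

# For EUR 600 million turnover, 3% (= EUR 18 million) exceeds the fixed cap:
print(f"EUR {fine_ceiling(600_000_000):,.0f}")  # EUR 18,000,000
```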
7. Procedural rules
Especially for the providers of AI models with general purpose (here openAI), it is important to know some procedural regulations, some of which will also be relevant for downstream providers of AI models:
- Article 66 (c) EU AI Act on implementation advice from the AI Board
- Article 88 EU AI Act on enforcement and powers
- Article 89 EU AI Act on possible actions and complaints
- Article 90 EU AI Act on warnings from the scientific panel regarding AI models
- Article 91 EU AI Act on the Documentation Request
- Article 92 EU AI Act on the assessment and access to the AI model
- Article 93 EU AI Act on dialogue with Commission and commitments
- Article 94 EU AI Act with reference to Article 18 of the Products Regulation
- Article 111 (3) EU AI Act on grandfathering and separate transitional periods
- Article 112 (6) EU AI Act on ongoing assessment and reporting on AI models
Since the present case concerns a general-purpose AI system (see Article 3 No. 66 EU AI Act and recital 101), the provision of Article 75 (1) EU AI Act must be examined with regard to supervisory competence. However, because the provider of the AI model is most likely not also the provider of the final AI system, the provision is unlikely to apply.
But: in general, be careful with paragraph 2! According to it, supervisory authorities can, on their own initiative, scrutinize general-purpose AI systems that are de facto high-risk, and a single one of the system’s several purposes can suffice for this. According to the list of planned use cases (see above), however, there is probably no reason for such a suspicion here.
Article 40 (2) EU AI Act on future concretizing standards, including on resource consumption, also refers to AI models, but is not relevant here because the present case does not involve a high-risk AI system.
Unlike high-risk AI systems, the chatbot examined here does not have to be entered into a database. It also does not need to be registered or reported elsewhere, especially since it is not used in a regulated industry. This raises the question of how compliance will be checked in these and comparable cases in the future. The answer could be: for example, when a complainant (Article 85 EU AI Act) or a whistleblower (Article 87 EU AI Act) gives rise to it. With regard to the expected sanctioning of violations, the European experience with the GDPR is also a useful comparison.
8. Conclusion
Even if not all of the aspects outlined above will apply in every individual case, it is clear that the chatbot, perhaps one of the economically most important AI applications, can cause a lot of regulatory effort despite its classification as “medium risk”.
This is especially the case when AI models are used within the AI value chain with (potentially multiple) downstream providers. If the AI models involved also entail systemic risks, it becomes particularly challenging. AI models based on open source, on the other hand, can significantly reduce the effort in various areas. The article also makes clear, among other things, that both IT companies and their customers should already be considering or reviewing all aspects that must be implemented in one way or another after the respective transitional periods expire.
As you can see, chatbot operators can face quite a lot! In addition to the legal aspects outlined above, downstream provider constellations also have a significant influence on business models and ROI calculations, especially for IT companies. Since it is currently still a little too early to evaluate all relevant aspects conclusively, it is definitely advisable to talk to customers at an early stage about the design of long-term contracts and about which scenarios are likely to arise under which circumstances. For example, it can make sense to run a pilot on an instance of the IT company and then implement the final solution directly at the customer’s site in order to avoid downstream provider constellations. As so often, the individual case decides.