Many companies and organizations are asking themselves whether, by when and in what form AI literacy should be implemented. It is sometimes even claimed that AI literacy must be established through appropriate mandatory training by February 2, 2025. Is that true? This article explains and analyzes the legal basis.
- Article 4 EU AI Act regulates the topic of AI literacy as part of the general regulations.
- According to various interpretation methods, a direct, explicit obligation to implement AI literacy is difficult to justify.
- Rather, it is an objective or a kind of guideline that can be observed voluntarily and implemented without being bound by a deadline.
- However, in the case of other breaches of duty, Article 4 EU AI Act is an important factor, among other things in determining the amount of sanctions.
Articles of the EU AI Act mentioned in this post (German):
- Article 3(56) EU AI Act
- Article 4 EU AI Act
- Articles 16 et seq. EU AI Act
- Articles 53 et seq. EU AI Act
- Article 95 EU AI Act
- Article 99 EU AI Act
- Article 113 EU AI Act
Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.
Is AI literacy mandatory?
Article 4 EU AI Act deals with the promotion of AI literacy. The debate about whether this should be considered a mandatory obligation has led to controversy even among lawyers and industry experts.
As long as this debate remains academic, there is little reason to worry. Sometimes, however, it is claimed on the Internet, among other things in ads, that companies must implement AI literacy by February 2, 2025. This claim can also be found in various “sales brochures” – not least from well-known law firms that offer training and advice in this regard (deliberately without citations: please google it yourself!).
Below you will find some screenshots of the corresponding Google ads. They appear when you search for AI literacy. The search term in this case was “KI-Kompetenz”, the official German translation of AI literacy; “Pflicht” is the German term for ‘obligation’ or ‘duty’ (search as of September 26, 2024):
Google Gemini also states self-confidently (queried twice on September 26, 2024):
Such statements can themselves raise legal questions if it is not certain that an “obligation” exists at all. See point 2 below and the attached comments at the end of the article.
1. Pros and Cons
Three questions arise in view of these statements:
- Does Article 4 constitute an independent legal “duty” or “obligation” – does this term apply or is it misleading?
- Is there an associated obligation to implement it by February 2, 2025?
- To what extent does the combination of two assertions in this regard increase the pressure to act?
To clarify these questions, the arguments for and against an “obligation” to promote AI literacy are examined using the established methods of legal interpretation. The article then examines by when and how AI literacy should be implemented.
A common saying goes: two lawyers, three opinions. That is a challenge in this case as well. It becomes problematic, however, when legal “opinions” are used to exert factual pressure in the market, e.g. to sell training courses. The ads in particular give the impression: “You have to act quickly now! Otherwise you are violating a legal obligation.” This is especially delicate with regard to authorities, because they are bound by the law. For these reasons, special attention is paid below to the balancing of interests. Claims such as those in the ads are only legitimate if a demonstrable, explicit obligation can be derived directly from Article 4.
The following table shows the four classic legal methods of interpretation as applied to the contra arguments.
Anyone who asserts a duty or direct obligation of AI literacy must engage with this interpretation. This is all the more true if the assertion is combined with statements about a very short implementation period!
It is precisely the combination of both aspects (assertion of an obligation plus assertion of a deadline) that creates high pressure to act!
In the interest of an unbiased examination, a possible pro argumentation is set out below so that it can be compared with the contra argumentation that follows.
1.1 Pro: Arguments for mandatory AI literacy
Does Article 4 contain a legal obligation designed to ensure that the comprehensive requirements of the EU AI Act can be met?
1.1.1 Indirect obligation through other regulations?
Articles 16 et seq. of the EU AI Act and other articles set out specific obligations for providers and deployers of AI systems. These concern risk management, security standards and transparency – similar to Articles 53 et seq. of the EU AI Act for GPAI models.
On the distinction between providers and deployers of AI systems, see also this article:
Without sufficient skills in dealing with AI, one could argue, it would be impossible to properly comply with these regulations. A company that does not provide AI literacy training could therefore indirectly violate other, mandatory regulations.
1.1.2 Systematic classification of Article 4 EU AI Act?
Article 4 EU AI Act is part of the general provisions of the AI Act. Chapter I establishes the framework for the implementation of the remaining provisions. If a company is to meet its security and risk assessment obligations, it needs the necessary knowledge and skills. In this respect, there is no question that the development of AI literacy is de facto necessary in these cases.
Article 4 could therefore be interpreted as a fundamental prerequisite for compliance with the rest of the rules. This would again lead to a kind of “indirect obligation”. An indirect obligation arises when a certain act or omission is not directly prescribed but must be derived as a necessary consequence of other, directly formulated obligations. It is then an obligation resulting from the interaction of several legal norms. However, such indirect obligations would then apply only to high- and medium-risk AI systems and GPAI models – not, for example, to low-risk AI systems.
1.1.3 Deadline by February 2, 2025?
Article 113(a) EU AI Act could give companies until February 2, 2025 to take all basic precautions. Some training providers interpret this to mean that AI literacy must be implemented, and therefore training conducted, by this date. Only then, the argument goes, can the other, more specific obligations be fulfilled.
1.1.4 The European AI Pact as a signal?
More than 100 companies have already signed the European AI Pact this year. It is a voluntary initiative that aims to implement the provisions set out in the AI Act even before the legal deadline. This shows, one could argue, that companies are proactively preparing to meet the requirements, even if the legislator does not yet make them mandatory.
In any case, the pact underlines the importance of training and skills in the use of AI systems in order to minimise risks and promote the ethical use of AI.
1.1.5 Interim Conclusion Pro
In view of some aspects, it can be assumed that there is at least a grey area that could give rise to indirect obligations.
1.2 Cons: Arguments against an obligation
On the other hand, it should be noted that the promotion of AI literacy is not to be regarded as a binding obligation for various reasons – if at all, then only indirectly within the framework of other explicit obligations. The basis for this view is the clear wording of the law and the structure of the regulations.
1.2.1 Wording and voluntariness
Article 4 EU AI Act deals with the promotion of skills in dealing with AI. The choice of words indicates that it is an objective or recommendation and not a binding obligation. Chapter I as a whole does not contain any obligations. It regulates goals, definitions, principles and guidelines such as those of human centricity (see this article in detail).
Article 4 EU AI Act does not use mandatory wording such as “must” or “are obliged”. Many other articles of the EU AI Act explicitly set out specific obligations for providers and operators. To be named are, for example: Chapter III, Section 3, Chapter IV, Chapter V, Sections 2 and 3. These parts of the EU AI Act explicitly speak of “obligations”!
1.2.2 Recital 20 and Article 95 EU AI Act emphasise voluntariness
Recital 20 and Article 95(2)(c) EU AI Act make it clear that the development of competences – and thus also their scope and expansion within the framework of codes of conduct – is in many respects voluntary. Companies are to be encouraged to offer training and build up skills on their own initiative. However, there is no regulation that makes this binding for everyone:
- neither with regard to the manner of implementation,
- nor the timing of implementation,
- nor the scope.
The codes of conduct are described as voluntary “best practices”. They are thus flexible in terms of timing, scope and structure. Neither the content nor the providers of the training courses are prescribed. For a sanctionable obligation to provide AI literacy, these requirements would be insufficient and too vague.
1.2.3 Historical interpretation
The draft of the EU AI Act of June 2021 did not contain the term “AI literacy” at all. Only in recitals such as (48) was “competence” mentioned, in the context of high-risk AI. It was not until the leaked version at the end of 2023/beginning of 2024 that the topic of “AI literacy” was introduced and publicly discussed.
If you follow the history of the EU AI Act (e.g. here), it becomes clear that AI literacy has only gradually found its way into the EU AI Act. It was presumably deliberately inserted into the Fundamentals section of Chapter I. This is also supported by recital (81) of the old version, which appeals in particular to the voluntariness of all those AI systems that are not high-risk AI. The same applies to point 3.3 of the context justification, which is only contained in the old version.
Thus, even in the first version, Chapter I contained above all basics, general provisions and, last but not least, overarching objectives. Obligations are therefore not established in any of the various versions of Chapter I of the EU AI Act.
In this respect, the provision of Article 4 AI Act is to be understood as an extremely important goal. However, it is not a sanctionable duty, not a compulsion, but an important request and guideline!
1.2.4 Short implementation deadline
If the EU AI Act were to grant only six months of preparation and implementation time after the entry into force, companies and organizations would not have time to adequately prepare for this obligation in terms of organization. It can be assumed that the development of an organizational foundation (responsibilities, budget, training plans) takes much more than six months and is also an ongoing task.
The pro argument that an obligation follows from Article 113(a) EU AI Act is already put into perspective by Article 113(c) EU AI Act, which explicitly states that the obligations concerned only begin after 36 months have expired. Article 113(a), by contrast, contains no reference to obligations at all.
A deadline-driven “hasty approach” cannot be in the spirit of the legislator with regard to AI literacy: even for the supervisory authorities, competencies are only built up “gradually”. It is also relevant that official duties in this regard are only regulated to a limited extent, for example in Articles 70 and 76 EU AI Act. In addition, a right of appeal was granted for missing competences of notified bodies – a right that does not apply immediately. Why, then, should companies and other organizations be put under such “(time) pressure”?
1.2.5 All risk classes are covered by AI literacy
Article 4 EU AI Act does not differentiate according to risk classes, but according to application contexts. AI literacy is considered an important tool. This also applies to low-risk AI systems, for which there are otherwise no explicit obligations for providers and deployers. The AI Act stipulates that for medium- and low-risk AI systems, voluntary elements are to be integrated into codes of conduct, as required for high-risk AI. This is expressly mentioned in recital 165.
This is logical, as there are no mandatory requirements for low-risk AI systems. Nevertheless, incentives for the development of AI literacy are to be created. Overall, it can therefore be assumed that Article 4 EU AI Act does not contain a binding obligation – and certainly not a deadline – but rather represents an objective and guideline. It has a motivating effect as a “carrot” and not as a “stick”.
1.2.6 No sanctions provided
In this sense, a decisive argument against an obligation is also the lack of sanctions, as provided for in Article 99 EU AI Act for the violation of all explicit obligations. While violations of the specific obligations for providers and deployers (e.g. Articles 16 et seq. EU AI Act) can be heavily sanctioned, there are no sanctions at all with regard to the promotion of competence.
This clearly shows that the legislator has not provided a mandatory regulation for the development of competences: sanctions are generally necessary for the credible enforcement of obligations – and conversely obstructive to the promotion of voluntary measures. If obligations had been intended, the legislator would have provided sanctions for the lack of appropriate training, i.e. chosen the “stick”.
Nevertheless – and this is important:
- Within the framework of the obligations of Article 16 EU AI Act, for example, the non-implementation of AI literacy can lead to higher sanctions.
- However, even that only after the expiry of the relevant deadlines for the fulfilment of the primary obligations.
See, in general, the deadline model of the EU AI Act.
1.2.7 Interim Conclusion Cons
Methodological interpretation of the provision very likely leads to the conclusion that there is at least no direct obligation – and certainly not one covering all risk classes with a correspondingly short deadline of February 2, 2025.
1.3 Classification as a “fundamental right”?
Finally, independently of the pros and cons above, it could be argued that Article 4 has a character similar to a fundamental right in its structure and meaning, comparable to fundamental rights in the German Grundgesetz (GG) or the EU Charter of Fundamental Rights.
1.3.1 Fundamental nature
Article 4 aims to promote AI literacy to ensure safe and ethical use of AI systems. Even though Article 4 does not contain any explicit obligations, it provides a basic framework that emphasizes the importance of education and awareness in dealing with AI.
Similarly, some fundamental rights in the German Grundgesetz, such as Article 1 GG (human dignity) or Article 12 GG (freedom of occupation), have a strongly normative character, without always being directly justiciable. Article 1 EU AI Act, which regulates human centricity as part of Chapter I, also has a similar character.
1.3.2 Protection of the fundamental rights of others
Another argument is that the purpose of Article 4 – the promotion of AI literacy – is indirectly aimed at protecting fundamental rights, especially in relation to privacy, data protection and non-discriminatory use of AI. AI literacy could therefore be understood as a necessary step towards the realisation of other fundamental rights. Thus, the educational aspect of Article 4 could be interpreted as a protection mechanism for civil rights in the digital age.
1.3.3 Orientation towards common goods
Article 4 aims to promote a collective awareness and the ability to use AI critically, which could be compared to the common good orientation of some fundamental rights of the German Grundgesetz (such as the principle of the welfare state). The promotion of AI literacy has a clear link to the common good, as it is a prerequisite for avoiding risks and promoting transparency and responsibility in dealing with AI.
1.3.4 Structural parallels
Although Article 4 does not contain any direct obligations and directly enforceable rights, it can be seen as a structuring principle for the entire AI Act, similar to how some articles of the German Grundgesetz or the EU Charter are guiding principles that radiate to other laws. Article 4 could thus provide a framework similar to a fundamental right to guide all subsequent obligations and regulations related to the safe and responsible use of AI.
1.3.5 Relationship to fundamental rights
Fundamental rights such as Article 5 GG (freedom of expression) or Article 2 GG (right to the free development of the personality) can be affected by the use of AI systems. Thus, the promotion of AI literacy under Article 4 AI Act could be interpreted as a kind of protective mechanism to ensure that these fundamental rights are not violated by unethical or irresponsible AI applications.
Even if Article 4 EU AI Act does not establish fundamental rights in the formal sense, it could still be regarded as similar to fundamental rights, as it creates a fundamental guideline for the promotion of AI literacy and the realisation of ethical AI use. It could thus function as a structural norm similar to the fundamental rights in the German Grundgesetz by establishing the framework for the safe and fundamental rights-preserving use of AI.
Given this comparison with fundamental rights, Article 4 EU AI Act would at most bind the state directly vis-à-vis the citizen. Theoretically, students could, for example, derive from it an obligation to be taught AI literacy at school. However, the remaining text of the law, which explicitly names providers and deployers as well as the context of use, speaks against this.
1.4 Overall Conclusion
After careful consideration of the pro and con arguments, it can be said:
- The promotion of AI literacy according to Article 4 EU AI Act is not a binding – direct – obligation.
- However, it is more than a mere recommendation and also more than an objective: it can operate in the sense of an – indirect – obligation.
- The violation of indirect obligations does not lead directly to sanctions – but non-compliance can be taken into account, for example, in the amount of sanctions.
The topic of AI literacy can therefore only be understood as an obligation in the context of other specific obligations, obligations that must result from the other chapters of the EU AI Act.
The latter is particularly important for AI systems with low risks, as there are no explicit or sanctionable obligations at all. This makes it all the more important to motivate these actors to voluntarily build up AI literacy.
For high- and medium-risk AI systems, Article 4 EU AI Act thus also establishes a justiciable guideline within the framework of existing obligations and their related deadlines!
Training providers and other relevant actors must therefore point out all of these qualifying aspects if they do not want to expose themselves to the accusation of “sales-promoting” misinformation! The claim that AI literacy must be implemented “by February 2, 2025” due to an “obligation” is therefore doubly questionable and misleading!
Relax!!! There is no urgent need to carry out any mandatory training courses by February 2, 2025 – which are not officially recognized anywhere anyway. Anyone who claims this anyway (especially as a training provider, lawyer or search engine) is treading on extremely thin ice from a legal point of view!
2. Due diligence regarding communication
Since it is evidently possible to assert the existence of an obligation with clever argumentation, it is important for certain professional groups and service providers to make clear that this is just one (and, moreover, difficult to justify) possible interpretation.
If there is no such clarification, or if (as with Google) it is firmly claimed despite all doubts that an obligation “unequivocally” exists, legal consequences may follow.
2.1 Training providers
Conceivable are (especially in Germany):
1. Misleading according to the UWG (German Act against Unfair Competition):
- If training providers make misleading statements about legal obligations that do not exist in this form, this could be classified as a misleading commercial act under Section 5 UWG.
- Misleading by omission could also be relevant if relevant information about the uncertain legal status is deliberately omitted in order to encourage the sale of training.
2. Consequences under consumer protection law:
- In the context of consumer protection, companies have a duty to provide clear and truthful information. False claims about legal obligations could be considered a violation of consumer protection regulations.
- Warnings from consumer protection associations or competitors would be possible here, which can lead to injunctions and possibly claims for damages.
3. Consequences under competition law:
- Competitors could sue for unfair competition if they suffer a competitive disadvantage due to false statements about legal obligations. This could lead to warnings or legal proceedings.
- Providers must ensure that their marketing measures clearly communicate the legal uncertainty in order to avoid distortions of competition.
4. Consequences under contract law:
- If customers book training courses based on the false claim that there is a legal obligation to implement AI literacy by February 2, 2025, they could assert contractual claims such as withdrawal from the contract or damages if the claim turns out to be false.
2.2 Lawyers and legal advisors
Conceivable are (especially in Germany):
- Professional consequences:
Violation of the duty of care (§ 43 BRAO, § 11 BORA – two relevant German professional codes for lawyers): Lawyers are obliged to advise their clients carefully and correctly. False statements about legal duties could lead to disciplinary measures (such as warnings or fines) by the Bar Association.
- Liability:
Lawyer’s liability (§ 280 BGB): Clients could claim damages if they suffer financial losses as a result of incorrect advice.
- Consequences under competition law:
Misleading advertising (§ 5 UWG): If the false advice is used as an advertising measure (e.g. to win clients for training), this could be punished as misleading advertising.
2.3 Search engines
Conceivable are:
- Misleading according to the UWG:
Misleading business practices (§ 5 UWG): False information about legal obligations that leads users to make certain decisions (e.g. the purchase of services) could be considered misleading.
- Liability for content and liability as a provider:
If Google (or Google Gemini) itself acts as a content provider, Google could be held liable if companies or users act on false legal information and suffer financial damage.
- Consumer protection:
Violation of transparency obligations: Google may be obliged to correct incorrect legal information in order not to mislead users.
In all of these cases, legal action could be taken by competitors, clients or consumer protection organisations, especially in the case of competition law violations.
3. Conclusion and recommendation
AI literacy is a paramount instrument of the EU AI Act (and beyond). There is no question that this topic is important. It must also be taken into account – indirectly. Every company is explicitly called upon to do everything conceivable to achieve this goal.
However, preparing and implementing sensible measures takes appropriate time. There is no obligation to have successfully implemented measures by February 2, 2025 – at least not under the EU AI Act.
Therefore, the following recommendations:
- Take the time needed to plan and implement AI literacy calmly.
- Do not allow yourself to be pressured into booking training courses without first checking whether and to what extent direct obligations exist and what measures are actually necessary.
- Avoid unnecessary unrest in your own organization: pushing the topic of AI literacy too hard because of supposed time pressure creates a “risk of trouble”.
- It is important to anchor AI literacy in a planned and sustainable way. This is an endurance run, not a sprint. Avoid avoidable frustration: pace yourself and your organization well!
But always keep in mind: the topic of “AI literacy” is enormously important! For this very reason, nothing should be rushed. It is an ongoing task whose implementation requires good planning and an individually tailored approach.
The EU AI Act does grant the freedom to plan and implement AI literacy in a meaningful and sustainable way! This is also important in view of the many doubts about and criticisms of the EU AI Act: as this example shows, such criticism is not always justified, because rumours and false claims can create a distorted or negative impression of the EU AI Act.
About the author:
Oliver M. Merx
Oliver M. Merx is the initiator and editor of CAIR4. He works as a digital consultant in the greater Munich area. A trained Rechtsassessor (fully qualified German lawyer) with a focus on administrative law, he worked for several years as a tutor in adult legal education and as head of a legal publishing house. He is particularly interested in the interplay between normative law and voluntary Corporate Digital Responsibility (CDR). He is co-editor of the CDR-Magazin, founder of the CDR LinkedIn group established in 2019, author of the CDR Playbook, and co-creator of the CDR Building Bloxx and the international CDR Manifesto. He is also known as a speaker at numerous events. His professional focus is on healthcare, agile legislation and the AI context.