Transparency obligations for media professionals using GenAI

Reading time: 9 minutes

  • The EU AI Act stipulates that providers of AI systems must label content created with generative AI.
  • This applies to texts, images, audio and video files.
  • In the case of deepfakes and (potential) fake news, there are special requirements for deployers of AI systems.
  • There are also exceptions.
  • Failure to comply with the transparency obligations could result in severe sanctions.
  • The CAI initiative offers a model for implementation.

Articles of the EU AI Act mentioned in this post (German):

  • Article 3 No. 1, 3, 4, 60, 68 EU AI Act
  • Article 50 (2), (4), (5) EU AI Act
  • Article 85 EU AI Act
  • Article 87 EU AI Act
  • Article 96 (1) d) EU AI Act
  • Article 99 (4) g) EU AI Act
  • Article 113 EU AI Act
  • Recitals 131, 133 EU AI Act

Please note that the original text of this article is based on the official German translation of the EU AI Act. Its wording may differ in part from the English version of the EU AI Act.

Transparency in GenAI-generated content

In the future, many stakeholders will have to comply with the transparency obligations of the EU AI Act when creating and/or publishing content using generative AI (GenAI). Media companies in particular should be aware of three scenarios that justify transparency obligations:

  • The creation of AI-generated content with GenAI systems, addressed to AI providers (Article 50 (2) EU AI Act).
  • The disclosure of AI-generated deepfakes as audio, image or video by AI deployers (Article 50 (4) sentence 1 EU AI Act).
  • The publication of AI-generated text content that could be fake news by AI deployers (Article 50 (4) sentence 4 EU AI Act).

An overview of the three variants:

[Figure: overview of the three transparency-obligation scenarios (Medien_GenAI_EU_AI_Act-3.jpg)]

1. AI-generated content

Article 50 (2) EU AI Act states:

“Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.”

The requirement as such is comparatively clear. What is less clear is which actors in the AI value chain the provision applies to:

  • It applies to all direct providers within the meaning of Article 3 No. 3 EU AI Act.
  • It also applies to so-called “downstream providers” within the meaning of Article 3 No. 68 EU AI Act.
  • It does not apply to deployers or users within the meaning of Article 3 No. 4 EU AI Act.

Direct providers are primarily companies that manufacture generative AI systems within the meaning of Article 3 No. 1 EU AI Act and offer them on the market. AI models are not covered.

Downstream providers of AI systems can also be media companies if they operate their own (individualized) GenAI system. Whether and when this is actually the case must be clarified case by case. As a media company, however, you should definitely have this aspect on your radar – especially since the transition periods are not overly long (see below).

2. Exceptions possible

According to paragraph 2 sentence 3, there are exceptions to this transparency requirement:

“This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics.”

When this exception takes effect depends, among other things, on what counts as “standard editing”, what makes a change in semantics “substantial” rather than insignificant, and who decides this. The large number of vague legal terms that will have to be fleshed out case by case makes it difficult to provide clarity here without official guidelines.
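Schematically, however, the test of paragraph 2 sentence 3 can already be written down. The following Python sketch encodes one possible reading; the hard part – answering the two boolean questions in an individual case – remains the open legal issue.

```python
# One possible reading of Article 50(2) sentence 3; deciding the two
# booleans in an individual case remains a legal question.
def marking_obligation_applies(assistive_standard_editing: bool,
                               substantially_alters_input: bool) -> bool:
    """Return True if the marking obligation of Art. 50(2) still applies.

    The exception covers AI systems that (a) only perform an assistive
    function for standard editing, or (b) do not substantially alter the
    input data provided by the deployer or its semantics.
    """
    exception = assistive_standard_editing or not substantially_alters_input
    return not exception

# Example: a GenAI system that freely rewrites an article from scratch
print(marking_obligation_applies(assistive_standard_editing=False,
                                 substantially_alters_input=True))  # True
```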

With regard to the content created with GenAI, the following examination points must be considered:

[Figure: examination points for GenAI-generated content (GenAI_Inhalte_EU_AI_Act.png)]

As will be shown shortly, a distinction must be made depending on the type of media: paragraph 2 applies to all media, i.e. to text, images, audio and video – but “only” providers of AI systems are affected. This must be distinguished from the provision in paragraph 4, which applies only to audio, video and images with regard to deepfakes, and which sets specific requirements for texts with regard to the risk of fake news – in both cases addressed to deployers (see below). And you can become a provider of an AI system faster than you think …

3. How to implement paragraph 2 (technically)?

If no exception applies, the question arises as to how exactly AI providers should implement the marking – especially since the EU AI Act requires “machine readability” of the labelling while at the same time emphasising the cost/benefit ratio. Recitals 131 and 133 of the EU AI Act, among others, can help here.

  • Possible techniques include watermarks, metadata identifications and cryptographic methods (see the sketch after this list). Further details are not yet known.
  • However, Article 96 (1) d) EU AI Act indicates that corresponding guidelines for the implementation of the transparency obligations can be expected in the future.
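For illustration, here is a minimal Python sketch of a metadata-based marking, assuming a PNG output and an ad-hoc field name and wording of our own – the EU AI Act does not prescribe a concrete format, and real deployments would more likely follow an emerging standard such as C2PA (see below):

```python
# Minimal sketch of metadata-based marking using the Pillow library.
# Field name and wording are our own assumptions; no official format exists yet.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable 'artificially generated' notice into a
    PNG file's text metadata (one option besides watermarks and
    cryptographic methods)."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text(
        "ai_generated_disclosure",
        "This content was artificially generated (cf. Art. 50(2) EU AI Act).",
    )
    image.save(dst_path, pnginfo=metadata)

mark_as_ai_generated("output.png", "output_marked.png")
```

Note that plain metadata can be stripped when a file is re-saved or re-uploaded, which is one reason why cryptographically signed approaches such as content credentials are gaining traction.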

As long as these guidelines do not exist, it may make sense for providers of GenAI systems and media companies of all kinds to play it safe and, in case of doubt, do everything they can to avoid possible sanctions. Since implementing appropriate labeling tools and processes takes time, you shouldn’t wait too long.

Therefore, it is already worthwhile to take a closer look at the CAI initiative:

  • There you will find practical examples of how leading media companies have been voluntarily carrying out a similar task for years in the sense of Corporate Digital Responsibility (CDR).
  • The international initiative even received a CDR Award for this in 2021. In Germany, it is supported by dpa, Stern and Axel Springer, among others.

The following video shows what CAI credentials for GenAI-generated content are and how they can be automatically inserted into a media article. The implementation of the machine-readability requirement is of particular importance here (see above).
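To give a rough idea of the structure: a CAI/C2PA content credential is essentially a signed manifest that travels with the media file. The following Python dictionary is a simplified, illustrative sketch modeled loosely on the public C2PA specification (e.g. the “c2pa.actions” assertion and the IPTC digitalSourceType vocabulary); it is not a complete or authoritative manifest, and the tool name is invented.

```python
# Simplified, illustrative sketch of a C2PA-style content credential;
# not a complete or authoritative manifest structure.
content_credential = {
    "claim_generator": "ExampleNewsroom CMS/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC vocabulary term for AI-generated media:
                        "digitalSourceType":
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
    # In a real credential, the manifest is cryptographically signed,
    # making the label both machine-readable and tamper-evident.
    "signature": "<cryptographic signature over the manifest>",
}
```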

4. Special case: deepfakes

Deepfakes must be distinguished from the general content that can be created with GenAI. These are regulated in Article 50 (4) sentence 1 EU AI Act.

Literally, it says:

Sentence 1: “Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.”

The definition of deepfake can be found in Article 3 No. 60 EU AI Act, which states:

“‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.”

First, a video on the significance and growing relevance of deepfakes:

It is important, among other things, that deployers – i.e. users of AI systems – are addressed by paragraph 4 sentence 1. Compared to paragraph 2, this is a considerable expansion of the target group, because deployers use AI systems without being providers themselves, cf. Article 3 No. 4 EU AI Act.

Note also the special rules for deepfakes that constitute art or satire within the meaning of paragraph 4 sentence 3: this topic is dealt with, together with freedom of expression, in a separate article.

On the subject of deep fakes, the following examination points must be considered:

[Figure: examination points for deepfakes (Deep_Fake_EU_AI_Act-3.jpg)]

And how exactly does the disclosure of deepfakes have to take place? Despite intensive research, no reliable information could be found on this point. What is certain, however, is that the German Bundestag is dealing with this issue quite intensively. As soon as reliable information becomes available, it will be added in the comments at the end of the article.

5. Similar and yet different: Fake news

The deepfakes outlined above within the meaning of paragraph 4 sentence 1 explicitly concern the formats image, audio and video. This makes sense because they are to be distinguished from textual content within the meaning of paragraph 4 sentence 4. In the case of (potential) fake news, the text format plays a particularly important role – not least in news reporting.

Sentence 4: “Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.”

The first half-sentence refers to the “generation” of AI-generated text content, which must be labelled by deployers. So if you write marketing texts, technical articles or other texts with GenAI for publication, you should be careful about what is published and how.

The second half-sentence explicitly concerns “manipulation”. The term “fake news” does not appear, partly because fake news can also occur in combination with deepfakes. Conversely, “fake news” spread at breakneck speed, e.g. via X (formerly Twitter) or Facebook, is in many cases manipulated text content without manipulated images, audio files or videos.

According to Wikipedia, fake news is:

“Manipulatively disseminated, fake news … that spreads predominantly on the Internet, especially in social networks and other social media, sometimes virally.”

Fake news can be spread particularly well and quickly by means of text – and not only via the mainstream media, but also by bloggers and social media creators of all kinds. As deployers, i.e. users of an AI system, they can create corresponding texts – whether with or without manipulation.

Considering the large number of bloggers and influencers, two questions arise:

  • Am I also affected by the requirements as a blogger?
  • If so, how can or must the labelling be implemented?

On the subject of GenAI text content and fake news, the following examination points must be considered:

[Figure: examination points for GenAI text content and fake news (Fake_News_EU_AI_Act-3.jpg)]

6. How much are bloggers affected?

Below is a look at some special features in the context of (potential) fake news:

  • A deployer of an AI system within the meaning of Article 3 No. 4 EU AI Act can theoretically be any blog operator who publicly reports on his or her hobby and publishes artificially generated texts or even fake news.
  • However, Article 50 (4) sentence 4 EU AI Act clarifies that the text must serve to inform the public on matters of public interest in order to fall under the transparency obligation. This indeterminate legal concept can be researched on Wikipedia, among other places.
  • Media companies will definitely be affected; small blogs usually will not. Here, however, the individual case is decisive, because blogs or user-generated content on media portals can also have a considerable reach.
  • Article 50 (4) sentence 4 EU AI Act therefore first establishes a rule, which sentence 5 then immediately restricts again (see the sketch after this list): the obligation does not apply “where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content”. A look at a site's imprint helps here.
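Schematically, our reading of the rule in sentence 4 and the exception in sentence 5 can be written down as follows – a sketch of one possible interpretation, not an official test:

```python
# Schematic reading of Article 50(4) sentences 4 and 5 for AI-generated
# text; one possible interpretation, not an official test.
def text_disclosure_required(informs_public_on_public_interest: bool,
                             human_review_or_editorial_control: bool,
                             editorial_responsibility_held: bool) -> bool:
    """Return True if a deployer must disclose that a published text
    was artificially generated or manipulated."""
    if not informs_public_on_public_interest:
        return False  # the rule only covers texts informing the public
    # Sentence 5: exception if there is human review/editorial control
    # AND someone holds editorial responsibility for the publication.
    if human_review_or_editorial_control and editorial_responsibility_held:
        return False
    return True

# A hobby post outside the public interest: no obligation
print(text_disclosure_required(False, False, False))  # False
# Unreviewed AI-generated news on a matter of public interest: obligation
print(text_disclosure_required(True, False, False))   # True
```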

In essence, the provision is therefore aimed particularly at AI-generated content that could be published at breakneck speed without human review and without labelling, e.g. fake news or deepfakes. In such cases, if it is not made transparent that the content is AI-generated and has not been further validated, sanctions seem justified. In this way, at least in many cases, fundamental values such as freedom of information or freedom of the press can be protected from being endangered by AI abuse, especially in relevant public media.

7. “Freedom of expression” for AI-generated content

As already indicated above, artistic and satirical content is a potential exception. From the point of view of media professionals, including many journalists, the question may arise as to what difference it makes whether an “opinion” is spread with or without AI.

The topic of “freedom of expression and AI” has been highlighted by the Friedrich Naumann Foundation, among others, as the two topics overlap. In the regulatory context, however, a tension between freedom of expression, paternalism and AI is repeatedly invoked. The image of an excessively “paternalistic state”, in turn, is often sketched by conspiracy theorists. Conspiracy theories, for their part, are often generated and spread by AI – and so we come full circle!

If you want to delve deeper into this exciting topic, you should take a closer look at Article 5 (1) sentence 1 of the German Basic Law (GG) and its scope of protection. That is the topic of a separate article, especially with regard to deepfakes. At this point, however, it should already be pointed out that freedom of expression also has its limits – especially for the “enemies of freedom”! AI-generated “opinion” is no exception in this regard.

8. Liability issues

It gets really interesting when AI-generated content from third parties – e.g. AI-generated fake comments or advertising – is automatically integrated into media offerings. In many cases, the latter can easily be confused with editorial contributions.

More on the subject of liability for third-party content etc. here. With regard to AI ads, there will certainly be new reference cases in the future. It is to be expected that AI will significantly change (online) marketing in many respects, as described here. The topic of reviewing AI-generated comments is also likely to be exciting.

Here is an example of deepfakes in advertising videos (many more can be found on YouTube):

The integration of advertising in Internet portals is a “hot” topic for several reasons, because not only the creation but also the control and personalisation of ads is increasingly carried out by means of AI. The question of whether everything is “compliant with data protection regulations” must be separated from the transparency obligation at issue here. Nevertheless, the topics are closely related.

9. Accessibility, deadlines and sanctions

Attention: in the future, the transparency information must comply with the applicable accessibility requirements, see Article 50 (5) sentence 2 EU AI Act! On the subject of accessibility, the information on Wikipedia can help, among other things. As a rule, it is mainly public actors – including public media and authorities – that are covered by this obligation.

According to Article 113 EU AI Act, these transparency requirements in Chapter IV take effect 24 months after the EU AI Act entered into force, i.e. from 2 August 2026. Anyone who does not implement them correctly must expect severe penalties under Article 99 (4) g) EU AI Act: up to €15 million or 3% of total worldwide annual turnover, whichever is higher.
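For illustration, the “whichever is higher” mechanism of Article 99 (4) means that the cap scales with company size (the turnover figure below is invented):

```python
# Maximum fine under Article 99(4) EU AI Act: EUR 15 million or 3 % of
# total worldwide annual turnover, whichever is higher (turnover invented).
def max_fine_eur(annual_turnover_eur: float) -> float:
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

print(max_fine_eur(2_000_000_000))  # EUR 2 bn group -> cap of EUR 60,000,000
```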

Since media companies in particular have an important role-model function, it is quite likely that violations of the transparency obligation will be sanctioned there at an early stage. However, given the countless number of public media portals, it is not easy for supervisory authorities to carry out comprehensive market monitoring. In this respect, tips from the public, e.g. from customers, as well as complaints within the meaning of Article 85 EU AI Act and whistleblowers within the meaning of Article 87 EU AI Act (in particular current and former employees of media companies) will play an important role in detecting and reporting failures to implement the transparency obligations.
