The labeling obligation for AI-generated content

Programs like ChatGPT, DALL-E, Midjourney, and Adobe Firefly have been very popular for quite some time. These programs enable users to create AI-generated text or images by entering a command, known as a prompt. Such tools are frequently used in marketing to save time and create appealing websites. However, images or sound recordings manipulated using AI also circulate widely on social networks, where they often contribute significantly to the spread of misinformation.
In this article, we explain the extent to which such AI products must be made recognizable to the user and what the general legal situation is. The recently enacted EU AI Act (»AI Act«) will play an important role here in the future. You can learn more about the AI Act in this article.
Current legal situation
First, a few words about the current law. At the moment, there is no specific obligation to label AI-generated works. However, indirect obligations may arise from other areas of law or general terms and conditions.
As previously mentioned, AI products do not receive copyright protection. This is because the texts or images are not created by a human but by a machine; they therefore lack the “personal intellectual creation” that copyright law requires. The fact that the prompt itself, i.e. the instruction to the AI to generate a text or image, comes from a human and may express a creative intellectual process is not, in the current debate, considered sufficient to confer such protection on the AI-generated result. The practical consequence: all published AI-generated texts and images, but also software, designs, and even inventions or processes that would in principle be patentable, can be used, applied, copied, and even republished by others at will.
However, an indirect obligation to mark these AI products as such may arise from Section 5 of the German Act Against Unfair Competition (§ 5 UWG). This is the case if the decision not to label AI-generated content leads to the erroneous assumption that the publisher is the author. In certain cases, this can constitute misleading commercial conduct that may be challenged with a cease-and-desist letter under unfair competition law.
Another aspect to be aware of with unlabeled AI elements: you are responsible for the content you publish, for example on the internet. If a factual error creeps into the AI output, you will generally be the one held liable.
Voluntary commitments by tech companies
There is currently no general statutory requirement to label AI-generated content. However, major tech companies such as Amazon, Meta, Google, Adobe, and IBM have voluntarily committed to flagging AI-generated content. Meta, for example, is working on automated detection of AI products posted by Instagram or Facebook users. In addition, similar obligations can be found in the terms of use and general terms and conditions of individual platforms. Users should therefore always check whether a service they are using requires AI content to be identified.
The labeling requirement of the AI Act
The European Union’s AI Act now provides more specific guidance on this topic. Adopted by the European Parliament and the Council of the EU in spring 2024, the regulation was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. Its rules will now become applicable in stages. Companies should therefore prepare early to implement the new requirements.
In addition to banning AI systems that endanger fundamental rights and imposing control and transparency obligations on high-risk AI systems, the regulation also introduces a labeling requirement for AI products in Article 50. How far does this labeling requirement go, and which AI products does it cover?
Specifically, the legal text states the following:
“Anyone who uses an AI system that generates or manipulates image, audio or video content that represents a deep fake must disclose that the content was artificially generated or manipulated […] Anyone who uses an AI system that generates or manipulates text that is published for the purpose of informing the public about matters of public interest must disclose that the text was artificially generated.”

Deep fakes
The first major group of cases requiring labeling comprises so-called deep fakes: images, video, or sound recordings that are artificially created or manipulated but create a deceptively genuine impression. A deep fake often imitates real people, putting words in their mouths that they never said or placing them in a false context. In this way, false information is deliberately spread.
Such image, audio, or video content must be clearly labeled. The only exceptions apply in the area of criminal prosecution or where the content is a creative or satirical work. In the latter case, a notice is sufficient as long as it does not unduly impair the presentation or enjoyment of the work.
»Matters of public interest«
The second group of cases concerns texts that serve to inform the public about »matters of public interest«, for example journalistic texts in current affairs reporting. Such texts must also be labeled as AI-generated. Here, too, there are exceptions in the area of law enforcement. The legal text continues:
“This obligation does not apply if […] the AI-generated content has been subject to human review or editorial control and a natural or legal person has editorial responsibility for the publication of the content.”
The labeling requirement therefore does not apply if the text has been reviewed by a human for errors and untruths and, in addition, a natural or legal person takes editorial responsibility for any errors or untruths that escape that review.
How do I label AI-generated content?
That covers the legal requirements for when AI-generated content must be labeled. The question remains what such labeling might look like. There are no specific requirements for this (yet), so here are a few approaches, some of which are already being used in practice:
The most straightforward approach is to incorporate the notice into the text itself or to place it before or after the text. Here it is crucial that the label is sufficiently prominent to be noticed by the reader. For images and videos, the notice can go in the caption, or a watermark can be added to the image or video. Watermarks can be designed to be visible or invisible (so-called “low perturbation watermarks”); a simple visible variant is sketched below. On social media platforms, corresponding hashtags could be used instead.
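To make the visible-watermark option concrete, here is a minimal sketch, assuming Python with the Pillow imaging library; the file names and the notice text are placeholders chosen purely for illustration, not a prescribed form of labeling:

```python
from PIL import Image, ImageDraw, ImageFont

def add_ai_notice(path_in: str, path_out: str, notice: str = "AI-generated") -> None:
    """Stamp a visible disclosure notice onto the lower-right corner of an image."""
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()
    # Measure the notice so it sits inside the image with a small margin.
    left, top, right, bottom = draw.textbbox((0, 0), notice, font=font)
    text_w, text_h = right - left, bottom - top
    x, y = img.width - text_w - 10, img.height - text_h - 10
    # A semi-transparent box behind white text keeps the notice legible
    # on any background, i.e. "sufficiently prominent" for the reader.
    draw.rectangle((x - 5, y - 5, x + text_w + 5, y + text_h + 5), fill=(0, 0, 0, 160))
    draw.text((x, y), notice, font=font, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical file names, for illustration only.
add_ai_notice("generated.png", "generated_labeled.png")
```

An invisible (“low perturbation”) watermark would instead encode the disclosure in the pixel data itself, which requires specialized tooling rather than a few lines of drawing code.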

Some social media platforms automatically generate a notice using an algorithm that detects AI content. So far, however, no program recognizes such content reliably and accurately. Another option is to additionally record the information in the file’s metadata so that platforms can read it. Adobe, for example, proposes its »Content Credentials« tool, which pins an icon onto the image; by clicking on the icon, the user can view the metadata, including the AI label.
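As an illustration of the metadata route, the following sketch writes a disclosure into a PNG’s text chunks using Pillow. The key names are hypothetical and not part of any standard; a real deployment would more likely follow an established scheme such as Adobe’s Content Credentials (the C2PA standard):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_ai_disclosure(path_in: str, path_out: str) -> None:
    """Write a machine-readable AI disclosure into a PNG's text chunks."""
    img = Image.open(path_in)
    meta = PngInfo()
    # Hypothetical keys for illustration; real deployments would follow a
    # standard such as C2PA / Content Credentials rather than ad-hoc chunks.
    meta.add_text("AIGenerated", "true")
    meta.add_text("AIDisclosure", "This image was artificially generated.")
    img.save(path_out, pnginfo=meta)

embed_ai_disclosure("generated.png", "generated_tagged.png")
# A platform (or anyone else) can read the tag back from the file:
print(Image.open("generated_tagged.png").text.get("AIGenerated"))  # -> "true"
```

Unlike a visible watermark, such metadata does not alter the image itself, but it only works if the receiving platform actually reads and displays it and if intermediaries do not strip the metadata on upload.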
It remains to be seen which approach will ultimately prevail. Do you have questions about the labeling requirements of the EU AI Act? Or do you need a legal assessment of which of your content you should label, and how? Please feel free to contact our attorneys at LLP Law|Patent.
Sebastian Helmschrott | Rechtsanwalt (Lawyer), Certified Specialist for Information Technology Law, Department Head of IT-Law at BISG e.V.
Mr. Helmschrott is your competent LLP Law|Patent point of contact for contract design, especially for international companies in the field of semiconductors, as well as other modern technologies such as LED/OLED, embedded systems and software-supported processes. He is responsible for the national and international aspects of your IT procurement procedures, as well as IP law areas focusing on licensing and research cooperation.