AI Act in practice
Labelling AI-generated content
Document the creative contribution to your work so that your own achievements are not devalued.
The integration of Artificial Intelligence (AI) into the day-to-day workflows of media professionals (or, to use the modern term, ‘creators’) is in full swing, and there has been a dramatic increase in both the quality and quantity of available tools. Positive and negative effects are being felt in parallel across all related disciplines and project stages. Creators must once again reinvent themselves and learn to navigate the seismic upheavals in their daily work. Fundamental questions remain largely unanswered, partly due to the unpredictability of future developments and their consequences for society and the economy.
One step towards transparency and reorganisation is the so-called AI Act, which includes a labelling requirement for AI-generated content that comes into force on 2 August 2026.
For media professionals, it is of existential importance that the ‘creative contribution’ to their works involving AI is adequately documented, so that their own work does not suffer permanent devaluation. Documenting the project stages therefore ensures the traceability of the work’s creation process.
In today’s creative landscape, creators face a paradox: whilst generative tools such as Gemini and Midjourney open up unimagined visual horizons, there is a growing concern that their own contribution as media professionals or creators will be overshadowed and devalued. And the fear of being accused of deception means that the use of AI is often treated like a ‘black box’ – people use it extensively but remain silent about it.
But this game of hide-and-seek has an expiry date. With the EU AI Act, a regulatory turning point is drawing nearer, making transparency a requirement rather than an option. And it is not just legislators: platforms such as YouTube and Meta are already implementing standards for labelling synthetic media. Those who develop a clear strategy today view transparency not as a necessary evil, but as a professional statement. It is about evolving from a mere tool user to a visual director. Labelling is not an admission of weakness, but proof of control and artistic sovereignty.
The “sparkle symbol” (✨) has now established itself in various forms as a symbol for AI. It immediately signals to the viewer that AI-assisted processes were involved. For media creators, however, precise classification is crucial. The addition of “AI-assisted compositing” (here: in relation to visual images or elements) provides the ideal distinction from simple “one-click” generation. This wording is not found directly in the relevant section of the AI Act, but represents a recommendation that has emerged from extensive research and discussions with colleagues and clients.
This type of labelling makes it clear that the AI has merely assisted, whilst creative direction, the concept and the final composition remain with the human. It is a conscious decision to acknowledge one’s own “level of creativity”. Here, the AI is the digital tool, not the creator.
Through this differentiation, the aesthetic decision remains clearly attributed to the artists. In doing so, they demonstrate that they have not relinquished control, but rather expanded their toolkit. True to the motto: “My works are created through a multi-stage, curated process. Starting from a fixed artistic concept, I use generative AI systems as an extended toolkit. […] The AI functions here as a digital brush within a highly controlled, creative workflow.”
2 August 2026 is the critical deadline from which the transparency obligations of the EU AI Act will take full effect. From then on, AI-generated content in the EU must be clearly and visibly labelled to avoid deception.
Whilst the law mandates transparency, the following three-tiered nomenclature offers a possible strategic framework for a professional declaration:
- AI-assisted: The AI was used for research, structuring or proofreading (e.g. “Created with AI assistance”).
- AI-contributed: Parts of the content (fragments) originate directly from an AI (e.g. “Partially AI-generated”).
- AI-generated: The content was created predominantly and in its core components by an AI.
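The three tiers above can be expressed as a simple lookup. The following is a minimal sketch; the function name, tier keys and exact label strings are illustrative choices of mine, not wording prescribed by the AI Act:

```python
# Sketch: map an AI-involvement tier to a suggested disclosure label.
# Tier keys and label strings are illustrative assumptions, not
# legally mandated wording.

DISCLOSURE_LABELS = {
    "ai_assisted": "Created with AI assistance",       # research, structuring, proofreading
    "ai_contributed": "Partially AI-generated",        # fragments originate from an AI
    "ai_generated": "AI-generated content",            # core components created by an AI
}

def disclosure_label(tier: str) -> str:
    """Return a suggested disclosure string for a given AI-involvement tier."""
    try:
        return DISCLOSURE_LABELS[tier]
    except KeyError:
        raise ValueError(f"Unknown tier: {tier!r}") from None
```

Keeping the tier decision explicit in a lookup like this makes it easy to document, per project, which level of AI involvement was declared and why.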
Those who proactively embrace this transparency secure a head start in terms of trust, in a world where major platforms are already introducing ‘authenticity scores’ or similar measures.
Aesthetics vs. law – labelling need not ruin the work!
The law requires labelling in an “appropriate manner”. Instead of overlaying a permanent text overlay on the image, two levels of labelling are becoming established:
- Visible level: A brief overlay (2–3 seconds) in the intro or outro, together with a note in the video description (✨ AI-assisted compositing), is sufficient.
- Technical level (metadata): Standards such as C2PA (cryptographic proof of origin) and SynthID (invisible watermarks) are options for attribution. Browsers and platform algorithms can read this data and classify your work as “verified” and “transparent”. This strengthens brand integrity without disrupting the aesthetics.
The AI Act’s requirement for labelling in an “appropriate manner” explicitly includes machine-readable disclosure. For media professionals, this means that information regarding the use of AI should be embedded not only visually but also technically within the file’s code.
Two standards play a central role here:
- IPTC (International Press Telecommunications Council): The classic standard for image metadata has been expanded to include fields such as “Digital Source Type” in order to clearly declare AI generation (e.g. as “trainedAlgorithmicMedia”).
- XMP (Extensible Metadata Platform): This platform, developed by Adobe, makes it possible to store complex content credentials directly within the file.
(In professional applications, this data is usually added automatically and can be edited if necessary.)
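To illustrate what such machine-readable disclosure looks like under the hood, here is a minimal sketch that builds an XMP packet declaring the IPTC “Digital Source Type” field. The namespace URI and the controlled-vocabulary value (“trainedAlgorithmicMedia”) come from the IPTC standard; the function name is my own, and actually embedding the packet into an image file (e.g. with exiftool or an imaging library) is beyond this sketch:

```python
# Sketch: construct a minimal XMP packet declaring AI involvement via
# the IPTC Extension "DigitalSourceType" property. The namespace and
# vocabulary URIs follow the IPTC standard; writing the packet into a
# real file is left to dedicated tooling.

XMP_TEMPLATE = """<?xpacket begin="\ufeff" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
    Iptc4xmpExt:DigitalSourceType="{source_type}"/>
 </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>"""

# IPTC controlled-vocabulary term for content generated by a trained AI model
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def build_xmp_packet(source_type: str = TRAINED_ALGORITHMIC_MEDIA) -> str:
    """Return an XMP packet string declaring the digital source type."""
    return XMP_TEMPLATE.format(source_type=source_type)
```

Platforms and browsers that read embedded metadata can pick up exactly this kind of field, which is why professional tools write it automatically.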
Many media professionals and digital creators are looking with concern at their previously published works. However, content that was finally published (‘put into circulation’) before 2 August 2026 generally does not need to be edited retrospectively. Nevertheless, there is a difference between a one-off post and the ‘making available’ of content that you will continue to actively use as a reference or for advertising even after 2026. For such works, a brief archive disclaimer in the description is recommended.
Conclusion: By labelling AI content, you can, where applicable, document that you, as a media professional, have retained full control over the process – from the first prompt iteration to the final compositing. In an era flooded with synthetic media, the level of human creative control becomes a valuable asset. Those who make their methods transparent protect their authorship.
Bibliography/Sources:
European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 2024/1689. https://eur-lex.europa.eu/legal-content/DE/TXT/?uri=CELEX:32024R1689.
HÄRTING Rechtsanwälte. (2024). Transparency obligations in the AI Regulation (Art. 50). HÄRTING Knowledge. https://haerting.de/wissen/transparenzpflichten-in-der-ki-verordnung/.
Dreyer, S., Lampert, C., & Andresen, S. (2025). Labelling of edited (influencer) photos: Requirements, impact, regulatory approaches (Working Papers of the Hans Bredow Institute | Project Results, No. 75). Leibniz Institute for Media Research | Hans Bredow Institute (HBI). https://doi.org/10.21241/ssoar.99635.


