AI Act Article 50: What Media and Institutions Must Know About AI Content Transparency
On 5 March 2026, the European Commission published the second draft of its Code of Practice on AI content transparency. From August 2026, labelling obligations become enforceable. Here is what this means — concretely — for press, media and public institutions.
The transparency obligations of Article 50 of the AI Act will become enforceable in less than 5 months. AI providers, media outlets and institutions: the time to prepare is now.
Electoral deepfakes. Synthetic images embedded in conflict coverage. Fake articles attributed to established newsrooms. The proliferation of AI-generated or AI-manipulated content is no longer an abstract threat — it is actively reshaping the information ecosystem at a pace that regulation has struggled to match.
The European Union has decided to act. With the AI Act — the world's first comprehensive regulatory framework dedicated to artificial intelligence — and its Article 50 in particular, Brussels is now requiring providers of generative AI systems to make their outputs detectable and traceable.
The deadline: 2 August 2026.
For Provenance For Trust, this regulatory milestone is not an external constraint. It validates the core of our mission. Since our founding, our collective has worked to guarantee the authenticity, traceability and credibility of information. Article 50 of the AI Act provides the legal architecture that formalises and accelerates that work.
Article 50 of the AI Act: What Does It Actually Say?
The AI Act, which entered into force on 1 August 2024, classifies AI systems by risk level and imposes proportionate obligations on those who develop or deploy them. Article 50 deals specifically with transparency towards users and the public. For generative AI systems — those that produce images, videos, audio, text, code or other synthetic content — it establishes four core obligations:
- AI-generated content — images, video, audio, documents, text — must be marked in a machine-readable way so that its synthetic origin can be detected.
- People interacting with AI systems — chatbots, virtual assistants — must be clearly informed that they are communicating with a machine, not a human.
- Audiovisual content generated or manipulated to depict real individuals must be unambiguously identified as synthetic or artificially altered.
- Mechanisms for verifying the origin of AI-generated content must be accessible to the general public, researchers and competent authorities.
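To make the first obligation concrete, here is a minimal sketch of what "machine-readable marking" can look like in practice: a small provenance manifest, cryptographically bound to the content it describes. The field names (`synthetic`, `generator`, `content_sha256`) are illustrative assumptions, loosely inspired by C2PA-style manifests, not taken from the AI Act or any standard.

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Build a minimal, machine-readable provenance manifest (a sketch).

    Binding the manifest to a SHA-256 hash of the content means any
    alteration of the content invalidates the marking.
    """
    manifest = {
        "synthetic": True,                     # declares AI-generated origin
        "generator": generator,                # which system produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(content: bytes, manifest_json: str) -> bool:
    """Check that a manifest matches the content it claims to describe."""
    manifest = json.loads(manifest_json)
    return manifest.get("content_sha256") == hashlib.sha256(content).hexdigest()

image = b"synthetic press photo bytes ..."
m = make_provenance_manifest(image, "example-image-model")
assert verify_manifest(image, m)            # intact content verifies
assert not verify_manifest(image + b"x", m) # any tampering breaks the binding
```

Real-world implementations (C2PA Content Credentials, invisible watermarks) are considerably richer, but the core idea is the same: the marking travels with the content and can be checked by anyone.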
The Code of Practice: What the V2 Published on 5 March 2026 Changes
To support the implementation of Article 50, the European Commission developed a voluntary Code of Practice — a document that translates the regulation's requirements into technical and operational guidance. On 5 March 2026, the Commission published its second version, introducing three major developments: multi-layer marking is now officially endorsed, detection must be publicly accessible, and a new concept emerges — the shared provider-agnostic detection interface, or European verification hub.
The Regulatory Timeline
- August 2024: AI Act enters into force
- Early 2025: First draft of the Code of Practice published
- 5 March 2026: V2 of the Code of Practice published
- 30 March 2026: Deadline for stakeholder comments
- June 2026: Final version of the Code expected
- ⚠ 2 August 2026: Enforcement deadline — Article 50 obligations become enforceable
What Article 50 Means for Media, Press and Institutions
The question is natural: does Article 50 directly concern newsrooms and public institutions? The answer depends on the role each actor plays in the content production and distribution chain.
| Actor | Role under the AI Act | Key Obligations |
|---|---|---|
| Generative AI provider (e.g. a software vendor) | Provider | Implement technical content marking; provide a detection mechanism |
| Media / newsroom using an AI tool | Deployer | Ensure published AI content is marked; inform audiences; document AI use |
| Public institution publishing content | Deployer | Verify compliance of AI tools used; implement editorial labelling policies |
| Distribution platform | Deployer / Intermediary | Enable detectability of AI content; cooperate with verification mechanisms |
For media organisations, the challenge is twofold. First, they must verify that the AI tools they use — for generating visuals, summaries, translations, or edits — comply with the marking requirements. Second, they must develop clear editorial policies for signalling synthetic content to their audiences.
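The second half of that twofold challenge, an editorial labelling policy, can be partly automated. The sketch below assumes each draft is represented as a dictionary with hypothetical fields `ai_assisted` and `disclosure`; the field names and the rule itself are illustrative, not prescribed by the AI Act.

```python
def check_disclosure(article: dict) -> list:
    """Pre-publication check (a sketch): AI-assisted pieces must carry
    an audience-facing label before they go out.

    Returns a list of problems; an empty list means the piece may be
    published under this (hypothetical) editorial policy.
    """
    problems = []
    if article.get("ai_assisted") and not article.get("disclosure"):
        problems.append("AI-assisted content lacks an audience-facing label")
    return problems

draft = {"title": "Market wrap", "ai_assisted": True, "disclosure": ""}
assert check_disclosure(draft)  # flags the missing label
```

A rule this simple is obviously not full compliance, but wiring such a gate into the CMS turns an editorial policy from a document into an enforced workflow step.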
This is precisely where Provenance For Trust's work becomes directly actionable: not merely as a compliance tool, but as an editorial trust signal towards readers.
The Technical Challenges: Why Text Is a Special Case
One of the most actively debated points in the technical discussions around the Code of Practice concerns the labelling of text generated by large language models (LLMs). Unlike images or videos — for which steganography and watermarking techniques are relatively mature — text presents specific challenges:
- Text can be copied, paraphrased, translated or partially rewritten, erasing any marking without technical effort.
- No text steganography technique has yet demonstrated sufficient robustness to meet regulatory requirements at scale.
- Machine-readable metadata works for structured file formats, but not for plain text copied and pasted into a third-party publication.
- Between generation (at the AI provider) and publication (at the media outlet), marking can be lost at several stages of the workflow.
These challenges explain why the Code of Practice places such emphasis on the multi-layer approach and on forensic capabilities for unmarked content: detection must function even when marking has been altered or removed.
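To see why statistical text watermarks are fragile, consider a toy version of the "green list" approach studied in the research literature: a keyed hash partitions the vocabulary into green and red tokens for each context, generation is biased toward green tokens, and detection measures how far the observed green fraction deviates from the 50% expected of unwatermarked text. Everything below is a simplified illustration, not any provider's actual scheme.

```python
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-random green/red partition keyed on the previous token.

    Roughly half of all tokens are 'green' for any given context, so
    unwatermarked text lands near a 0.5 green fraction.
    """
    digest = hashlib.sha256((prev_token + "|" + token).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list) -> float:
    """Fraction of consecutive token pairs that land in the green list."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def z_score(tokens: list) -> float:
    """Deviation of the green fraction from the 0.5 expected for
    unwatermarked text (binomial approximation)."""
    n = len(tokens) - 1
    return (green_fraction(tokens) - 0.5) * math.sqrt(n) / 0.5

tokens = "the quick brown fox jumps over the lazy dog".split()
assert 0.0 <= green_fraction(tokens) <= 1.0
```

The fragility follows directly from the construction: paraphrasing or translating replaces tokens, the hash-based partition reshuffles, and the green fraction drifts back toward 0.5 until the z-score is indistinguishable from unwatermarked text. That is exactly why the Code of Practice does not rely on any single marking layer for text.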
Your Compliance Roadmap: Where to Start
Whatever your position in the media or institutional ecosystem, three questions structure a meaningful approach to Article 50 compliance:
1. Which generative AI systems do you use? To produce which types of content? At which stage of your workflow?
2. Are those systems compliant with marking requirements? Can you verify the origin of the content you publish?
3. Which editorial procedures and technical tools do you need? Put them in place before August 2026.
The Code of Practice gives actors some flexibility in choosing technical solutions. But it sets a clear direction: synthetic content must be detectable, origins must be verifiable, and verification mechanisms must be accessible.
Conclusion: An Opportunity as Much as a Constraint
It would be reductive to approach Article 50 of the AI Act purely as a compliance obligation. For press organisations, media outlets and institutions whose primary asset is editorial quality and audience trust, AI transparency obligations also represent a genuine differentiation opportunity.
In an information landscape saturated with synthetic content, those who can demonstrate the authenticity and traceability of their productions will hold a real competitive advantage. Provenance will become a quality signal — much as journalistic certification labels function today.
That is the bet Provenance For Trust is making: that transparency will not remain merely a legal requirement, but will become a new editorial standard that lastingly restores trust in information.
Are you a media organisation, institution or press actor? Participate in our workshops and help build the transparency infrastructure of tomorrow.
Join Provenance For Trust →
Want to assess your Article 50 compliance trajectory?
Contact us →