AI Transparency Obligations Under Article 50
Understanding and implementing the transparency requirements for AI systems that interact with natural persons, generate synthetic content, or make emotion/biometric inferences.
8 min read · 4 sections
1. Overview of Article 50 Transparency Requirements
Article 50 of the EU AI Act imposes transparency obligations on providers and deployers of certain AI systems that interact with or affect natural persons. These obligations are distinct from, and apply in addition to, the high-risk obligations under Chapter III: a system can fall entirely outside Annex III and still trigger Article 50 because of the way it interacts with people.
Article 50 contains four main transparency obligations, covering:
- AI systems that interact directly with natural persons (chatbots, virtual assistants)
- AI systems that generate synthetic content (text, images, audio, video)
- AI systems that perform emotion recognition or biometric categorisation
- AI systems that generate or manipulate deepfake content
Tips
- Article 50 obligations are triggered by the nature of the AI system's interaction, not just its risk level — even minimal-risk systems can trigger transparency obligations
- Transparency notices must be provided 'in a clear and distinguishable manner' — vague or buried disclosures will not satisfy the requirement
- Keep records of when and how transparency notices were provided — these records are your evidence of compliance
Important
- Chatbots and virtual assistants must disclose their AI nature 'at the latest at the beginning of the interaction' — disclosing mid-conversation or only in terms and conditions is insufficient
- Emotion recognition systems are prohibited in workplaces and educational institutions except where intended for medical or safety reasons (Article 5(1)(f)) — check how this exception is interpreted in your jurisdiction before deployment
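The record-keeping tip above can be sketched as a minimal audit log of transparency notices. This is only an illustration: the schema, field names, and `record_disclosure` helper are assumptions for this sketch, not anything prescribed by the Act.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DisclosureRecord:
    """One transparency-notice event. All field names are illustrative."""
    system_id: str     # internal identifier of the AI system
    channel: str       # e.g. "web_chat", "voice", "email"
    notice_text: str   # the exact wording shown to the user
    shown_at: str      # ISO 8601 UTC timestamp

def record_disclosure(log: list, system_id: str, channel: str,
                      notice_text: str) -> DisclosureRecord:
    """Append a disclosure event to an in-memory log (swap in durable storage)."""
    rec = DisclosureRecord(
        system_id=system_id,
        channel=channel,
        notice_text=notice_text,
        shown_at=datetime.now(timezone.utc).isoformat(),
    )
    log.append(rec)
    return rec

log: list = []
rec = record_disclosure(log, "support-bot-v2", "web_chat",
                        "You are chatting with an AI assistant.")
print(json.dumps(asdict(rec), indent=2))
```

The point of logging the exact notice text and timestamp is that, if compliance is ever questioned, the record shows both when and how the disclosure was made.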
2. Chatbot and Virtual Assistant Disclosure
Providers of AI systems intended to interact directly with natural persons must ensure those systems are designed so that natural persons are informed they are interacting with an AI system, unless this is obvious from the context. This obligation applies at the beginning of the interaction — not buried in documentation or disclosed only if asked.
The obligation applies to all AI systems designed for human interaction: customer service chatbots, AI-powered virtual assistants, automated telephone response systems, and any other system that converses with users in natural language.
Tips
- Use clear, prominent disclosure language at the start of every interaction — 'You are chatting with an AI assistant' is more effective than technical jargon
- Consider how your disclosure language appears across different interface types: web chat, mobile app, email, voice — each may need a tailored approach
- Test your disclosure with representative users to ensure it is noticed and understood
- For voice interfaces, ensure the AI nature is clear in the initial greeting — not just in background documentation
Important
- Exception: the disclosure requirement does not apply where the AI nature is obvious from the point of view of a reasonably well-informed, observant and circumspect person, taking account of the circumstances and context of use. This exception is narrow; if there is any ambiguity, disclose
- Article 50 does not itself mandate a human fallback, but offering users a route to a human agent where one is reasonably expected — particularly in customer service — is good practice and may be required by sector-specific rules
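The "disclose at the beginning of the interaction" rule above can be enforced structurally, so no reply is ever sent before the notice. A minimal sketch, assuming a hypothetical `ChatSession` wrapper (the class, its message format, and the placeholder reply are all illustrative):

```python
# Guarantee the AI disclosure is the first message of every session,
# before any model-generated reply. All names here are illustrative.

DISCLOSURE = "You are chatting with an AI assistant."

class ChatSession:
    def __init__(self):
        self.messages = []       # (role, text) pairs, in order of sending
        self._disclosed = False

    def start(self):
        """Send the disclosure at the latest at the beginning of the interaction."""
        self.messages.append(("system_notice", DISCLOSURE))
        self._disclosed = True

    def reply(self, user_text: str) -> str:
        if not self._disclosed:  # defence in depth: never answer undisclosed
            self.start()
        self.messages.append(("user", user_text))
        answer = f"(AI) You said: {user_text}"  # stand-in for the real model call
        self.messages.append(("assistant", answer))
        return answer

session = ChatSession()
session.start()
session.reply("What are your opening hours?")
assert session.messages[0] == ("system_notice", DISCLOSURE)
```

Building the check into the session object, rather than relying on each frontend to remember it, means every interface type (web chat, app, voice transcript) inherits the same guarantee.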
3. Synthetic Content Labelling
Providers of AI systems that generate synthetic text, images, audio, or video content must ensure those outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. This applies to text generation systems, image generation models, audio synthesis, and video generation tools, including deepfake generation.
The Act points to technical solutions such as watermarks, metadata identifications, and cryptographic methods for proving provenance, which must be effective, interoperable, robust, and reliable as far as technically feasible. Harmonised technical standards for labelling AI-generated content are still being developed; until they are finalised, providers should implement available watermarking or metadata solutions and document their implementation approach.
Tips
- Implement both visible (on-screen) and machine-readable (metadata) indicators of AI-generated content — relying only on one is insufficient
- For text content, consider clear textual disclosures ('This text was generated by AI') in addition to technical metadata
- Monitor EU AI Office guidance on technical standards for synthetic content labelling — standards are evolving rapidly
- Train content creators and editors in your organisation to apply appropriate disclosure when using AI generation tools
Important
- Failure to label AI-generated content is particularly serious for politically sensitive material: Article 50(4) imposes disclosure obligations on deployers who publish AI-generated text to inform the public on matters of public interest, which includes election-related content
- Deepfake content depicting real persons without their consent creates multiple legal exposures: EU AI Act, GDPR, and potentially defamation and personality rights
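The machine-readable marking described above can be sketched with a stand-alone JSON manifest bound to the content by a hash. Note this is a toy illustration, not a compliant implementation: production systems would embed provenance via an established standard (such as C2PA content credentials or IPTC metadata) rather than the ad-hoc manifest fields assumed here.

```python
import hashlib
import json

def make_ai_content_label(content: bytes, generator: str) -> str:
    """Build a machine-readable label asserting the content is AI-generated.

    The manifest fields below are illustrative; the sha256 digest binds the
    label to this exact content, so any edit invalidates it.
    """
    manifest = {
        "ai_generated": True,
        "generator": generator,                        # illustrative field
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_label(content: bytes, label: str) -> bool:
    """Check the label marks the content as AI-generated and matches its hash."""
    m = json.loads(label)
    return (m.get("ai_generated") is True
            and m.get("sha256") == hashlib.sha256(content).hexdigest())

img = b"\x89PNG...synthetic image bytes"               # stand-in for real output
label = make_ai_content_label(img, "image-model-v3")
assert verify_label(img, label)            # label matches the content
assert not verify_label(b"edited", label)  # any edit breaks the binding
```

Hash-binding illustrates why metadata-only labels are fragile: the label survives only as long as it travels with, and matches, the exact bytes it describes, which is one reason robust watermarking is recommended alongside metadata.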
4. Emotion Recognition and Biometric Categorisation
AI systems that infer emotions or categorise people based on biometric data face some of the strictest transparency obligations under the Act. Deployers of such systems must inform the natural persons exposed to them of the operation of the system, and must process any personal data in accordance with the GDPR and other applicable Union data protection law.
Emotion recognition systems are prohibited in workplaces and educational institutions except where intended for medical or safety reasons (Article 5(1)(f)). Biometric categorisation systems that infer protected characteristics from biometric data, including race, political opinions, religious beliefs, or sexual orientation, are prohibited under Article 5(1)(g).
Tips
- If your system performs any emotional inference, even as a secondary feature, you must inform all persons exposed to the system
- Consent cannot be used to legitimise prohibited biometric categorisation under Article 5 — these prohibitions are absolute
- For systems that include emotion recognition as an optional feature, the disclosure obligation applies whether or not the feature is used
Important
- Using emotion recognition in job interviews, including AI tools that analyse video interviews for 'engagement' or 'enthusiasm', is prohibited in employment contexts except where intended for medical or safety reasons
- Systems that infer health conditions, political opinions, or other sensitive attributes from biometric data are likely prohibited outright — seek legal advice before development