EU AI Act: Key Terms and Definitions
A practical reference guide to the most important terms and definitions in the EU AI Act, with plain-language explanations and compliance implications.
1. Core Definitions
AI System (Article 3(1)): A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This definition is intentionally broad. It covers traditional machine learning models, deep learning systems, rule-based systems that adapt over time, and hybrid systems. It does not cover simple automation, lookup tables, or deterministic rule systems that cannot be considered to 'infer' outputs.
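To make the definition's elements easier to audit, it can help to model them as an explicit checklist. The sketch below is an illustrative Python scaffold, not a legal test — the field names and the `is_ai_system` helper are our own shorthand for the Article 3(1) elements.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative (unofficial) checklist of the Article 3(1) elements."""
    machine_based: bool            # runs as hardware/software, not a purely human process
    operates_with_autonomy: bool   # some degree of independence from human control
    infers_outputs: bool           # derives outputs from inputs rather than replaying fixed rules
    generates_outputs: bool        # predictions, content, recommendations, or decisions
    influences_environment: bool   # outputs affect physical or virtual environments

def is_ai_system(p: SystemProfile) -> bool:
    # Adaptiveness after deployment is optional in the definition ("may exhibit"),
    # so it is deliberately not a required field here.
    return all([p.machine_based, p.operates_with_autonomy,
                p.infers_outputs, p.generates_outputs, p.influences_environment])

# A deterministic lookup table fails the 'infers_outputs' test:
lookup_table = SystemProfile(True, False, False, True, True)
print(is_ai_system(lookup_table))  # False -> likely out of scope
```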
Provider (Article 3(3)): A natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
Deployer (Article 3(4)): A natural or legal person, public authority, agency, or other body that uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
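Because an organisation can hold both roles at once, writing the mapping down as explicit logic can help avoid the misclassification error flagged below. The following sketch is a hypothetical helper, not statutory language — the flag names are our own simplification of Articles 3(3) and 3(4).

```python
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    OUT_OF_SCOPE = "out of scope (personal, non-professional use)"

def classify_role(develops_or_brands_system: bool,
                  uses_under_own_authority: bool,
                  personal_non_professional: bool) -> set[Role]:
    """Simplified mapping of Article 3(3)/(4) onto boolean flags.
    An organisation can hold both roles at once, e.g. building and
    using an internal tool under its own name."""
    roles: set[Role] = set()
    if develops_or_brands_system:
        roles.add(Role.PROVIDER)
    if uses_under_own_authority:
        roles.add(Role.OUT_OF_SCOPE if personal_non_professional else Role.DEPLOYER)
    return roles

# Internal tool, built and used in-house under the company's own name:
print(classify_role(True, True, False))  # both Role.PROVIDER and Role.DEPLOYER
```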
Tips
- The AI system definition's requirement to 'infer' outputs distinguishes AI from simple automation — but the line is genuinely blurry for some systems. When in doubt, treat the system as an AI system
- The provider definition includes organisations that deploy AI under their own brand, even for internal use — you do not need external customers to be a provider
- The deployer definition explicitly excludes personal, non-professional use — consumers using AI apps for personal purposes are not deployers
Important
- Misclassifying your organisation as a 'user' rather than a 'provider' or 'deployer' is a common compliance error with significant consequences for your obligation mapping
2. Risk Categories
Unacceptable Risk / Prohibited Practices (Article 5): AI systems whose use is prohibited outright in the EU. These include: AI systems that use subliminal techniques or exploit vulnerabilities to distort behaviour in ways that cause harm; social scoring systems that lead to detrimental or unfavourable treatment (whether operated by public or private actors); real-time remote biometric identification systems in publicly accessible spaces for law enforcement (with narrow exceptions); emotion recognition systems in workplaces and educational institutions (except for medical or safety reasons); and biometric categorisation systems that infer sensitive attributes.
High-Risk AI System (Article 6 and Annex III): AI systems that pose significant risks to health, safety, or fundamental rights. These fall into two categories: AI systems that are safety components of products (or are themselves products) covered by the EU harmonisation legislation listed in Annex I; and AI systems used in the specific domains listed in Annex III.
Limited-Risk AI System: AI systems subject to specific transparency obligations under Article 50, including chatbots, synthetic content generators, and emotion recognition systems.
Minimal Risk AI System: All remaining AI systems. The Act imposes no specific obligations on them, though it encourages the voluntary adoption of codes of conduct.
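The four tiers suggest a natural triage order: check the prohibitions first, then high-risk classification, then transparency duties, with minimal risk as the default. The sketch below illustrates that ordering only — the boolean inputs are our own simplification, and real classification requires legal analysis of the intended purpose and deployment context.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk (Article 5)"
    HIGH = "high-risk (Article 6 / Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

def classify(prohibited_practice: bool,
             safety_component_or_annex_iii: bool,
             transparency_use_case: bool) -> RiskTier:
    """Illustrative triage order only. Note the tiers are not mutually
    exclusive in practice: a high-risk system can also carry Article 50
    transparency duties, which this simple function does not capture."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if safety_component_or_annex_iii:
        return RiskTier.HIGH
    if transparency_use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, False).value)  # high-risk (Article 6 / Annex III)
```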
Tips
- The prohibited practices list is exhaustive — if a practice is not on the Article 5 list, it is not outright prohibited under the EU AI Act (though other laws may apply)
- High-risk classification is based on intended purpose AND deployment context — the same technology can be high-risk or minimal risk depending on how it is used
Important
- Emotion recognition in workplaces is prohibited except for medical or safety reasons — this includes AI video interview analysis tools that assess candidate 'engagement', 'enthusiasm', or emotional state
3. Technical Terms
Conformity Assessment (Article 43): The process by which a provider demonstrates that a high-risk AI system complies with applicable requirements. For most systems, this is a self-assessment. For certain systems (biometric identification, AI in safety-regulated products), third-party notified body assessment is required.
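The routing just described — self-assessment for most systems, notified-body assessment for certain categories — can be expressed as a simple decision function. This is an illustrative simplification of Article 43: the flag names are ours, and edge cases (such as the role of harmonised standards in the biometrics route) are deliberately omitted.

```python
def conformity_route(is_high_risk: bool,
                     biometric_identification: bool,
                     safety_component_of_regulated_product: bool) -> str:
    """Rough sketch of the Article 43 routing described above;
    the flag names are our own simplification, not statutory terms."""
    if not is_high_risk:
        return "no conformity assessment required under Article 43"
    if biometric_identification or safety_component_of_regulated_product:
        return "third-party assessment by a notified body"
    return "internal control (provider self-assessment)"

print(conformity_route(True, False, False))  # internal control (provider self-assessment)
```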
CE Marking: The marking affixed to a high-risk AI system after successful conformity assessment, indicating conformity with EU requirements. Where a product is also covered by other CE-marking legislation, the single CE marking attests to conformity with all applicable requirements.
Technical Documentation (Annex IV): The comprehensive technical file that providers must create and maintain for all high-risk AI systems. It must be available to supervisory authorities on request and must be kept for at least 10 years after the AI system is placed on the market or put into service.
Post-Market Monitoring (Article 72): The proactive process by which providers collect and review data from deployed AI systems to identify emerging risks and compliance issues. It is an ongoing obligation, not a one-time assessment.
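Both of these obligations are easy to operationalise as dates in a compliance calendar. The sketch below is illustrative only: the 10-year retention figure comes from the Act, but the quarterly review cadence is an assumed planning default, not a statutory requirement.

```python
from datetime import date

DOC_RETENTION_YEARS = 10  # Annex IV file retention after market placement

def retention_deadline(placed_on_market: date) -> date:
    """Earliest date the technical file may be discarded (simplified:
    ignores leap-day edge cases and later re-placements)."""
    return placed_on_market.replace(year=placed_on_market.year + DOC_RETENTION_YEARS)

def next_monitoring_review(last_review: date, cadence_days: int = 90) -> date:
    """Post-market monitoring is continuous; a 90-day review cadence is
    our illustrative choice, not a figure from the Act."""
    return date.fromordinal(last_review.toordinal() + cadence_days)

print(retention_deadline(date(2026, 8, 2)))      # 2036-08-02
print(next_monitoring_review(date(2026, 8, 2)))  # 2026-10-31
```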
Systemic Risk (Article 3(65)): For GPAI models, a risk arising due to high-impact capabilities (as indicated by training compute above 10^25 FLOPs or as designated by the EU AI Office) that could significantly impact the internal market or pose threats to public health, safety, public security, or fundamental rights.
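The compute presumption reduces to a single comparison, with the qualitative designation route acting as an override. The helper below is a simplification of that logic — in particular, it ignores a provider's ability to contest the presumption with counter-arguments.

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # training-compute presumption in the Act

def presumed_systemic_risk(training_flops: float,
                           designated_by_ai_office: bool = False) -> bool:
    """Presumption only: the EU AI Office can designate models below the
    threshold on qualitative grounds (simplified here)."""
    return designated_by_ai_office or training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3.2e25))      # True: above the 10^25 FLOP threshold
print(presumed_systemic_risk(8e24, True))  # True: designated despite lower compute
```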
Tips
- CE marking under the EU AI Act does not exempt you from other product safety legislation — the single CE marking on a product must attest to conformity with every regulation that applies to it
- Post-market monitoring is mandatory for high-risk AI providers — schedule it into your operational calendar and resource plan from day one
Important
- Systemic risk designation by the EU AI Office can be made based on qualitative factors, not just compute thresholds — a widely deployed model with significant social influence may be designated systemic risk even below the FLOP threshold
4. Roles and Obligations
Authorised Representative (Article 22): A natural or legal person established in the EU mandated by a non-EU provider to act on their behalf regarding EU AI Act compliance. Non-EU providers placing AI systems on the EU market must designate an authorised representative before market placement.
Notified Body (Article 3(22)): A conformity assessment body designated by a member state and notified to the European Commission to carry out third-party conformity assessments. Notified bodies must be accredited and technically competent for the specific AI domains they assess.
Market Surveillance Authority (Article 3(26)): The national authority responsible for market surveillance activities — monitoring AI systems placed on the market to ensure compliance. In Ireland, DRAI (Digital, Research and Artificial Intelligence Authority) is the market surveillance authority for general-purpose AI systems.
Fundamental Rights Impact Assessment (Article 27): A structured assessment conducted by deployers of high-risk AI systems to evaluate the potential impact on fundamental rights. Mandatory for bodies governed by public law, private entities providing public services, and deployers of certain Annex III systems (such as creditworthiness assessment); recommended as best practice for others.
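A deployer can sanity-check whether the FRIA duty is likely to apply with a simple trigger function. This sketch mirrors the simplified scope described above; the exact Article 27 scope, including the specific Annex III use cases it covers, needs legal review.

```python
def fria_required(deployer_is_public_body: bool,
                  provides_public_services: bool,
                  system_is_high_risk: bool) -> bool:
    """Simplified reading of the Article 27 trigger described above;
    Annex III-specific triggers (e.g. credit scoring) are omitted."""
    return system_is_high_risk and (deployer_is_public_body or provides_public_services)

print(fria_required(False, True, True))   # True: private body providing public services
print(fria_required(False, False, True))  # False: FRIA recommended but not mandatory
```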
Tips
- Non-EU companies placing AI on the EU market must designate an authorised representative before market placement — this is not optional and cannot be done retroactively
- Market surveillance authorities have significant powers including requesting documentation, conducting audits, and ordering market withdrawal — build your compliance posture assuming you may be audited at any time
Important
- Authorised representatives are not a compliance shield — they facilitate regulatory communication but do not reduce the non-EU provider's compliance obligations