Why Classification Is the Foundation of Everything
Every obligation your organisation faces under the EU AI Act — documentation requirements, human oversight protocols, conformity assessments, transparency notices — flows directly from your AI system's risk classification. Get the classification right, and you know exactly what to do. Get it wrong, and you face either unnecessary compliance cost or enforcement action.
- Prohibited AI: Banned outright — social scoring by public authorities, real-time remote biometric identification in public spaces (subject to narrow law-enforcement exceptions), manipulation of vulnerable groups
- High-Risk AI (Annex III): Significant compliance obligations — 8 defined domains including HR, credit, education, law enforcement
- Limited-Risk AI: Transparency obligations only — must disclose AI interaction to users
- Minimal-Risk AI: No specific obligations — spam filters, recommendation engines in low-stakes contexts
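These four tiers can be captured as a simple lookup table. This is an illustrative sketch only — the tier names and obligation summaries are shorthand I have chosen here, not statutory text:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping from each tier to its headline obligation.
# The summaries paraphrase the Act; they are not legal definitions.
TIER_OBLIGATIONS = {
    RiskTier.PROHIBITED: "deployment banned (Article 5)",
    RiskTier.HIGH: "full compliance: risk management, data governance, "
                   "transparency, human oversight, conformity assessment",
    RiskTier.LIMITED: "transparency notices to affected users",
    RiskTier.MINIMAL: "no specific obligations",
}
```

A lookup like this is most useful as the output type of a classification workflow, so every system in your inventory carries exactly one tier.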
The Annex III High-Risk Domains
Annex III is the most practically important part of the EU AI Act for the majority of Irish businesses. It lists 8 specific domains where AI systems are automatically classified as high-risk, triggering the Act's full compliance requirements.
- Biometrics: Remote identification, emotion recognition, biometric categorisation — extensive restrictions for most applications
- Critical Infrastructure: AI used in management of digital, water, gas, heating, or electricity infrastructure
- Education and Vocational Training: Systems that determine access to educational institutions, evaluate learning outcomes, or assess students
- Employment and HR: CV screening, interview analysis, candidate scoring, employee monitoring, promotion decisions
- Essential Services: Credit scoring, insurance risk assessment, life or health insurance underwriting, emergency dispatch
- Law Enforcement: Crime risk assessment, polygraph testing, evidence reliability evaluation, predictive policing
- Migration, Asylum and Border Control: Risk assessment of visa or asylum applicants, document authenticity verification
- Justice and Democratic Processes: Legal reasoning assistance, election-related AI systems
Classification as high-risk does not mean your AI is prohibited. It means you must meet the Act's technical and governance requirements — risk management, data governance, transparency, human oversight, and conformity assessment — before deployment.
The Step-by-Step Classification Process
Classification should be conducted as a structured decision process, not a gut-check. Here is the recommended seven-step sequence:
- Step 1 — Is it an AI system? Apply the EU AI Act definition: a machine-based system designed to operate with varying levels of autonomy that infers from its inputs how to generate outputs — predictions, content, recommendations, or decisions — influencing physical or virtual environments
- Step 2 — Is it prohibited? Check whether the system matches any Article 5 prohibited practices. If yes, stop deployment immediately
- Step 3 — Is it a GPAI model? General-purpose AI models have separate obligations under Chapter V of the Act — assess independently
- Step 4 — Does it operate in an Annex III domain? Map the system's actual use case against all 8 Annex III categories, considering deployment context not just technology
- Step 5 — Apply the context exception: A system operating in an Annex III domain escapes high-risk classification under Article 6(3) if it does not pose a significant risk of harm to health, safety, or fundamental rights
- Step 6 — Assess limited-risk transparency triggers: Does the system interact with humans who might not know they are dealing with AI? Does it generate synthetic content?
- Step 7 — Document your classification rationale with article references for every determination
Common Misclassification Mistakes
Under-classifying HR AI Tools
Many Irish companies use AI tools for CV screening, interview scheduling, or candidate scoring provided by third-party SaaS vendors. A common mistake is assuming the vendor's compliance covers the deployer's obligations. It does not. As a deployer of a high-risk AI system in the employment domain, you retain significant obligations under Article 26 — including assigning human oversight, using the system in accordance with its instructions for use, keeping the logs it generates, and informing affected workers. Registration in the EU database is primarily the provider's duty, though it also falls on deployers that are public bodies.
Over-classifying Recommendation Engines
Product recommendation systems, content personalisation engines, and pricing optimisation tools are frequently classified as high-risk when they are not. The key question is: does the system make or significantly influence a decision that affects a person's access to essential services, education, employment, or justice? A content recommendation system does not — it is minimal risk. A system adjusting insurance premiums based on behavioural data is high-risk.
Ignoring the Deployer/Provider Distinction
The EU AI Act distinguishes between providers (who develop and place AI systems on the market) and deployers (who use AI systems in a professional context). The same system has different — though overlapping — obligations for each party. Deployers often underestimate their obligations, assuming the provider's CE marking covers everything. It does not cover your human oversight programme, your employee training obligations, or your duty to suspend the system if it risks causing harm.
Maintaining Your Classification Over Time
Classification is not a static exercise. Under the EU AI Act, a substantial modification to an AI system — a change to its intended purpose, training data, algorithmic architecture, or deployment context — triggers re-classification.
- Adding a new use case to an existing AI system can elevate its risk classification
- Changes to Annex III through delegated acts may bring previously limited-risk systems into high-risk scope
- Using an AI system in a new organisational context can change its classification even if the technology is unchanged
- Quarterly classification review is recommended for any system operating in or adjacent to Annex III domains
Building a living AI register — a continuously maintained inventory of all AI systems with their current risk classifications, rationales, and material change log — is the most practical way to manage this obligation. It also forms the core of your Annex IV technical documentation.
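One way to structure a register entry is a record that pairs the current classification and rationale with a material-change log. This is a minimal sketch under assumed names — the fields and the `record_material_change` helper are invented for illustration, not drawn from the Act:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegisterEntry:
    name: str
    risk_tier: str            # current classification
    rationale: str            # classification rationale with article references
    last_reviewed: date
    change_log: list[str] = field(default_factory=list)

    def record_material_change(self, description: str, review_date: date) -> None:
        # Log the change and reset the review date, signalling that
        # the classification must be re-assessed from Step 1.
        self.change_log.append(f"{review_date.isoformat()}: {description}")
        self.last_reviewed = review_date
```

Running the quarterly review then reduces to iterating over the register and flagging any entry whose `last_reviewed` date is stale or whose change log has grown since the last assessment.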