How to Classify Your AI Systems Under the EU AI Act: A Practical Guide
CLASSIFICATION · 14 min read

The EU AI Act's four-tier risk classification system determines your compliance obligations. This guide walks through the classification process step by step with practical examples relevant to Irish businesses.

AIComply Team
3 March 2026

Key Takeaways

  1. The EU AI Act uses a four-tier risk pyramid: prohibited, high-risk (Annex III), limited-risk, and minimal-risk.
  2. Annex III lists 8 specific high-risk AI domains — if your system operates in any of these, significant obligations apply.
  3. Classification must consider both the system's technical design AND the specific deployment context.
  4. Deployers and providers can have different risk classifications for the same underlying AI system.
  5. Misclassification — in either direction — carries significant cost: either unnecessary overhead or regulatory fines.
  6. Classification is not a one-time exercise: material changes to your AI system or deployment context can change its risk tier.

Why Classification Is the Foundation of Everything

Every obligation your organisation faces under the EU AI Act — documentation requirements, human oversight protocols, conformity assessments, transparency notices — flows directly from your AI system's risk classification. Get the classification right, and you know exactly what to do. Get it wrong, and you face either unnecessary compliance cost or enforcement action. The four tiers at a glance, with a minimal code sketch after the list:

  • Prohibited AI: Banned outright — social scoring by public authorities, real-time remote biometric identification in public spaces, manipulation of vulnerable groups
  • High-Risk AI (Annex III): Significant compliance obligations — 8 defined domains including HR, credit, education, law enforcement
  • Limited-Risk AI: Transparency obligations only — must disclose AI interaction to users
  • Minimal-Risk AI: No specific obligations — spam filters, recommendation engines in low-stakes contexts
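To make the tier-to-obligation mapping concrete, here is a minimal Python sketch. The RiskTier enum and the OBLIGATIONS table are illustrative names of our own, not terminology from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Article 5 practices: may not be deployed
    HIGH_RISK = "high_risk"        # Annex III domains: full compliance regime
    LIMITED_RISK = "limited_risk"  # transparency obligations only
    MINIMAL_RISK = "minimal_risk"  # no specific obligations

# Headline obligations per tier (summary for triage, not legal text)
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.PROHIBITED:   ["do not deploy"],
    RiskTier.HIGH_RISK:    ["risk management", "data governance",
                            "technical documentation", "human oversight",
                            "conformity assessment"],
    RiskTier.LIMITED_RISK: ["disclose AI interaction to users"],
    RiskTier.MINIMAL_RISK: [],
}
```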

The Annex III High-Risk Domains

Annex III is the most practically important part of the EU AI Act for the majority of Irish businesses. It lists 8 specific domains where AI systems are automatically classified as high-risk, triggering the Act's full compliance requirements.

  • Biometrics: Remote identification, emotion recognition, biometric categorisation — extensive restrictions for most applications
  • Critical Infrastructure: AI used in management of digital, water, gas, heating, or electricity infrastructure
  • Education and Vocational Training: Systems that determine access to educational institutions, evaluate learning outcomes, or assess students
  • Employment and HR: CV screening, interview analysis, candidate scoring, employee monitoring, promotion decisions
  • Essential Services: Credit scoring and creditworthiness assessment, risk assessment and pricing for life and health insurance, emergency call evaluation and dispatch
  • Law Enforcement: Crime risk assessment, polygraph testing, evidence reliability evaluation, predictive policing
  • Migration, Asylum and Border Control: Risk assessment of visa or asylum applicants, document authenticity verification
  • Justice and Democratic Processes: Legal reasoning assistance, election-related AI systems

Classification as high-risk does not mean your AI is prohibited. It means you must meet the Act's technical and governance requirements — risk management, data governance, transparency, human oversight, and conformity assessment — before deployment.
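If you are building an internal screening checklist, the eight domains are straightforward to encode. A minimal sketch, using our own hypothetical AnnexIIIDomain naming rather than any official taxonomy:

```python
from enum import Enum, auto

class AnnexIIIDomain(Enum):
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT_HR = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER = auto()
    JUSTICE_DEMOCRACY = auto()

# Illustrative trigger examples per domain, drawn from the list above
EXAMPLE_TRIGGERS = {
    AnnexIIIDomain.EMPLOYMENT_HR: ["CV screening", "candidate scoring",
                                   "employee monitoring"],
    AnnexIIIDomain.ESSENTIAL_SERVICES: ["credit scoring",
                                        "life/health insurance pricing"],
    AnnexIIIDomain.EDUCATION: ["admissions decisions",
                               "learning outcome evaluation"],
}
```

Matching a use case against any entry should route the system into a full Annex III assessment; it does not settle the classification by itself.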

The Step-by-Step Classification Process

Classification should be conducted as a structured decision process, not a gut-check. Here is the recommended seven-step sequence (a code sketch follows the list):

  • Step 1 — Is it an AI system? Apply the EU AI Act definition (Article 3): a machine-based system operating with varying levels of autonomy that infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
  • Step 2 — Is it prohibited? Check whether the system matches any Article 5 prohibited practices. If yes, stop deployment immediately
  • Step 3 — Is it a GPAI model? General-purpose AI models have separate obligations under Chapter V — assess these independently of the system-level classification
  • Step 4 — Does it operate in an Annex III domain? Map the system's actual use case against all 8 Annex III categories, considering deployment context not just technology
  • Step 5 — Apply the context exception (Article 6(3)): A system that technically falls within an Annex III domain may escape high-risk status if it does not pose a significant risk to health, safety, or fundamental rights
  • Step 6 — Assess limited-risk transparency triggers: Does the system interact with humans who might not know they are dealing with AI? Does it generate synthetic content?
  • Step 7 — Document your classification rationale with article references for every determination
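Expressed as code, the sequence might look like the sketch below. The SystemFacts structure and classify function are hypothetical names of our own; each boolean stands in for a legal determination that still needs expert sign-off:

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    """Answers from steps 1-6, each backed by a documented rationale."""
    is_ai_system: bool            # Step 1: meets the Article 3 definition
    is_prohibited: bool           # Step 2: matches an Article 5 practice
    is_gpai_model: bool           # Step 3: general-purpose AI model
    in_annex_iii_domain: bool     # Step 4: use case maps to Annex III
    poses_significant_risk: bool  # Step 5: risk to health/safety/rights
    transparency_trigger: bool    # Step 6: human interaction or synthetic content

def classify(facts: SystemFacts) -> str:
    if not facts.is_ai_system:
        return "out_of_scope"
    if facts.is_prohibited:
        return "prohibited"  # Step 2: stop deployment immediately
    # Step 3: GPAI obligations (Chapter V) run in parallel and do not
    # replace system-level tiering, so is_gpai_model does not short-circuit.
    if facts.in_annex_iii_domain and facts.poses_significant_risk:
        return "high_risk"   # Steps 4-5: Annex III domain, no exception
    if facts.transparency_trigger:
        return "limited_risk"  # Step 6
    return "minimal_risk"

# Example: a third-party CV screening tool used in hiring
cv_screener = SystemFacts(True, False, False, True, True, False)
assert classify(cv_screener) == "high_risk"
```

Step 7, the documented rationale, is deliberately absent from the function: it belongs in your AI register, discussed below.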

Common Misclassification Mistakes

Under-classifying HR AI Tools

Many Irish companies use AI tools for CV screening, interview scheduling, or candidate scoring provided by third-party SaaS vendors. A common mistake is assuming the vendor's compliance covers the deployer's obligations. It does not. As a deployer of a high-risk AI system in the employment domain, you retain significant obligations under Article 26 — including assigning human oversight to competent staff, ensuring input data is relevant to the system's intended purpose, monitoring its operation, and retaining its automatically generated logs. Public-sector deployers must also register their use in the EU database.

Over-classifying Recommendation Engines

Product recommendation systems, content personalisation engines, and pricing optimisation tools are frequently classified as high-risk when they are not. The key question is: does the system make or significantly influence a decision that affects a person's access to essential services, education, employment, or justice? A content recommendation system does not — it is minimal risk. A system pricing life or health insurance based on behavioural data does — it is high-risk.

Ignoring the Deployer/Provider Distinction

The EU AI Act distinguishes between providers (who develop and place AI systems on the market) and deployers (who use AI systems in a professional context). The same system has different — though overlapping — obligations for each party. Deployers often underestimate their obligations, assuming the provider's CE marking covers everything. It does not cover your human oversight programme, your employee training obligations, or your duty to suspend the system if it risks causing harm.

Maintaining Your Classification Over Time

Classification is not a static exercise. The EU AI Act explicitly requires that material changes to an AI system — changes to its intended purpose, training data, algorithmic architecture, or deployment context — trigger re-classification.

  • Adding a new use case to an existing AI system can elevate its risk classification
  • Changes to Annex III through delegated acts may bring previously limited-risk systems into high-risk scope
  • Using an AI system in a new organisational context can change its classification even if the technology is unchanged
  • Quarterly classification review is recommended for any system operating in or adjacent to Annex III domains

Building a living AI register — a continuously maintained inventory of all AI systems with their current risk classifications, rationales, and material change log — is the most practical way to manage this obligation. It also forms the core of your Annex IV technical documentation.
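As a sketch of what a register entry might look like in code (the class and field names below are our own assumptions, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MaterialChange:
    when: date
    description: str   # e.g. "retrained on new applicant data"
    tier_moved: bool   # did the change alter the risk tier?

@dataclass
class RegisterEntry:
    system_name: str
    risk_tier: str     # "prohibited" / "high_risk" / "limited_risk" / "minimal_risk"
    rationale: str     # classification reasoning with article references
    annex_iii_domain: str | None = None
    change_log: list[MaterialChange] = field(default_factory=list)

    def record_change(self, when: date, description: str, new_tier: str) -> None:
        """Log a material change and re-classify if the tier moved."""
        self.change_log.append(
            MaterialChange(when, description, new_tier != self.risk_tier))
        self.risk_tier = new_tier
```

A quarterly review then reduces to walking the register and re-running the classification for any entry with recent changes.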

Frequently Asked Questions

Does every AI tool my company uses need to be classified?

Technically yes, though the practical focus should be on systems that interact with Annex III domains or have significant consequences for individuals. A spell-checker or spam filter is minimal risk with no specific obligations. A system that scores job candidates or determines credit eligibility is high-risk with extensive obligations.

We use a third-party AI system — are we the provider or deployer?

If you are using an AI system built by a third party without significant modification, you are a deployer. If you have substantially modified the system — retrained it on your data, changed its intended purpose — you may have taken on provider responsibilities. Deployers have obligations under Article 26 that are distinct from, but complementary to, provider obligations.

What is a General Purpose AI (GPAI) model and how is it classified?

GPAI models — including foundation models such as large language models — are trained on large volumes of data and can perform a wide range of tasks. They receive separate regulatory treatment under Chapter V. Most companies using GPAI via APIs are deployers of a third-party system, not GPAI providers.

How does EU AI Act classification interact with existing sector regulations?

The EU AI Act complements existing sector regulation; it does not replace it. Sectoral frameworks for medical devices (MDR/IVDR), aviation (EASA), and financial services (such as DORA) continue to apply, and the EU AI Act adds a further layer where those regimes do not address AI-specific risks.

Ready to Start Your Compliance Journey?

AIComply simplifies EU AI Act compliance for SMEs with intelligent tools.

Get Started Free