Deployer Compliance Checklist
Complete checklist for deployers of high-risk AI systems under Article 26. Know your obligations before, during, and after deployment.
10 min read · 6 sections
1. Understanding Your Role as a Deployer
Under the EU AI Act, a 'deployer' is any natural or legal person, public authority, agency, or other body that uses an AI system under its authority except where the AI system is used in the course of a personal non-professional activity. If your organisation uses an AI system built by a third party in any professional context — HR screening tools, credit scoring systems, customer service AI, document processing — you are a deployer and have specific obligations under Article 26.
Deployers are distinct from providers. Providers develop and place AI systems on the EU market. Deployers use them. The distinction matters because your compliance obligations — while significant — are different from those of the provider, and you cannot assume that the provider's CE marking or conformity assessment covers your deployment context.
Tips
- Create a register of all AI tools your organisation uses professionally — include SaaS tools, embedded AI features, and internally deployed models
- For each tool, determine whether your use case matches the provider's intended purpose. Using an AI system outside its intended purpose elevates your obligations
- Request and review the provider's instructions for use, technical documentation, and conformity declaration before deployment
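The register suggested above can start as one structured record per tool. A minimal sketch in Python — all field names and the hypothetical "CVScreen" entry are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI system register (illustrative fields)."""
    name: str
    provider: str
    provider_intended_purpose: str
    our_use_case: str
    high_risk: bool
    instructions_reviewed: bool = False
    conformity_declaration_on_file: bool = False

    def purpose_gap(self) -> bool:
        """Flag entries where our use case differs from the provider's stated
        intended purpose and therefore needs a documented assessment."""
        return (self.our_use_case.strip().lower()
                != self.provider_intended_purpose.strip().lower())

# Hypothetical entry for an HR screening tool
register = [
    AISystemRecord(
        name="CVScreen",
        provider="ExampleVendor",
        provider_intended_purpose="candidate pre-screening",
        our_use_case="candidate pre-screening",
        high_risk=True,
        instructions_reviewed=True,
    )
]
needs_review = [r.name for r in register
                if r.purpose_gap() or not r.instructions_reviewed]
```

A nightly check over such a register surfaces tools whose use has drifted from the provider's intended purpose before that drift becomes a compliance gap.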
Important
- You cannot delegate your Article 26 obligations to your AI provider — even if the provider is EU AI Act compliant, your deployment context creates distinct obligations
- Deploying a high-risk AI system that the provider has not placed on the EU market constitutes acting as a provider, triggering full provider obligations
2. Pre-Deployment Obligations
Before deploying any high-risk AI system, deployers must complete a structured pre-deployment review. This is not a one-time tick-box exercise — it is a documented assessment that must be retained and updated.
Under Article 26(2), deployers must ensure that the intended purpose of the AI system matches their specific use case. A recruitment AI certified for candidate pre-screening is not automatically certified for performance review or promotion decisions. Using it for an uncertified purpose makes you responsible for that extended use.
Tips
- Document the specific use case your organisation will deploy the AI system for — compare it explicitly against the provider's intended purpose
- Review the provider's instructions for use and confirm your IT environment meets technical requirements
- Complete a fundamental rights impact assessment (FRIA) for high-risk systems in sensitive domains before deployment
- Ensure your organisation has a named human oversight officer before go-live
Important
- Deploying an AI system for a purpose not covered by the provider's intended use — or in an environment that does not meet the provider's technical specifications — makes you responsible as if you were a provider
- FRIA is mandatory for public authorities and for private organisations deploying AI in certain sensitive domains (Article 27)
3. Human Oversight Implementation
Article 26(2) requires deployers to assign human oversight of a high-risk AI system to natural persons with the necessary competence, training, and authority. This is one of the most operationally significant obligations for deployers, and one of the most frequently underestimated.
Human oversight does not mean a person watches every AI decision in real time. It means you have designed your deployment so that a qualified person can understand the AI system's outputs, identify when those outputs may be incorrect or biased, and intervene or override the system when necessary. The oversight framework must be appropriate for the risk level and operational context of the system.
Tips
- Assign a named human oversight officer with documented authority to suspend the AI system
- Define clear criteria for when a human must review an AI decision before it is actioned
- Train oversight personnel specifically on the limitations and failure modes of your AI system
- Document all human override decisions — these records are key evidence in the event of an audit
- Establish escalation procedures for edge cases and contested AI decisions
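One way to make the review criteria and override trail concrete is to encode them at the integration layer. The threshold and field names below are illustrative assumptions, not requirements from the Act:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative threshold below which a human must review before actioning
CONFIDENCE_FLOOR = 0.85

def needs_human_review(model_confidence: float, decision_is_adverse: bool) -> bool:
    """Route adverse or low-confidence decisions to a human reviewer."""
    return decision_is_adverse or model_confidence < CONFIDENCE_FLOOR

@dataclass
class OverrideRecord:
    """Audit-trail entry for a documented human override of an AI output."""
    timestamp: str
    reviewer_id: str
    ai_output: str
    human_decision: str
    rationale: str

record = OverrideRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    reviewer_id="oversight-officer-1",
    ai_output="reject",
    human_decision="accept",
    rationale="Model flagged an employment gap; candidate documentation explains it.",
)
```

Storing every `OverrideRecord` alongside the system's own logs gives you the evidence that oversight was substantive, not rubber-stamping.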
Important
- Rubber-stamping AI decisions without genuine review does not constitute human oversight — regulators can assess whether your oversight is substantive
- Article 26(5) requires that if deployers become aware that using the AI system poses a risk, they must immediately notify the provider and suspend the system if necessary
4. Data and Input Governance
Deployers are responsible for the quality of data they provide as inputs to AI systems. Under Article 26(4), deployers must — to the extent they exercise control over the input data — ensure that it is relevant and sufficiently representative for the intended purpose of the system and is prepared in accordance with the provider's instructions.
This is particularly important for AI systems whose outputs depend heavily on the quality and representativeness of the data they process. An AI system that performs well on nationally representative data may perform poorly — and introduce bias — when given data from a specific regional or demographic subset.
Tips
- Audit your input data sources before deployment — assess representativeness, completeness, and bias
- Document your data preparation procedures and keep records of data sources used
- Implement ongoing data quality monitoring as part of your post-market monitoring plan
- If the AI system processes special category data under GDPR, ensure your legal basis and safeguards are in place
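A first-pass representativeness audit can be as simple as comparing category shares in your input data against a reference distribution, such as the provider's stated training mix. The categories and tolerance below are illustrative:

```python
from collections import Counter

def share_drift(inputs: list[str], reference: dict[str, float],
                tolerance: float = 0.10) -> dict[str, float]:
    """Return the observed share of each category whose share in `inputs`
    deviates from the reference distribution by more than `tolerance`."""
    total = len(inputs)
    shares = {k: v / total for k, v in Counter(inputs).items()}
    drifted = {}
    for category, expected in reference.items():
        observed = shares.get(category, 0.0)
        if abs(observed - expected) > tolerance:
            drifted[category] = observed
    return drifted

# Illustrative: regional mix of an incoming batch vs. an assumed 50/50 training mix
reference = {"region_a": 0.5, "region_b": 0.5}
batch = ["region_a"] * 9 + ["region_b"]  # a 90%/10% split
drifted = share_drift(batch, reference)
```

A drift report like this does not prove bias on its own, but it flags batches that fall outside the data context the provider validated, which is exactly when performance and fairness degrade.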
Important
- Providing input data that does not match the provider's specifications — for example, data from a different geographic or demographic context than the training data — can introduce bias and may be a compliance violation
- GDPR and the EU AI Act interact: data used in high-risk AI systems must comply with both frameworks simultaneously
5. Logging, Record-Keeping and Post-Market Monitoring
Article 26(6) requires deployers to retain the logs automatically generated by high-risk AI systems, to the extent those logs are under their control, for a period appropriate to the system's intended purpose — at least six months, unless applicable Union or national law requires a longer retention period.
These logs are the evidential backbone of your compliance posture. If an incident occurs, or if a supervisory authority requests access to records, your log retention and organisation will determine whether you can demonstrate compliant operation or face an adverse finding.
Tips
- Configure your AI system or its integration layer to automatically capture and store all decision logs
- Tag log entries with timestamps, user IDs, input data references, and output records
- Establish a log review schedule — review samples regularly to identify anomalies before they become compliance events
- Store logs in a format that remains accessible and readable for your full retention period — at least six months, and longer where sector-specific rules or legal holds require it
- Include log summaries in your quarterly compliance reports
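At the integration layer, each decision can be captured as an append-only JSON Lines record. The fields mirror the tagging tips above; the function and field names are illustrative, not part of any mandated schema:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def log_decision(path: str, user_id: str, input_ref: str, output: str) -> dict:
    """Append one structured decision record to a JSON Lines log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "input_ref": input_ref,  # reference to the stored input, not the raw data
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage for an HR screening decision
log_path = os.path.join(tempfile.gettempdir(), "ai_decisions.jsonl")
entry = log_decision(log_path, "hr-analyst-7", "application-2024-0113", "shortlist")
```

Referencing inputs by identifier rather than embedding raw data keeps the log auditable while limiting the personal data it duplicates under GDPR.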
Important
- Failure to retain logs as required is itself a violation — independent of whether the underlying AI system was compliant
- Logs must be retained even after you stop using the AI system — plan for log migration when changing providers
6. Incident Notification and Response
If your deployed AI system causes or contributes to a serious incident — an event resulting in harm to health, safety, or fundamental rights — you have specific notification obligations. Under Article 26(5), deployers must immediately notify the provider and, where applicable, market surveillance authorities.
For deployers in the public sector or in highly regulated sectors, additional sector-specific incident reporting requirements may also apply, layering on top of EU AI Act obligations.
Tips
- Establish an incident response procedure before deployment — define what constitutes a 'serious incident' in your operational context
- Designate a named incident response lead with authority to notify the provider and suspend the system
- Document all incidents, including near-misses that did not result in harm — these records demonstrate active oversight
- Test your incident response procedure with a tabletop exercise before go-live
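Your operational definition of a serious incident can be encoded so that triage is consistent across teams. The harm categories below are an illustrative reading of the serious-incident description above, not a legal definition:

```python
from dataclasses import dataclass

# Illustrative harm categories drawn from the serious-incident description above
SERIOUS_HARM = {"health", "safety", "fundamental_rights"}

@dataclass
class Incident:
    description: str
    harm_category: str  # e.g. "health", "safety", "fundamental_rights", "none"
    near_miss: bool     # harm was avoided but the event is still recorded

    def notify_provider(self) -> bool:
        """Serious incidents trigger immediate provider notification;
        near-misses are logged internally but not notified."""
        return self.harm_category in SERIOUS_HARM and not self.near_miss

incident = Incident(
    description="Screening tool systematically rejected applicants from one region",
    harm_category="fundamental_rights",
    near_miss=False,
)
```

Keeping near-misses in the same record structure costs nothing and, as the tips note, demonstrates active oversight if a supervisory authority later reviews your files.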
Important
- Failure to notify in the event of a serious incident is a distinct violation — separate from and in addition to any liability for the incident itself
- The incident reporting deadlines under Article 73 — no later than 15 days in the general case, and as little as two days for the most serious events — apply to providers; deployers must notify their provider promptly enough to allow the provider to meet them