Provider Obligations Under the EU AI Act

If you build AI systems or place them on the EU market under your own name, you are a provider. This guide covers your core obligations under the EU AI Act: risk management, technical documentation, conformity assessment, and post-market monitoring.

1. Are You a Provider? The Legal Definition

Under Article 3(3) of the EU AI Act, a 'provider' is any natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model and places it on the market or puts it into service under its own name or trademark — whether for payment or free of charge. This definition is broader than it first appears. You are a provider if you develop an AI system and use it in your own operations under your own brand. You are also a provider if you substantially modify a third-party AI system before deploying it — making you responsible for the modified system as if you built it from scratch. Being a provider does not require selling to external customers.

Tips

  • If you have substantially modified a third-party AI system — changed its purpose, retrained it, or integrated it in a way that creates a new intended use — legal advice on whether you have assumed provider obligations is essential
  • Open-source AI providers have specific carve-outs but can still be providers if they place systems on the market under their own branding
  • Consider whether your AI system is subject to existing EU product safety legislation (MDR, IVDR, Machinery Regulation) — the AI Act creates complementary obligations, not replacements

Important

  • A deployer who modifies a high-risk AI system beyond its intended purpose automatically becomes a provider with full provider obligations under Article 25
  • Simply using the label 'open source' does not automatically exempt you from EU AI Act obligations
2. Risk Management System (Article 9)

Article 9 is the central technical obligation for providers of high-risk AI systems. It requires establishing, implementing, documenting, and maintaining a risk management system — not a static document, but an iterative process running throughout the entire AI system lifecycle. The risk management system must identify and analyse the known and foreseeable risks associated with the AI system, estimate and evaluate the risks that may emerge during intended use and reasonably foreseeable misuse, evaluate potential impacts, and adopt appropriate risk mitigation measures. This must be documented and the documentation must be kept up to date.
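
As a concrete illustration, the sketch below models a single risk register entry whose scores and review dates are updated as the system evolves. The dataclass fields and the likelihood-times-severity scale are illustrative assumptions; the AI Act does not prescribe any particular scoring model.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class Mitigation:
        description: str
        implemented_on: date

    @dataclass
    class Risk:
        identifier: str
        description: str
        source: str                  # "intended use" or "reasonably foreseeable misuse"
        likelihood: int              # 1 (rare) .. 5 (frequent), illustrative scale
        severity: int                # 1 (negligible) .. 5 (critical)
        mitigations: List[Mitigation] = field(default_factory=list)
        last_reviewed: date = field(default_factory=date.today)

        def score(self) -> int:
            return self.likelihood * self.severity

        def reassess(self, likelihood: int, severity: int) -> None:
            # Re-scoring after each milestone keeps the record current,
            # which is the iterative behaviour Article 9 expects.
            self.likelihood, self.severity = likelihood, severity
            self.last_reviewed = date.today()

    # Example: a misuse risk is mitigated, then re-evaluated.
    risk = Risk("R-014", "Operators accept outputs without review",
                "reasonably foreseeable misuse", likelihood=4, severity=3)
    risk.mitigations.append(Mitigation("Mandatory human-review step added", date.today()))
    risk.reassess(likelihood=2, severity=3)
    print(risk.identifier, risk.score(), risk.last_reviewed)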

Tips

  • Treat risk management as a continuous process integrated into your development pipeline, not a document produced at launch
  • Use the EU AI Office's guidance documents and the NIST AI Risk Management Framework as complementary references
  • Document risk assessments at every major development milestone — not just pre-launch
  • Include foreseeable misuse scenarios in your risk assessment, not just intended use

Important

  • Risk management documentation that is not updated after significant system changes will be inadequate — and will evidence your non-compliance in an audit
  • Article 9 requires testing specifically against the population of natural persons that the system will interact with or affect — this is often overlooked
3. Technical Documentation (Annex IV)

Annex IV of the EU AI Act specifies the required content of the technical documentation that providers must maintain for high-risk AI systems. This documentation must be drawn up before the AI system is placed on the market or put into service, kept up to date for as long as the system remains on the market, and retained at the disposal of national competent authorities for at least 10 years after the system is placed on the market or put into service. The Annex IV technical file is the primary document that supervisory authorities will review in an audit. It must be comprehensive, accurate, and current. Incomplete or inaccurate technical documentation is itself a compliance violation, independent of whether the underlying system is compliant.
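
One way to keep the technical file current is to index it as structured data and flag stale sections automatically, for example in CI. In this sketch the section names loosely paraphrase Annex IV headings; the keys, file paths, and staleness check are illustrative assumptions, not an official template.

    from datetime import date

    # Index of the technical file. Every key, path, and date here is an
    # illustrative assumption; section names loosely paraphrase Annex IV.
    TECHNICAL_FILE = {
        "general_description":     {"updated": date(2025, 3, 1),  "path": "docs/01_general.md"},
        "development_process":     {"updated": date(2025, 3, 1),  "path": "docs/02_development.md"},
        "monitoring_and_control":  {"updated": date(2024, 11, 2), "path": "docs/03_monitoring.md"},
        "risk_management_summary": {"updated": date(2025, 3, 1),  "path": "docs/04_risk.md"},
        "post_market_plan":        {"updated": date(2025, 3, 1),  "path": "docs/05_post_market.md"},
    }

    def stale_sections(file_index: dict, last_system_change: date) -> list:
        """Sections not touched since the last significant system change."""
        return [name for name, meta in file_index.items()
                if meta["updated"] < last_system_change]

    # Example: after a retraining on 2024-12-15, flag sections needing review.
    print(stale_sections(TECHNICAL_FILE, date(2024, 12, 15)))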

Tips

  • Build your technical file as a living document from the first day of development — not as a post-hoc exercise
  • Use a structured template aligned with Annex IV's specific requirements (the EU AI Office publishes guidance templates)
  • Version-control your technical documentation to demonstrate how your system has evolved and how risks have been managed at each stage
  • Include all training data governance information, model architecture descriptions, and validation methodology

Important

  • A technical file assembled in the final weeks before launch — without evidence of the development process — will not satisfy Article 11's requirements
  • Post-market changes to the AI system must be reflected in updated technical documentation — failure to do this is a standalone violation
4. Conformity Assessment

Before a high-risk AI system can be placed on the market or put into service, it must undergo a conformity assessment — a formal evaluation that the system meets all applicable EU AI Act requirements. For most high-risk AI systems, providers can conduct this assessment internally (self-declaration). For AI systems in specific domains — including biometric identification and AI in safety components of regulated products — third-party notified body assessment is required. The outcome of the conformity assessment is the EU Declaration of Conformity and the right to affix the CE marking to the AI system. The CE marking signals to deployers and market surveillance authorities that the system has been assessed and found compliant.
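
The routing decision described above can be sketched as a simple function. The inputs and rules below are a deliberately coarse simplification for illustration only; the real analysis under Article 43 also depends on the specific Annex III category and on whether applicable harmonised standards were followed.

    def assessment_route(high_risk: bool,
                         biometric_identification: bool,
                         safety_component_of_regulated_product: bool) -> str:
        # Coarse illustration of the routing described above; not legal logic
        # to rely on in a real classification exercise.
        if not high_risk:
            return "outside the high-risk conformity assessment regime"
        if biometric_identification or safety_component_of_regulated_product:
            return "third-party assessment by a notified body"
        return "internal conformity assessment (self-declaration)"

    print(assessment_route(high_risk=True,
                           biometric_identification=True,
                           safety_component_of_regulated_product=False))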

Tips

  • Identify early whether your system requires notified body assessment — the lead time for engaging notified bodies can be significant
  • Self-assessment still requires rigorous documentation — it is not a lighter process, just an internal one
  • The EU Declaration of Conformity must be signed by a natural person with authority to represent your organisation
  • Keep your conformity assessment documentation for at least 10 years after last market placement

Important

  • CE marking an AI system that has not undergone proper conformity assessment is a serious violation and can result in market withdrawal, fines, and criminal liability in some member states
  • If your AI system undergoes substantial modification after market placement, you must conduct a new conformity assessment before the modified system is used
5. Post-Market Monitoring

Article 72 requires providers of high-risk AI systems to proactively collect and review experience gained from deployers' use of their systems. The post-market monitoring plan must be part of your technical documentation and must be actively implemented — not just documented. This is where many providers underestimate their obligations. Post-market monitoring is not simply reading your support tickets. It requires a systematic process for collecting performance data, analysing it for indications of risk, and acting on findings — including updating your risk management system and technical documentation, or initiating corrective actions.
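
A minimal sketch of the review step might look like the following, assuming deployers report structured incident records. The event fields, severity scale, and escalation threshold are all illustrative assumptions.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class FieldEvent:
        deployer: str
        description: str
        severity: int        # 1 (minor) .. 5 (serious risk), illustrative scale

    def review(events: List[FieldEvent],
               serious_threshold: int = 4) -> Dict[str, List[FieldEvent]]:
        """Split incoming reports into routine findings and escalations."""
        return {
            "update_risk_register": [e for e in events if e.severity < serious_threshold],
            "notify_authorities":   [e for e in events if e.severity >= serious_threshold],
        }

    events = [
        FieldEvent("acme-hr", "Score drift on older applicant cohort", 3),
        FieldEvent("acme-hr", "Systematic rejection of a protected group", 5),
    ]
    for e in review(events)["notify_authorities"]:
        # Serious risks must reach market surveillance authorities and deployers promptly.
        print("ESCALATE:", e.deployer, e.description)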

Tips

  • Design your AI system with telemetry and monitoring capabilities built in from the start — retrofitting monitoring into a deployed system is significantly more difficult
  • Establish a formal feedback channel for deployers to report anomalous behaviour
  • Schedule regular post-market monitoring reviews — at least quarterly for high-risk systems
  • Connect your post-market monitoring findings to your risk management system so that emerging risks are captured and addressed systematically

Important

  • Post-market monitoring that consists only of reactive customer support responses will not satisfy Article 72 requirements
  • Providers must notify relevant market surveillance authorities and deployers promptly when serious risks are discovered through post-market monitoring

Ready to Start Your Compliance Journey?

Use AIComply to manage your AI inventory, classify risks, and generate required documentation.