Fundamental Rights Impact Assessment (FRIA) Guide
How to conduct and document a Fundamental Rights Impact Assessment for high-risk AI systems under Article 27 of the EU AI Act.
1. What Is a FRIA and Who Must Conduct One?
A Fundamental Rights Impact Assessment (FRIA) is a structured evaluation of the potential impact of a high-risk AI system on the fundamental rights guaranteed by the EU Charter of Fundamental Rights. Article 27 of the EU AI Act requires deployers, not providers, to conduct and document a FRIA before putting certain high-risk AI systems into use.
The FRIA obligation applies to: deployers that are bodies governed by public law; private entities providing public services (such as education, healthcare, social services, and housing); and deployers of high-risk AI systems used for creditworthiness assessment and credit scoring, or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and (c)).
Tips
- Even if your organisation is not strictly required to conduct a FRIA, consider one as a best practice: it demonstrates good-faith compliance and helps identify risks before incidents occur
- A well-conducted FRIA can also satisfy elements of your GDPR Data Protection Impact Assessment (DPIA) where both are required
- Involve your organisation's legal team, HR function, and data protection officer in the FRIA process; it is a cross-functional exercise
Important
- Article 27 creates a deployer obligation — not a provider obligation. Even if your provider has conducted impact assessments, your FRIA as a deployer is a separate, mandatory exercise
- FRIAs must be conducted before deployment, not during or after. Post-deployment FRIAs are insufficient to meet the Article 27 obligation
2. Rights at Stake: The EU Charter Framework
The FRIA must assess the impact of the AI system on the fundamental rights protected by the EU Charter of Fundamental Rights. The rights most commonly implicated by high-risk AI systems include:
Human Dignity (Article 1), Non-Discrimination (Article 21), Respect for Private and Family Life (Article 7), the Protection of Personal Data (Article 8), the Right to an Effective Remedy (Article 47), the Rights of the Child (Article 24), and the Integration of Persons with Disabilities (Article 26).
For AI systems operating in specific domains, additional rights become relevant — for example, the Right to Work (Article 15) for employment AI, or the Right to Education (Article 14) for educational AI systems.
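To make the rights-mapping exercise concrete, the sketch below shows one way a deployer might seed a rights-mapping checklist in code. The domain names, article selections, and helper function are illustrative assumptions, not an authoritative legal mapping; every FRIA should still review the full Charter.

```python
# Charter articles commonly implicated by high-risk AI systems.
CHARTER_RIGHTS = {
    1: "Human dignity",
    7: "Respect for private and family life",
    8: "Protection of personal data",
    14: "Right to education",
    15: "Freedom to choose an occupation and right to engage in work",
    21: "Non-discrimination",
    24: "Rights of the child",
    26: "Integration of persons with disabilities",
    47: "Right to an effective remedy and to a fair trial",
}

# Hypothetical domain-to-rights map a deployer might maintain as a
# checklist seed; it is a starting point, not an exhaustive analysis.
DOMAIN_RIGHTS_MAP = {
    "employment": [1, 7, 8, 15, 21, 47],
    "education": [8, 14, 21, 24, 47],
    "credit_scoring": [7, 8, 21, 47],
    "insurance_pricing": [7, 8, 21, 26, 47],
}

def candidate_rights(domain: str) -> list[str]:
    """Return the Charter rights to screen first for a given domain."""
    return [CHARTER_RIGHTS[a] for a in DOMAIN_RIGHTS_MAP.get(domain, [])]
```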
Tips
- Map your AI system's intended use case against the full list of Charter rights — do not limit your analysis to the most obvious ones
- For each right at stake, assess both the likelihood and the severity of potential impact — a high-severity, low-likelihood impact may still require mitigation measures
- Consider the impact on both users of the AI system and third parties who may be affected by its outputs
Important
- The Right to Non-Discrimination (Article 21) is almost always relevant for Annex III AI systems — failing to assess discriminatory impact in your FRIA is a significant gap
- Remember that fundamental rights impacts can be indirect — an AI system that generates data which is then used in a discriminatory way creates an indirect fundamental rights impact
3. The FRIA Assessment Process
A robust FRIA follows a structured five-phase process that produces a documented assessment capable of withstanding regulatory scrutiny.
Phase 1 — System Description: Document the AI system's purpose, capabilities, and the population of persons it will affect, both directly (users) and indirectly (third parties affected by decisions).
Phase 2 — Rights Mapping: Identify all Charter rights potentially affected by the system, and for each right, identify the specific mechanism by which the AI system could impact it.
Phase 3 — Risk Assessment: For each identified rights impact, assess the likelihood (how probable is the impact occurring?) and severity (if it occurs, how serious is the harm to the individual's rights?).
Phase 4 — Mitigation Measures: For each identified risk, document the specific technical and organisational measures you will implement to mitigate the risk to an acceptable level.
Phase 5 — Residual Risk Evaluation: After mitigation measures are applied, assess the residual risk. If residual risk remains significant, you must either implement additional measures or consider whether deployment is appropriate.
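As an illustration of Phases 3 to 5, the sketch below records likelihood × severity ratings for one rights impact and re-scores it after mitigation. The 1-5 scales, field names, and escalation threshold are our own assumptions; the AI Act does not prescribe a scoring methodology, so calibrate these to your organisation's risk framework.

```python
from dataclasses import dataclass, field

@dataclass
class RightsImpact:
    right: str                    # Charter right at stake (Phase 2)
    mechanism: str                # how the system could impact the right (Phase 2)
    likelihood: int               # 1 (rare) .. 5 (almost certain) - Phase 3
    severity: int                 # 1 (negligible) .. 5 (critical) - Phase 3
    mitigations: list[str] = field(default_factory=list)  # Phase 4
    residual_likelihood: int | None = None                # Phase 5, post-mitigation
    residual_severity: int | None = None

    def inherent_score(self) -> int:
        return self.likelihood * self.severity

    def residual_score(self) -> int:
        # Fall back to inherent ratings until Phase 5 re-scoring is done.
        l = self.residual_likelihood or self.likelihood
        s = self.residual_severity or self.severity
        return l * s

# Hypothetical threshold: residual scores at or above it trigger escalation
# (additional measures, or reconsidering deployment) per Phase 5.
ESCALATION_THRESHOLD = 12

impact = RightsImpact(
    right="Non-discrimination (Art. 21)",
    mechanism="Proxy variables correlated with protected characteristics",
    likelihood=4, severity=4,
    mitigations=["Bias testing per release", "Human review of adverse decisions"],
    residual_likelihood=2, residual_severity=4,
)
assert impact.inherent_score() == 16
needs_escalation = impact.residual_score() >= ESCALATION_THRESHOLD  # False (8 < 12)
```

The product score is one common convention; some organisations instead escalate any high-severity impact regardless of likelihood, which is consistent with the earlier tip on high-severity, low-likelihood impacts.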
Tips
- Use a structured risk matrix to document likelihood × severity assessments — this creates an auditable record of your risk-based decision making
- Involve affected communities or their representatives where possible — this strengthens the quality of your assessment and demonstrates meaningful engagement
- Review completed FRIAs against your organisation's existing equality impact assessments to identify gaps or inconsistencies
Important
- A FRIA that identifies significant residual risks but proceeds with deployment without documented justification creates serious compliance and reputational exposure
- The FRIA must be reviewed and updated when material changes are made to the AI system or its deployment context
4. FRIA Documentation Requirements
Once the FRIA has been performed, Article 27 requires the deployer to notify the market surveillance authority of its results, using the template questionnaire that the AI Office is to develop under Article 27(5). Deployers that are public authorities must additionally register their use of the high-risk AI system in the EU database established under Article 71. Together, these requirements create a public accountability mechanism for high-risk AI deployment.
The FRIA document must cover the elements set out in Article 27(1) (a minimal record structure is sketched below):
- a description of the deployer's processes in which the AI system will be used;
- the period of time and frequency of intended use;
- the categories of natural persons and groups likely to be affected;
- the specific risks of harm to those persons or groups;
- a description of the human oversight measures to be implemented;
- the measures to be taken if those risks materialise, including internal governance arrangements and complaint mechanisms.
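A minimal sketch of how these elements might be captured as a structured record follows, assuming our own field names and types; the AI Office template questionnaire under Article 27(5) remains the authoritative format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class FriaRecord:
    system_name: str
    deployer_processes: list[str]             # processes in which the system is used
    period_of_use: str                        # e.g. "indefinite, from first use"
    frequency_of_use: str                     # e.g. "continuous / per application"
    affected_groups: list[str]                # categories of persons likely affected
    risks_of_harm: list[str]                  # specific risks to those persons
    human_oversight_measures: list[str]
    risk_materialisation_measures: list[str]  # incl. governance and complaint mechanisms
    completed_on: date = field(default_factory=date.today)
    last_reviewed_on: date | None = None      # update on material changes

    def missing_elements(self) -> list[str]:
        """Names of required fields that are still empty."""
        required = {
            "deployer_processes": self.deployer_processes,
            "affected_groups": self.affected_groups,
            "risks_of_harm": self.risks_of_harm,
            "human_oversight_measures": self.human_oversight_measures,
            "risk_materialisation_measures": self.risk_materialisation_measures,
        }
        return [name for name, value in required.items() if not value]
```

A completeness check like `missing_elements()` gives you an auditable gate before the FRIA is notified to the market surveillance authority.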
Tips
- Use the FRIA template questionnaire developed by the AI Office under Article 27(5) as a starting point; it ensures your documentation covers all required elements
- Keep your FRIA documentation alongside your other AI Act compliance records, and store it in a format that can be shared with supervisory authorities promptly
- Establish a FRIA review schedule — at minimum annual reviews, or whenever the AI system or its use context materially changes
Important
- Deployers that are public authorities must register their use of the system in the EU database established under Article 71; failure to register is a standalone compliance violation
- FRIA documents may be subject to freedom of information requests for public authorities — ensure your documentation is accurate and reflects genuine assessment
Ready to Start Your Compliance Journey?
Use AIComply to manage your AI inventory, classify risks, and generate required documentation.