Annex IV Technical Documentation Guide
A practical guide to creating and maintaining the Annex IV technical file required for all high-risk AI systems under the EU AI Act.
1. What Is the Annex IV Technical File?
Annex IV of the EU AI Act specifies the minimum content requirements for the technical documentation that providers must create and maintain for every high-risk AI system. This technical file serves as the primary evidence of compliance — it is what supervisory authorities will request and review during market surveillance activities, and what notified bodies will assess during conformity assessments.
The technical file must be drawn up before the AI system is placed on the market or put into service. It must be comprehensive at launch and kept up to date throughout the entire lifecycle of the system — including any modifications made after deployment. The obligation to maintain the file does not end at market placement: it continues for at least 10 years after the system is last placed on the market.
Tips
- Start your technical file from the first day of development — treating it as a product artefact, not a compliance document to be written at the end
- Use version control for your technical file — each significant system update should generate a dated, versioned update to the technical documentation
- Appoint a named technical documentation owner who is responsible for keeping the file current
Important
- A technical file assembled in the final weeks before market placement — without contemporaneous development records — is unlikely to satisfy Article 11 requirements and may be treated as non-existent during an audit
- The technical file must be available to supervisory authorities upon request — ensure it is stored in a format that can be shared promptly
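The version-control tip above can be sketched as a simple data structure. This is an illustrative example only: the field names (`version`, `change_summary`, `owner`) and the class names are hypothetical, not mandated by Annex IV, but the pattern of dated, versioned entries with a named owner mirrors the recommendations above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TechFileVersion:
    version: str          # version label for this revision of the documentation
    released: date        # date this revision of the file was drawn up
    change_summary: str   # what changed in the system or its documentation
    owner: str            # named documentation owner responsible for the file

@dataclass
class TechnicalFile:
    system_name: str
    versions: list[TechFileVersion] = field(default_factory=list)

    def record_update(self, version: str, change_summary: str, owner: str) -> None:
        """Append a dated, versioned entry for each significant system update."""
        self.versions.append(
            TechFileVersion(version, date.today(), change_summary, owner)
        )

    def current(self) -> TechFileVersion:
        """The latest revision: what a supervisory authority would be sent on request."""
        return self.versions[-1]

# Hypothetical usage: file drawn up before market placement, then kept current.
tf = TechnicalFile("credit-scoring-model")
tf.record_update("1.0.0", "Initial file drawn up before market placement", "A. Rivera")
tf.record_update("1.1.0", "Retrained on Q3 data; bias assessment updated", "A. Rivera")
```

Storing the file this way gives each revision a date and an owner, so the 10-year retention obligation can be met by retaining the full version history rather than only the latest copy.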
2. General Description of the AI System
The technical file must begin with a general description of the AI system that gives a supervisory authority sufficient context to understand the system's purpose, capabilities, and limitations before reviewing the more technical sections.
This section should include: the intended purpose of the system; the natural persons or categories of persons for whom the system is intended to be used; the geographic markets or territories where the system is to be placed on the market; and a description of the hardware on which the AI system is intended to run.
Tips
- Write the general description for a technically literate reader who is not familiar with your specific application domain — avoid jargon without explanation
- Be precise about intended purpose — this section defines the scope of your conformity assessment and is a key reference point for deployers assessing whether your system fits their use case
- If your system has multiple modes of operation or is designed for multiple use cases, document each separately
Important
- Overly broad intended purpose descriptions are a red flag in audits — they suggest the provider has not thought carefully about risk classification and scope
- The intended purpose documented in the technical file must match the intended purpose used in your risk management system and conformity assessment
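As a sketch, the elements listed above can be kept as a machine-readable record with a completeness check, so the general description stays in sync across the technical file, the risk management system, and the conformity assessment. All keys and values here are illustrative assumptions, not text prescribed by the Act.

```python
# Hypothetical general-description record; keys mirror the elements
# listed in the section above.
general_description = {
    "intended_purpose": (
        "Rank job applications for human review in recruitment workflows"
    ),
    "intended_users": ["HR professionals in companies with >250 employees"],
    "markets": ["EU (all member states)"],
    "hardware": "x86-64 server, 16 GB RAM, no GPU required",
    # One entry per mode of operation, documented separately (see tips above)
    "modes_of_operation": ["batch ranking", "single-application scoring"],
}

def check_completeness(record: dict) -> list[str]:
    """Return any required general-description elements that are missing or empty."""
    required = ["intended_purpose", "intended_users", "markets", "hardware"]
    return [k for k in required if not record.get(k)]

missing = check_completeness(general_description)
```

A narrow, specific `intended_purpose` string like the one above is easier to defend in an audit than a broad claim such as "general HR decision support".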
3. Detailed Description of Elements and Development Process
This is the most technically detailed section of the Annex IV file. It must include a description of the methods and steps used to develop the AI system, including pre-training, training, testing, and validation procedures, as well as the design specifications, training methodologies, techniques, and tools used.
For AI systems that learn continuously after deployment, the technical file must describe the mechanisms in place to ensure that training during operation does not result in risks to applicable requirements. This is a significant obligation for providers of adaptive or online learning systems.
Tips
- Include all data preprocessing steps, feature engineering decisions, and model architecture choices with justifications
- Document the split between training, validation, and test datasets and explain the rationale for the split approach
- Record the specific versions of all libraries, frameworks, and tools used — this is essential for reproducibility and audit
- For continuously learning systems, document your monitoring approach for detecting and correcting training drift
Important
- Vague descriptions of development processes — 'we used standard machine learning techniques' — will not satisfy Article 11 requirements
- All hyperparameter choices must be documented with their rationale, particularly for systems affecting Annex III domains
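The version-recording and hyperparameter-rationale points above can be automated at training time. The sketch below uses Python's standard `importlib.metadata` to snapshot installed package versions; the package list, hyperparameter names, and rationale strings are illustrative assumptions for your own pipeline.

```python
import sys
import platform
from importlib import metadata

def environment_snapshot(packages: list[str]) -> dict:
    """Record the Python, platform, and package versions used for a training run."""
    snap = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "packages": {},
    }
    for pkg in packages:
        try:
            snap["packages"][pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            snap["packages"][pkg] = "NOT INSTALLED"
    return snap

# Hyperparameters logged with their rationale, not just their values,
# as the Important note above requires. Values here are placeholders.
hyperparameters = {
    "learning_rate": {
        "value": 3e-4,
        "rationale": "Selected by grid search minimising validation loss",
    },
    "max_depth": {
        "value": 6,
        "rationale": "Deeper trees overfit minority subgroups in validation",
    },
}

# In practice, pass every library your pipeline imports; "pip" is a stand-in.
snapshot = environment_snapshot(["pip"])
```

Writing this snapshot into the technical file at every training run produces exactly the contemporaneous development record that a file assembled retrospectively cannot reconstruct.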
4. Training Data Governance
The technical file must contain detailed information about the training, validation, and testing datasets used, including: their provenance, scope, and main characteristics; a description of the data labelling methodology; information about the relevant characteristics, limitations, and potential biases; and measures taken to detect, prevent, and mitigate these biases.
Data governance documentation is frequently the weakest part of technical files submitted by companies undergoing first-time compliance reviews. Many organisations have incomplete records of their training data — particularly for systems built before the EU AI Act came into force.
Tips
- Maintain a data lineage map for all training datasets — document where each dataset came from, when it was collected, and what processing it underwent
- Document your data labelling methodology in detail — who labelled the data, with what instructions, and how inter-annotator agreement was assessed
- Conduct and document a bias assessment for each training dataset against the demographic characteristics of the intended user population
- If your training data includes third-party datasets, document your due diligence on their provenance and licensing
Important
- Using personal data in training datasets without a documented lawful basis under GDPR creates dual compliance exposure: EU AI Act and GDPR simultaneously
- Data collected for one purpose and repurposed for AI training without appropriate documentation creates both legal risk and a gap in your technical file
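The lineage-map and bias-assessment tips above can be made concrete with a small sketch: a per-dataset lineage record, and a check comparing subgroup shares in the training data against the intended user population. All field names, the dataset filename, and the toy numbers are illustrative assumptions.

```python
from collections import Counter

# Hypothetical lineage record for one training dataset.
lineage = {
    "dataset": "applications_2019_2023.parquet",
    "source": "Internal ATS export",
    "collected": "2023-11-02",
    "processing": ["deduplicated", "PII pseudonymised", "labelled by 3 annotators"],
    "licence": "internal",
}

def representation_gap(samples: list[str],
                       population: dict[str, float]) -> dict[str, float]:
    """For each subgroup, the share observed in the training data minus the
    expected share in the intended user population (positive = over-represented)."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: round(counts[group] / total - expected, 3)
        for group, expected in population.items()
    }

# Toy example: one subgroup is under-represented by 20 percentage points,
# a finding that belongs in the documented bias assessment.
gaps = representation_gap(
    ["m"] * 70 + ["f"] * 30,
    {"m": 0.5, "f": 0.5},
)
```

A gap like this does not itself decide whether the data is unsuitable, but documenting the measurement, and the mitigation chosen, is what the Annex IV bias requirements ask for.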
5. Testing, Validation, and Performance Metrics
The technical file must include detailed testing and validation procedures, including the metrics used to measure accuracy, robustness, and cybersecurity, and the testing results. For high-risk AI systems, testing must be conducted using representative data that reflects the conditions under which the system will be used.
This section must demonstrate that the AI system performs at the level claimed in its intended purpose documentation, across the full range of conditions it may encounter in deployment.
Tips
- Define your performance metrics before training, not after — documenting post-hoc selection of favourable metrics creates an adverse inference in audits
- Test across demographic subgroups relevant to your Annex III domain — overall accuracy metrics can mask significant disparate impact
- Include adversarial testing results — attempts to cause the system to fail or produce harmful outputs — and document your response to findings
- Benchmark your performance against the claim you make in your intended purpose description
Important
- Test datasets that overlap with training data produce inflated performance metrics — document your data isolation procedures
- Testing conducted only in ideal conditions will not satisfy requirements for systems deployed in real-world environments with distribution shift
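The data-isolation point above can be verified mechanically. A minimal sketch, assuming records are plain dictionaries: hash each record canonically, then count how many test records appear verbatim in the training set before any metrics are reported. The record fields and values are illustrative.

```python
import hashlib

def record_hash(record: dict) -> str:
    """Canonical hash of a record: sorted keys so field order cannot hide duplicates."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def train_test_overlap(train: list[dict], test: list[dict]) -> int:
    """Number of test records that also appear verbatim in the training data."""
    train_hashes = {record_hash(r) for r in train}
    return sum(1 for r in test if record_hash(r) in train_hashes)

# Toy example with one leaked record shared between the two sets.
train = [{"age": 34, "income": 52000}, {"age": 29, "income": 41000}]
test = [{"age": 29, "income": 41000}, {"age": 45, "income": 67000}]
leaks = train_test_overlap(train, test)
```

Exact-match hashing only catches verbatim duplicates; near-duplicates (the same person with trivially perturbed fields) need fuzzier checks, but even this simple procedure, run and logged before each evaluation, is documentary evidence of the isolation the Important note above calls for.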
6. Post-Market Monitoring and Incident Reporting
The technical file must include the provider's post-market monitoring plan and, for systems placed on the market, a summary of post-market monitoring activities conducted and findings addressed. Over time, this section grows into an audit log of the system's performance in production and the provider's response to emerging risks.
Incident reports — formal notifications to supervisory authorities of serious incidents — must also be referenced in the technical file, along with the corrective actions taken in response.
Tips
- Design your post-market monitoring template before market placement — include the metrics, data collection methods, review frequency, and escalation criteria
- Store post-market monitoring reports in your technical file as dated appendices so that reviewers can trace the system's history
- Connect your monitoring findings to your risk management system — emerging risks identified in monitoring must feed back into your risk assessment
Important
- A post-market monitoring plan that exists in the technical file but has not been implemented will be identified as non-compliance during audit — supervisory authorities request both the plan and evidence of its execution
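The monitoring-template tip above can be sketched as a plan with explicit escalation criteria plus a check that evaluates observed production metrics against it. The metric names and thresholds are placeholders, not regulatory values; they would come from your own risk assessment.

```python
# Hypothetical post-market monitoring plan: metrics, data collection method,
# review frequency, and escalation criteria defined before market placement.
monitoring_plan = {
    "metrics": ["weekly_accuracy", "subgroup_accuracy_gap", "override_rate"],
    "data_collection": "sampled production decisions with deployer feedback",
    "review_frequency_days": 30,
    "escalation": {
        "weekly_accuracy": {"min": 0.90},
        "subgroup_accuracy_gap": {"max": 0.05},
    },
}

def escalations(observed: dict, plan: dict) -> list[str]:
    """Return the metrics that breach the plan's escalation criteria."""
    breached = []
    for metric, limits in plan["escalation"].items():
        value = observed.get(metric)
        if value is None:
            continue  # metric not yet measured this period
        if "min" in limits and value < limits["min"]:
            breached.append(metric)
        if "max" in limits and value > limits["max"]:
            breached.append(metric)
    return breached

# A breached threshold is a monitoring finding that must feed back into
# the risk management system, as noted in the tips above.
flags = escalations(
    {"weekly_accuracy": 0.87, "subgroup_accuracy_gap": 0.03},
    monitoring_plan,
)
```

Running this check on each review cycle and appending the dated result to the technical file produces both the plan and the evidence of its execution that supervisory authorities request.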