
GPAI Model Obligations Under Chapter V

If you develop or deploy General Purpose AI models — including foundation models and large language models — this guide covers your specific obligations under Articles 51–56 of the EU AI Act.

10 min read · 4 sections
1. What Is a General-Purpose AI Model?

The EU AI Act sets out a specific regulatory framework for general-purpose AI (GPAI) models in Chapter V (Articles 51–56). A GPAI model is an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and can be integrated into a variety of downstream systems or applications. This definition captures large language models (LLMs), multimodal foundation models, large image-generation models, and similar models trained on broad datasets for general-purpose use. The key distinguishing characteristic is generality: the model can be applied across diverse domains without being retrained for each one.

Tips

  • If you are developing an LLM, a multimodal model, or any foundation model intended for downstream integration, you are almost certainly a GPAI model provider — assess your obligations under Chapter V
  • GPAI model obligations apply to the model itself, not just to downstream applications built on it. Your obligations do not diminish because someone else built the application layer
  • Open-source GPAI models have specific carve-outs but are not automatically exempt — assess carefully against the conditions of the Article 53(2) exemption

Important

  • The GPAI model framework creates a two-tier regulatory structure: all GPAI models face baseline obligations, while GPAI models with systemic risk face significantly enhanced requirements
  • Relying on the 'open source' exemption without satisfying its specific conditions is a compliance risk — the exemption is narrower than often assumed
2. Baseline Obligations for All GPAI Models

Article 53 establishes baseline obligations that apply to all GPAI model providers, regardless of whether the model poses systemic risk. These cover technical documentation, transparency to downstream providers, copyright compliance, and a public summary of training content. Providers must draw up and keep up to date technical documentation sufficient for the EU AI Office and national competent authorities to assess compliance. They must make available to downstream providers information about the model's capabilities and limitations relevant to integration into their AI systems. They must put in place a policy to comply with EU copyright law, including identifying and respecting rights reservations under the text and data mining exception. And they must publish a sufficiently detailed summary of the content used to train the model.
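The record-keeping these obligations imply can be sketched as a simple per-source data structure. This is an illustrative sketch only: the field names are assumptions, not an official template, and the EU AI Office's published template governs what the summary must actually contain.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataSource:
    """One entry in a training-content summary (illustrative fields, not the official template)."""
    name: str                    # e.g. "Licensed news archive"
    description: str             # what the source contains
    acquisition: str             # e.g. "web crawl", "licensed corpus"
    copyright_basis: str         # documented legal basis, incl. handling of TDM opt-outs
    curation_notes: list[str] = field(default_factory=list)  # filtering, dedup, etc.

def summary_line(src: TrainingDataSource) -> str:
    """Render one source as a line for the published summary."""
    return (f"{src.name}: {src.description} "
            f"(acquired via {src.acquisition}; basis: {src.copyright_basis})")

src = TrainingDataSource(
    name="Licensed news archive",
    description="English-language news articles, 2000-2020",
    acquisition="licensed corpus",
    copyright_basis="commercial licence",
)
print(summary_line(src))
```

Maintaining one such record per major data source also gives you the per-source legal-basis documentation the copyright-compliance tips below call for.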

Tips

  • Create your GPAI model technical documentation using the EU AI Office's published templates — these templates are specifically designed to capture the information required under Article 53
  • Your training data summary must be 'sufficiently detailed' — general descriptions like 'publicly available internet data' are unlikely to be sufficient. Document data sources, curation methods, and known characteristics
  • Establish a copyright compliance review process for training data acquisition — document your legal basis for each major data source

Important

  • Copyright compliance is a hard obligation — not a best-efforts standard. Training on data without appropriate rights creates both EU AI Act and copyright law exposure
  • The obligation to make information available to downstream providers means you cannot rely on confidentiality to avoid transparency obligations entirely — you must find a way to share relevant capability and limitation information
3. Systemic Risk: Enhanced Obligations

GPAI models with systemic risk face significantly enhanced obligations under Article 55. Under Article 51, a GPAI model is presumed to have systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs); the European Commission may also designate a model as having systemic risk based on the criteria in Annex XIII, including its market reach, the extent of its integration into downstream systems, and its potential to cause serious adverse impacts. Providers of GPAI models with systemic risk must perform state-of-the-art model evaluations, including conducting and documenting adversarial testing, before and after market placement. They must assess and mitigate possible systemic risks, including those arising from misuse. They must track, document, and report serious incidents to the EU AI Office. And they must ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
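The 10^25 FLOP presumption can be checked with a back-of-the-envelope calculation. The ~6 FLOPs per parameter per training token rule of thumb for dense transformers is an engineering heuristic, not anything the Act prescribes, and the model sizes below are purely illustrative.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51(2) presumption threshold, in FLOPs

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(total_flops: float) -> bool:
    """Is cumulative training compute above the Article 51(2) presumption?"""
    return total_flops > SYSTEMIC_RISK_THRESHOLD

# Illustrative: a 70B-parameter model trained on 15T tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs, below the 1e25 presumption
print(presumed_systemic_risk(flops))
```

Note that the presumption is based on actual cumulative training compute, not this estimate — which is why the compute-tracking tip below matters: log the real figures during training rather than reconstructing them afterwards.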

Tips

  • Conduct compute tracking during training and document your total compute figures accurately — this is the primary trigger for systemic risk designation
  • Even if your model does not reach the 10^25 FLOP threshold, assess the other systemic risk indicators in Annex XIII — the European Commission can designate models as having systemic risk based on qualitative factors
  • Engage with the EU AI Office proactively if you are developing a model approaching the systemic risk threshold — the codes of practice developed under Article 56 provide a pathway to demonstrate compliance

Important

  • Model evaluation for systemic-risk GPAI models, including adversarial testing, is not optional and must follow standardised protocols and tools reflecting the state of the art — this is a significant resource commitment that must be planned and budgeted in advance of market placement
  • Serious incident reporting for systemic-risk GPAI models must be made to the EU AI Office — not just to national competent authorities
4. Obligations When Your GPAI Model Is Used Downstream

A key feature of the GPAI model framework is the interaction between GPAI model providers and the downstream providers who integrate their models into AI systems or other AI models. Article 53(1)(b), together with Annex XII, establishes a flow-down obligation: GPAI model providers must make available the information downstream providers need to understand the model's capabilities and limitations and to comply with their own EU AI Act obligations. This creates a supply-chain compliance obligation. If a downstream provider builds a high-risk AI system using your GPAI model, they need sufficient information about your model's capabilities, limitations, and training data to satisfy their own Annex IV technical documentation requirements.
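One common way to operationalise this flow-down is a machine-readable disclosure that downstream providers can reference in their technical documentation. The structure below is a hypothetical sketch: the Act mandates no particular format, the keys are assumptions, and Annex XII lists the information actually required.

```python
import json

def build_downstream_disclosure(model_name: str, version: str,
                                capabilities: list[str],
                                limitations: list[str],
                                modalities: list[str]) -> str:
    """Assemble a minimal capability/limitation disclosure as JSON (illustrative keys)."""
    card = {
        "model": model_name,
        "version": version,
        "capabilities": capabilities,  # tasks the model performs competently
        "limitations": limitations,    # known failure modes, unsupported uses
        "modalities": modalities,      # input/output modalities
    }
    return json.dumps(card, indent=2)

disclosure = build_downstream_disclosure(
    "example-gpai-model", "1.0",
    capabilities=["text summarisation", "question answering"],
    limitations=["not evaluated for medical advice"],
    modalities=["text-in", "text-out"],
)
```

Versioning the disclosure alongside each model release makes it easy for downstream providers to cite the exact document that applies to the model version they integrated.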

Tips

  • Develop a standardised 'model card' or technical disclosure document that downstream providers can reference in their technical documentation
  • Establish a process for downstream providers to request additional technical information — particularly for high-risk AI system deployments
  • Consider your contractual terms with downstream providers: ensure they include obligations to comply with the EU AI Act and to notify you of incidents involving your model

Important

  • You cannot contractually transfer your GPAI model obligations to downstream providers — you remain responsible for your model's compliance as a GPAI model provider
  • If a downstream provider builds a prohibited AI system using your GPAI model, your obligations as a GPAI provider may not shield you from regulatory scrutiny if you had reason to know about the intended use

Ready to Start Your Compliance Journey?

Use AIComply to manage your AI inventory, classify risks, and generate required documentation.