What is the General-Purpose AI (GPAI) Code of Practice?
On 10 July, the EU Commission published the fourth and final version of the GPAI Code of Practice (the Code). The Code is designed to enable providers of general-purpose AI (GPAI) models to demonstrate compliance with their obligations under Articles 53 and 55 of the EU AI Act. The obligations will apply as of 2 August 2025 in respect of GPAI models placed on the EU market on or after that date. See below for further details on the Code’s implementation.
The Code contains three chapters: (1) Transparency, (2) Copyright, and (3) Safety and Security. The Transparency and Copyright chapters apply to all providers of GPAI models, whilst the Safety and Security chapter applies to providers of GPAI models with systemic risk (defined below). For a discussion on chapter 2 (Copyright), see our colleague Toby Headdon’s articles here and here.
The Code is accompanied by official Commission guidelines on GPAI models (the Guidelines), which were finally published on 18 July. Whilst the main purpose of the Guidelines is to provide guidance on the concepts and principles in the Act relating to GPAI models, they also include a useful section on the Code that explains the Commission’s approach to implementation of the Code following 2 August.
On 24 July, the Commission published a separate template (accompanied by an explanatory notice) (the Template) for the summary of the content of pre-training data that GPAI model providers must make publicly available under Article 53(1)(d).
This means that, as of the final week of July 2025, GPAI model providers finally have a complete picture of not only what is required to comply with Articles 53 and 55, but also the approach that the Commission is taking in relation to compliance and enforcement of the 2 August deadline. There is the offer of flexibility in many areas, but not in all.
Why should GPAI providers sign the Code?
While the purpose of the Code is to provide GPAI model providers with an off-the-shelf set of tools to demonstrate adherence to Articles 53 and 55 of the Act, the Code is not mandatory. GPAI model providers are free not to sign the Code, and to demonstrate that they are compliant with Articles 53 and 55 in their own way.
Nevertheless, the Commission is strongly encouraging GPAI model providers to sign. The Guidelines emphasise that a signatory to the Code will benefit from increased trust from the Commission, and that the Commission will focus its enforcement activities on monitoring a signatory’s adherence to the Code. In the event of any enforcement action, the Commission may take into account the fact that a provider is a signatory to the Code as a mitigating factor in setting the amount of fines.
Conversely, the Guidelines also emphasise the disadvantages of not signing the Code. In particular, non-signatories will be expected to report their compliance measures to the AI Office, and there is a reference to the provider having to carry out a gap analysis comparing its measures to those in the Code. The Guidelines refer, somewhat ominously, to the prospect of non-signatories being subject to a larger number of requests for information and requests for access to conduct model evaluations by the AI Office. In short, the message from the Commission appears to be that whilst signing the Code brings its own set of obligations and responsibilities, not signing it may be more trouble than it’s worth.
The Template has a different legal status to the Code. Article 53(1)(d) of the Act places an obligation on GPAI providers to make publicly available “a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”. So completion of the Template is a mandatory requirement of the Act.
At the time of writing, most of the leading GPAI model providers have indicated they will sign or have signed the Code, including Google, OpenAI, Microsoft, Anthropic and Mistral. For GPAI model providers looking to sign, the Commission has provided a separate FAQ and made a template signatory form available.
When does the Code apply?
The Code is to be implemented by the deadlines in the table below. The Guidelines and accompanying FAQ explain that the AI Office will be operating what amounts to an informal grace period for signatories to the Code. The message appears to be that the AI Office is prepared to work with signatories in a collaborative manner, recognising the challenges of complying with the Code, both for models placed on the market prior to, and after, 2 August.
The Guidelines are clear that this “grace period” will only last until 2 August 2026. From that date the Commission will be prepared to use the Act’s enforcement powers to enforce full compliance by GPAI model providers, including through fines.
The Guidelines also emphasise that the obligation in Article 52(1) on providers of GPAI models with systemic risk to notify the AI Office within two weeks of the model meeting the systemic risk criteria is still mandatory. The AI Office is expecting to receive the initial tranche of notifications by 16 August.
For providers of GPAI models with systemic risk that are still in development or even at an early planning stage, preparing and issuing these notifications before 16 August will likely be among the first active compliance steps to take under the Act.
| GPAI models | Not systemic risk | Systemic risk |
|---|---|---|
| Placed on the market before 2 August 2025 | Compliance by 2 August 2027. The AI Office is dedicated to supporting providers in taking the necessary steps to comply with their obligations by 2 August 2027. Providers of general-purpose AI models placed on the market before 2 August 2025 are not required to conduct retraining or unlearning of models where it is not possible to do this for actions performed in the past, where some of the information about the training data is not available, or where its retrieval would cause the provider disproportionate burden. Such instances must be clearly disclosed and justified in the copyright policy and in the summary of the content used for training. | Compliance by 2 August 2027. |
| Placed on the market on or after 2 August 2025 | Compliance from 2 August 2025, but the AI Office is willing to exercise discretion towards signatories. In the first year from 2 August 2025 onwards, the AI Office will offer to collaborate closely, in particular with providers who adhere to the Code of Practice, to ensure that models can be placed on the EU market without delay. If providers adhering to the Code do not fully implement all commitments immediately, the AI Office will not consider them to have broken their commitments under the Code. Instead, the AI Office will consider them to act in good faith and will be ready to collaborate to ensure full compliance. | Compliance from 2 August 2025, but the AI Office is willing to exercise discretion towards signatories. In particular, providers who, on 2 August 2025, have trained, are in the process of training, or are planning to train a general-purpose AI model with a view to placing it on the market after 2 August 2025, and who anticipate difficulties in complying with the obligations for providers of general-purpose AI models, especially those with systemic risk, should proactively inform the AI Office regarding how and when they will take the necessary steps to comply with their obligations. In the specific case where a provider has not placed on the market a general-purpose AI model with systemic risk before 2 August 2025, the Commission will give particular consideration to their challenging situation, in particular to allow a timely placing on the market. |
Transparency
The centrepiece of the Transparency chapter is a Model Documentation Form. The form gives providers a template to document the information needed to comply with the AI Act’s transparency obligations (Article 53(1)(a) and (b) and the corresponding Annexes XI and XII of the Act). The form allows providers to document the information required under the Act in one place and covers information relating to:
- Architecture, design and size of the model
- Distribution channels and how the model will be licensed
- Acceptable usage policies
- Information to assist downstream integration of the model
- A general description of the training process, including methodologies, techniques and the key design choices made in model training
- A very high-level description of the types and provenance of data used for training, testing and validation
The Model Documentation Form includes recommended word counts for many of its fields, none exceeding 400 words. This makes it clear that the expectation is for a concise summary only.
The Transparency chapter also makes clear that national competent authorities do not have the right to demand information from Signatories directly. They only have the right to ask the AI Office to request it, and only where strictly necessary for the exercise of their supervisory tasks. There is also an emphasis on the information requested being necessary for the AI Office, or the national competent authorities on whose behalf it acts, to fulfil their tasks “at the time of the request”. It is hard not to read these clarifications as intended to dissuade national regulators from trying to use the Transparency obligations too broadly.
Importantly, the Model Documentation Form, or information referred to in it, does not have to be made generally publicly available. Signatories are “encouraged to consider whether the documented information can be disclosed, in whole or in part, to the public to promote public transparency”.
Safety and Security
The Safety and Security chapter sets out the framework for providers of GPAI models deemed to pose systemic risk. The chapter’s purpose is to translate the Act’s high-level obligations into concrete, state-of-the-art practices, including establishing a formal safety and security framework, conducting structured risk assessments, and categorising risk tiers in advance.
Providers must identify, analyse, and mitigate systemic risks - such as chemical, biological, radiological and nuclear threats, loss of control, cyber-offence, or harmful manipulation - before deployment, and continue monitoring models post-release for new hazards.
The chapter mandates robust incident-response procedures, timely incident reporting, cybersecurity safeguards, and, in most cases, external independent evaluation of both models and safety mechanisms.
In practice, most major AI labs will need to embed these commitments into their development and deployment lifecycles - co-ordinating internal governance, securing third-party audits, continuously tracking system behaviour, and cooperating with regulators - to ensure that frontier models are released only once systemic risk is understood, controlled, and continuously overseen.
What’s next?
Following the Code will enable GPAI model providers to demonstrate compliance with the EU AI Act, increasing legal certainty and reducing the administrative burden on in-house legal and compliance teams.
Given the rapid advancements in AI, particularly breakthroughs in agentic AI, there are still likely to be issues that existing legislation does not address, including infringing content, deepfakes, and the protection of copyright owners whose work is used to train generative AI. Continued engagement from the European Commission will be crucial to ensure the framework remains fit for purpose as the technology develops and the Act is tested.
We expect that once the summer holiday period is over, attention will turn to the upcoming publication of the guidelines on high-risk AI, and also to the publication timeline of the Act’s technical standards, which have been repeatedly delayed and are now promised in early 2026.