The European AI Office is currently facilitating the drawing-up of the General-Purpose AI Code of Practice (the “Code”). The European Commission published the first draft of the Code on 14 November 2024. Further drafts will follow, with the final version of the Code expected by 2 May 2025, in accordance with Article 56 (Codes of Practice) of the EU AI Act.
The independent experts preparing the Code developed this initial version based on contributions from providers of AI models, as the addressees of the Code, and also took account of the evolving literature and international approaches to AI regulation.
The AI Office intends the Code to be future-proof, so that it remains appropriate for the next generation of AI models released after the May 2025 deadline.
Addressees of the Code
The Code provides guidance on compliance with the obligations set forth in Articles 53 (Obligations for Providers of General-Purpose AI Models) and 55 (Obligations for Providers of General-Purpose AI Models with Systemic Risk) of the AI Act for providers of general-purpose AI (“GPAI”) models and GPAI models with systemic risk (the “Providers”). Equally, the Code will provide the basis on which the AI Office will assess compliance by the Providers.
Article 3(63) of the AI Act defines a GPAI model as an AI model which “… displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications…”. The definition specifically excludes AI models used before their placing on the market for the sole purpose of research, development and prototyping activities, but covers models that are placed on the market (i.e. first made available on the EU market) following such activities.
The definition of “systemic risk” is provided in Article 3(65) of the AI Act. Under recital 110, systemic risks include, but are not limited to, “any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content”.
The purpose of the Code
In accordance with Article 56 (Codes of Practice) of the AI Act, the Code should detail the manner in which the Providers may comply with their obligations under the AI Act. Importantly, the Code serves only as guidance for the Providers in demonstrating compliance; adherence to it therefore does not constitute conclusive evidence of compliance with the AI Act.
The Code’s objectives emphasise that the Providers should ensure and demonstrate compliance with their obligations under the AI Act, enable the AI Office to assess compliance, integrate AI models effectively, comply with EU copyright law, and continuously manage systemic risks.
Demonstrating compliance
The AI Act obliges the Providers to provide technical information, including information on their training data and testing, and to establish a policy to ensure compliance with EU copyright law.
High-level principles of the Code include:
- Alignment with EU principles and values, including the Charter of Fundamental Rights of the European Union, the Treaty on European Union and the Treaty on the Functioning of the European Union.
- Alignment with the AI Act and international approaches, for example with the standards or metrics developed by AI Safety Institutes.
- Proportionality to risks, so that all measures within the Code are suitable and do not impose an excessive burden in relation to the risk, including proportionality of measures and key performance indicators (KPIs) to the size of the Provider.
- Ensuring that the Code is future-proof, in that it strikes an appropriate balance between concrete requirements and the flexibility to adapt the rules to technological developments.
- Enablement and support of cooperation between the different stakeholders (including industry, academia, civil society and standardisation organisations), encouragement of further transparency and the general support and growth of the AI safety ecosystem.
Copyright law compliance
Under Article 53(1)(c) of the AI Act, providers of general-purpose AI models must “put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790”. Article 53(1)(d) adds that the Providers must “draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office”.
In this first draft, the Code only addresses measures related to Article 53(1)(c) of the AI Act, i.e. the copyright policy obligation. In this respect, the Code proposes the following three measures, each coupled with several sub-measures, to ensure that AI models comply with copyright law throughout their development and use:
- Put in place a copyright policy. The Providers must create and implement an internal policy to comply with EU copyright law, covering the entire lifecycle of any GPAI model. Before contracting with third parties for the use of data sets for GPAI model development, the Providers must conduct reasonable copyright due diligence. The Providers (except for small and medium-sized enterprises) should also implement reasonable measures to mitigate the risk of, and prevent, downstream copyright infringement by AI models.
- Compliance with the limits of the text-and-data mining (TDM) exception in Article 4 of Directive (EU) 2019/790. The Providers should only use crawlers that respect the Robot Exclusion Protocol (robots.txt), exclude pirated sources from their crawling activities, and ensure that crawler exclusions do not negatively affect content findability in search engines. The Providers should also make best efforts to comply with machine-readable rights reservations for publicly available online content (see the illustrative sketch after this list).
- Transparency in relation to opt-outs. The Providers should publicly share information about compliance with rights reservations. They should also provide details about their crawlers and those crawlers’ robots.txt features. The Code recommends that the Providers designate a contact point for rightsholders to communicate and handle complaints. Transparency also means documenting, and providing upon the AI Office’s request, information about data sources used for training, testing and validation, and about authorisations to access and use protected content for the development of the GPAI model.
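The Robot Exclusion Protocol referenced above is a simple machine-readable convention: a website publishes a robots.txt file stating which user agents may crawl which paths. Purely as an illustration, and not as a method prescribed by the Code, the following minimal Python sketch shows how a crawler could check a site’s robots.txt before fetching a page, using the standard library’s urllib.robotparser; the user-agent name “ExampleGPAIBot” and the example URL are hypothetical placeholders.

```python
# Minimal sketch of Robot Exclusion Protocol (robots.txt) compliance.
# "ExampleGPAIBot" and the example URL are hypothetical placeholders.
#
# A rightsholder reserving rights against this crawler could publish, in
# its site's robots.txt:
#     User-agent: ExampleGPAIBot
#     Disallow: /
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler name

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt permits USER_AGENT to fetch url."""
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser(robots_url)
    try:
        parser.read()  # download and parse the site's robots.txt
    except OSError:
        return False  # robots.txt unreachable: conservatively decline to crawl
    return parser.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    print(may_fetch("https://example.com/articles/sample-page"))
```

Declining to crawl when robots.txt is unreachable is a conservative design choice of this sketch, not a requirement stated in the draft Code.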
Conclusions
In contrast to the UK’s current approach to AI regulation, which largely relies on self-regulation with minimal government intervention, the AI Act, in conjunction with the Code, brings forth a prescriptive and centralised framework of measures, providing clear guidance for the Providers across Member States.
The finalised Code will be subject to regular review by the AI Office, which may encourage and facilitate updates on an ongoing basis to reflect advances in AI technology, societal changes, and emerging systemic risks.
Whilst the Act and the Code will not apply directly in the UK following Brexit, they will apply to UK companies operating in the EU. In any event, once finalised, the Code is likely to have widespread international influence, and other global economies may adopt similar measures. As reported by João Pedro Quintais here, the AI Act aims to have extraterritorial reach, although there are doubts as to how that will play out in practice given copyright’s core principle of territoriality. As such, UK companies would be wise to opt for pre-emptive compliance with the Code.
The first draft of the Code is available here. The AI Office has also prepared a Q&A to help the Providers comply with the AI Act. The Q&A can be accessed via this link.