AI has been in the news a great deal lately, so we thought it would be useful to write a short overview of the EU’s AI Act: what it is, who it applies to, how it aims to manage some of the risks associated with AI (particularly the AI practices the Act prohibits), and whether, in our view, the Act is robust enough to protect people’s fundamental rights.
What is the EU AI Act?
The EU AI Act is the world’s first binding horizontal regulation on AI. The Act aims to foster responsible development and deployment of artificial intelligence in the EU, and it came into force on the 1st August 2024. It sets a common framework for the use and supply of AI systems in the EU.
The Act classifies AI systems into categories, with different requirements and obligations depending on whether a system poses an unacceptable risk to fundamental rights and EU values, a high risk (meaning it can have a detrimental impact on people’s health, safety or fundamental rights), or low or no risk. The AI Act also lays down specific rules for general-purpose AI models, although these are not the focus of this article; we will cover them in subsequent articles.
So who does the EU AI Act apply to?
The EU AI Act applies to private organisations as well as public authorities. It applies primarily to providers and deployers of AI systems and general purpose artificial intelligence (GPAI) models, whether private organisations or public authorities, that either:
- put AI systems or GPAI models into service, or place them on the EU market, where the provider or deployer is established in the EU; or
- are based outside the EU, but the output produced by their AI systems or GPAI models is used in the EU.
AI systems that are safety components of products (or are themselves products) covered by certain EU product legislation are treated separately. That legislation covers:
- aviation security and civil aviation (Regulation 300/2008 and Regulation 2018/1139),
- bicycles, tricycles and quadricycles (Regulation 168/2013),
- agricultural and forestry vehicles (Regulation 167/2013),
- marine equipment (Directive 2014/90),
- the interoperability of the rail system (Directive 2016/797), and
- motor vehicles and their trailers (Regulations 2018/858 and 2019/2144).
These AI systems are classified as high-risk, but they are only subject to Article 6(1), Articles 102 to 109 and Article 112 of the EU AI Act.
Articles 102 to 109 essentially provide that, when EU legislators introduce implementing acts under that product legislation which relate to these high-risk AI systems, the requirements set out in Section 2 of Chapter III of the EU AI Act need to be taken into account.
In terms of product development, the provisions of the EU AI Act don’t apply to AI systems while they are being researched, developed or tested, prior to being placed on the market or put into service. However, once a system is being tested in real-world conditions, the EU AI Act does apply.
EU data protection law continues to apply alongside the EU AI Act, and provides an important safeguard.
The EU AI Act also operates alongside EU consumer protection and product safety law. For this reason, it seems probable that the Unfair Commercial Practices Directive will evolve to also cover AI harms, particularly when those harms involve AI systems and models being used in ways that are unfair, misleading or aggressive.
When does the EU AI Act NOT apply?
Although the EU AI Act applies to private organisations as well as public authorities, it doesn’t apply to public authorities in a third country, nor to international organisations, where those authorities or organisations use AI systems in the framework of international cooperation or agreements:
- for law enforcement and judicial cooperation with the Union or
- for law enforcement and judicial cooperation with one or more Member States.
However, in order to be exempt from the provisions of the EU AI Act, such a third country or international organisation needs to provide adequate safeguards with respect to the protection of the fundamental rights and freedoms of individuals. What constitutes “adequate safeguards” is not specified in the EU AI Act itself, although the recently developed Codes of Conduct, as well as future CJEU, ECtHR and national case law, are likely to flesh out the exact nature of adequate safeguards going forward.
The EU AI Act also doesn’t apply to areas outside the scope of Union law, and doesn’t affect the competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member States with carrying out tasks in relation to those competences. So if a private limited company is providing national security AI services to a Member State then this is likely to be outside the scope of the EU AI Act.
Crucially, the EU AI Act does not apply to AI systems that are placed on the market, put into service, or used, with or without modification, exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities. Those entities can be either public or private; the key criterion is that they are using AI systems exclusively for military, defence or national security purposes. If they are, then they are likely to be outside the scope of the EU AI Act.
It also doesn’t apply to AI systems and models (including their output) which are specifically developed and put into service for the sole purpose of scientific research and development.
Finally, the EU AI Act does not apply to AI systems released under free and open-source licences, unless:
- they are placed on the market or
- they are put into service as high-risk AI systems or
- they are put into service as an AI system that falls under Article 5 (which lists AI practices that are prohibited) or Article 50 (which covers transparency requirements).
Diving into the details
So let’s look at the provisions of the EU AI Act in more detail. The Act includes the following definitions of an AI system and a general-purpose AI system:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; (Article 3(1))
‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems; (Article 3(66))
A risk-based approach to AI protections
The EU AI Act adopts a risk-based approach and classifies AI systems into several risk categories, with different degrees of regulation applying depending on the classified risk level. In relation to AI systems, the risk categories are:
- Unacceptable risk – AI systems that violate EU fundamental rights and values; these are prohibited.
- High risk – AI systems that impact health, safety or fundamental rights; these require a conformity assessment and will be subject to post-market monitoring.
- Transparency risk – AI systems that carry a risk of impersonation, manipulation or deception, such as chatbots, deep fakes and AI-generated content; these will be subject to information and transparency obligations.
- Minimal risk – common uses of AI, such as spam filters and recommender systems; these will not be subject to specific regulation.
Caption: A graphic of the risk hierarchy created by the EU AI Act (Source: European Commission)
The EU AI Act prohibitions
The EU AI Act contains some broadly defined prohibitions (and caveated exceptions) which are supposed to target the worst AI abuses. These include:
- AI Systems using subliminal, manipulative or deceptive techniques to distort people’s or groups of people’s behaviour and impair informed decision making, leading to significant harm.
- AI systems exploiting vulnerabilities due to age, disability, or social or economic situations, causing significant harm;
- Biometric categorisation systems inferring race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation (except for lawful labelling or filtering for law-enforcement purposes);
- AI systems evaluating or classifying individuals or groups based on social behaviour or personal characteristics, leading to treatment that is detrimental or unfavourable in unrelated contexts, or that is unjustified or disproportionate to their behaviour;
- ‘Real-time’ remote biometric identification in public spaces for law enforcement (except for specific necessary objectives such as searching for victims of abduction, sexual exploitation or missing persons, preventing certain substantial and imminent threats to safety, or identifying suspects in serious crimes);
- AI systems assessing the risk of individuals committing criminal offences based solely on profiling or personality traits and characteristics (except when supporting human assessments based on objective, verifiable facts linked to a criminal activity);
- AI systems creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage;
- AI systems inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
Our view on the EU AI Act prohibitions
While these prohibitions sound great, it is worth bearing in mind that the EU AI Act does not apply if the AI systems are placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
In the UK, ever since the Human Rights Act 1998 came into force in October 2000, followed by the “War on Terror”, the UK courts have been embroiled in a battle between citizens and NGOs seeking to uphold people’s fundamental rights, and governments arguing that military, defence or national security purposes, and particularly the “War on Terror”, justify all manner of infringements of people’s fundamental rights.
For the last twenty-five years, the UK courts, the European Court of Human Rights and the EU courts have had to continually push back against government over-reach, and it seems likely that similar battles will take place over AI.
It is also worth considering AI and the EU’s deference to military, defence and national security purposes found in the EU’s AI Act, within the wider context of a rise in authoritarian and far right regimes. Two recent academic papers are relevant here.
The article “When Do Parties Lie? Misinformation and Radical-Right Populism Across 26 Countries” published on the 13th January 2025 by Petter Törnberg and Juliana Chueri, examines which parties are more likely to spread misinformation, by drawing on a comprehensive database of 32M tweets from parliamentarians in 26 countries, spanning 6 years and several election periods.
They found that radical-right populism is the strongest determinant for the propensity to spread misinformation. Populism, left-wing populism, and right-wing politics were not linked to the spread of misinformation.
The authors concluded that political misinformation should be understood as part and parcel of the current wave of radical-right populism and its opposition to liberal democratic institutions. Misinformation therefore appears to be a deliberate strategic choice of radical-right populists, and it seems very likely that radical-right politicians will use the power of AI to spread disinformation.
If the enforcement mechanisms of the EU’s AI Act are not robust enough (and there have been criticisms of the EU AI Act’s provisions on that score), then AI could end up being a tool that helps to usher in radical-right governments across Europe.
Secondly, the paper “Strategies of Political Control and Regime Survival in Autocracies”, published on the 16th January 2025 and authored by Wooseok Kim, Eugenia Nazrullaeva, Anja Neundorf, Ksenia Northmore-Ball and Katerina Tertytchnaya, aimed to comprehensively map six strategies used by autocratic governments across 229 regimes from 1946 to 2010. These six strategies were:
- Repression of (1) physical integrity rights and (2) civil liberty rights;
- Co-optation via (3) formal institutions and (4) the distribution of resources; and
- Indoctrination through (5) education and (6) the media
68% of the world’s population live under autocratic regimes. Autocratic regimes have also changed in terms of their control mechanisms, and one of the findings of this paper was that media indoctrination and the repression of civil liberty rights appeared to make the longevity of autocratic regimes more likely. It seems probable that autocratic regimes, particularly those already in power, will seek to exploit and abuse AI, and undermine civil liberty rights in doing so. Modern autocracies don’t tend to use an iron fist to maintain control – instead they seek to control the information ecosystem and maintain control that way, whilst othering minority groups.
Given the broad military, defence and national security exceptions, which disapply the prohibitions found in the EU’s AI Act, it seems likely that autocratic governments will seek to use AI to maintain their regimes, and it is therefore legitimate to question whether the protection of fundamental rights is sufficiently robust to counter the threat that AI poses in the hands of autocratic and radical-right regimes.
It will also be interesting to observe what role and stance the courts take when cases are brought challenging AI abuses. Will they uphold citizens’ rights, or defer to the national security, military and defence exception?
Some other concerns regarding the EU’s AI Act
A number of leading academics have raised questions regarding the AI Act’s final text and the implementation challenges lying ahead. One leading academic, P. Hacker, who is cited in the European Parliament briefing on the EU AI Act, welcomes the final Act but stresses that:
- alignment with existing sectoral regulation is incomplete (which results in unnecessary and highly detrimental red tape);
- compliance costs will be substantial, especially for SMEs developing narrow AI models;
- the threshold of 10^25 FLOPs for a default categorisation of systemic risk models is too high; and
- European supervision and monitoring of remote biometric identification is needed to avoid the risk that some Member States circumvent the rules enshrined in the AI Act (arguably those Member States with autocratic or radical-right regimes).
Other academics have argued that the AI Act’s implementation will require a robust taxonomy setting out the correlation between risk classifications and model capabilities, and an assessment of the development of open-source models.
Additionally, academics have called for the AI Act to be complemented by:
- an additional set of exercisable rights to protect citizens from AI-generated harm,
- additional legislation to control the potential environmental impact of training AI models,
- additional legislation to protect workers’ rights and to regulate employers’ use of AI to monitor their staff,
- additional legislation or guidance to further define the requirements that research organisations must comply with in order to benefit from the research exemption.
- further legislation, on the basis that some argue that the AI Act does not go far enough in preventing and/or mitigating the specific risks associated with chatbots.
- consideration whether standardisation and codification processes are likely to include properly representative groups of stakeholders, or whether they are likely to only reflect a Eurocentric, white viewpoint.
- additional legislation and guidance specifically covering fundamental rights impact assessments (FRIAs), as such guidance is currently lacking.
Key questions such as setting common terminology and addressing dual use and military AI applications have also been raised in this respect.
Conclusion
The EU’s AI Act is a legislative first. As a first step in managing AI risks it is good, but a number of gaps remain, and the failure to include provisions allowing individuals to bring claims against AI system providers for the AI harms they suffer appears to be a major omission. There is a distinct lack of equality of arms between individuals and governments using AI systems in ways that their citizens may disagree with, for example to spread misinformation or to undermine fundamental human rights. The role of the national courts, the European Court of Human Rights and the CJEU is also likely to be important going forward in clarifying whether fundamental rights or national security takes precedence.
AI has the potential to deliver a lot of good for humanity as a whole, but it also carries risks of substantial harm, and whilst some of these risks are addressed in the EU AI Act, many questions remain about how it will work in practice.
Answering those questions is likely to keep the EU courts, the Strasbourg court and national courts busy for years to come as defining the boundaries of the EU’s AI Act is likely to be done through a combination of case law, further implementing legislation, voluntary codes of conduct, and soft law legislative guidance. Given time, many of these questions will be resolved, but at this early stage we have more questions than answers.