
Mariana Coelho (Master's in European Union Law at the School of Law of University of Minho)
1. Preliminary considerations
The rise of new technologies has consistently posed new challenges to human rights and democratic values all over the world. With the widespread use of AI technologies, it has never been easier to create manipulated content, namely of a sexual nature. And as deepfakes prove increasingly realistic, the risk keeps growing.
In fact, the last few days have seen the emergence of a new trend on social media websites, such as TikTok and X: mukbang[1] and/or ASMR[2] videos created entirely through AI systems, featuring predominantly women of color, and replicating their mannerisms and accents. In these videos, AI models even try to convince viewers that they are real people, with most of them imbued with an unimaginable level of realism. If it is this easy to create seemingly innocent videos that can blur or even virtually erase the line between real people and AI models, then the issue of pornographic deepfakes is, or should at least be, now more than ever, at the center of public discourse, with women’s rights at risk at levels never before seen.
Digital sexual violence targeting women has been a persistent and widespread concern for several years, and its ongoing prevalence has elevated it to a priority within the EU’s digital policy agenda. Through political efforts, legislative action and digital literacy initiatives, the EU has undoubtedly become “the world’s leading tech watchdog”.[3] In the face of how quickly violent discourse seems to be spreading through multiple societies, the European Parliament has increased pressure on the Commission and Member States to act more quickly and aggressively on the matter of women’s rights, with Irish Member of the European Parliament Maria Walsh calling, in December 2024, for stricter criminal penalties for those who create and disseminate pornographic deepfakes.[4] The MEP drew attention to the fact that the legal frameworks currently in place in the EU, no matter how revolutionary, have proven insufficient to combat malicious uses of technology that harass, defame and exclude women from public discourse and professional life every day.
The creation of pornographic deepfakes starts with collecting data, especially biometric data. AI deepfake models rely heavily on it, particularly facial geometry, voice patterns and micro-expressions, since the reproduction of features like gaze direction and lip movement can fool not only human eyes but also automated detection and authentication systems. Studies confirm that biometric-rich deepfakes can spoof authentication systems, especially ones trained on static biometric snapshots.[5]
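To make this vulnerability concrete, consider the minimal sketch below (in Python; the “embedding” and every name are deliberately toy stand-ins for a real biometric pipeline, not an actual system). A matcher trained on static snapshots only asks whether two images land close together in embedding space, so a synthetic face that faithfully reproduces the victim’s facial geometry passes the check: nothing verifies that a live person is present.

```python
# Toy illustration of snapshot-based face verification being spoofed.
# Everything here (the "embedding", the threshold, the data) is a
# hypothetical simplification, not a real authentication system.
import numpy as np

def toy_embedding(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face-embedding network: flatten and L2-normalise."""
    v = image.astype(float).ravel()
    return v / np.linalg.norm(v)

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.95) -> bool:
    """Threshold matcher typical of systems trained on static snapshots."""
    similarity = float(toy_embedding(probe) @ toy_embedding(enrolled))
    return similarity >= threshold

rng = np.random.default_rng(0)
enrolled_face = rng.random((32, 32))                           # enrolment snapshot
deepfake_face = enrolled_face + rng.normal(0, 0.01, (32, 32))  # faithful synthetic copy

# The deepfake lands within the similarity threshold and is accepted:
# no liveness or provenance signal is ever consulted.
print(verify(deepfake_face, enrolled_face))  # True
```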
With this in mind, it is worth noting that the collection of biometric data became a hot topic in the EU in the past year, with World’s (formerly known as Worldcoin) iris collection activities having been suspended and/or prohibited in several Member States, due to the sensitive nature of biometric data and the potential for misuse. World Network “consists of a privacy-preserving digital identity network (World ID) built on proof of personhood and, where laws allow, a digital currency (WLD).”[6] While the goal is to allow individuals to assert that they are a real person, distinct from all other real people, multiple concerns and complaints surfaced against the company: on the one hand, parents claimed that minors were “selling” their irises without consent; on the other, there was a clear lack of information regarding the terms of data collection.
Germany’s BayLDA (Bavarian Data Protection Authority), acting as lead GDPR authority [according to Article 56(1) GDPR, since World’s European headquarters and manufacturing facility are located in Bavaria], launched a formal investigation into the company, with multiple Member States following suit. In December 2024, BayLDA concluded that World’s processing of the iris scans did not comply with GDPR requirements, since it “entails a number of fundamental data protection risks for a large number of data subjects”,[7] therefore ordering the company to delete all data. Even though the decision is not yet definitive, due to an appeal filed by World, it marks a historic moment in the protection of citizens’ rights at a time when not enough safeguards seem to have been put in place.
Even though World claims to have implemented several measures to prevent the misuse of collected data, the risks cannot be ignored: if data is used to train generative AI models, deepfakes will become more and more realistic, with even microfeatures being replicated. It is, therefore, essential to look at solutions like the one offered by World with caution, since, as Maria Inês Costa claims, “(…) in the face of an “offer of proof of personhood”, we should be concerned as to whether this is weakening our very own autonomy and dignity – in essence, our humanness – or if it will enhance and, on the contrary, protect our life in coexistence with technology”.[8]
2. Pornographic or sexual deepfakes and IBSA
While deepfakes are concerning in all their dimensions, research has shown that women are the main victims: a study by Home Security Heroes reveals a staggering 550% increase in AI-manipulated images between 2019 and 2023, with women being the subjects of 99% of deepfake pornography.
Revenge porn became a popular term in public discourse around the early 2010s, as multiple incidents involving jealous ex-partners sharing nonconsensual intimate pictures and videos to get revenge on their ex-lovers came to light. And while this terminology was perfectly adequate for the phenomenon in its original form, with the evolution of technology, especially the widespread use of AI, academics began to note that gender-based cyberviolence looks very different today: not all perpetrators are ex-partners, nor are they always moved by revenge; and not all disseminated content can be classified as pornography.[9] It is also worth noting that the term revenge implies that the victim did something worth retaliating against, which reinforces misogynistic and victim-blaming perceptions.[10]
While terminology should be thoroughly thought out so as to accommodate and respond to societal needs at a given time, the wording is not nearly as important as its definition. A broad spectrum of new terms has been proposed, and Image-Based Sexual Abuse (IBSA)[11] seems to be the most appropriate, since it encompasses “all forms of the non-consensual creating, taking or sharing intimate images or videos, including altered or manipulated media, and threats to distribute such material.”[12] Since it serves as an umbrella term, AI-IBSA, which refers to digitally altered content that is fake but depicts real people, can be placed under it.
The issue of deepfakes can be inserted into the realm of Non-consensual Synthetic Intimate Images (NSII), which are “intimate images of a person that were created using technology such as AI or photoshop without the consent of the person featured in them”.[13] At first, deepfakes – a combination of “deep learning” and “fake” – referred exclusively to the altering of pornographic videos, in the sense that real non-consenting people’s faces were swapped onto the bodies of (presumably) consenting adults who engaged in sexual acts, recorded them and published them. Nowadays, however, GenAI allows anyone to create fully original images depicting someone naked, in sexually suggestive settings and/or engaging in sexual activities. “Nudifying” apps that use AI to undress people have seen a surge in popularity, and many of them only work on women. We can therefore describe deepfakes as images that “leverage powerful techniques from machine learning and AI to manipulate or generate visual and audio content with a high potential to deceive”.[14] And, if sexual deepfakes started out by targeting women celebrities and public figures, any woman, no matter her age, occupation or relationship to the offender, is now at serious risk of becoming a victim simply for having an online presence.
In response to this increasingly concerning issue, the EU has taken political and regulatory measures to combat all forms of violence against women, namely online violence. In 2023, after a long and arduous road, the EU’s accession to the Istanbul Convention on preventing and combating violence against women and domestic violence was finalised, which marked a very important, even if mostly symbolic, first step in the protection of women’s rights in the online sphere. The AI Act and the Digital Services Act (DSA) are also important legislative documents that, even without taking the protection of women’s rights as their central objective, contain important provisions to promote an online space that follows the same key social values and rights that are essential to the EU. It is also worth mentioning the Directive on combating violence against women and domestic violence (2024/1385), which recognises violence against women as a violation of fundamental rights[15] and helps to address most gaps left by the previously mentioned legislation in terms of the protection of women’s rights in the online sphere.[16]
3. AI Act
The AI Act, the first comprehensive legislation on the subject worldwide, pursues the goal of human-centered AI. Looking at its text, it addresses pornographic deepfakes through three key elements: the definition of deepfakes; the imposition of transparency obligations on AI providers and deployers (the latter including deepfake creators); and the treatment of the issue in its recitals.[17]
According to Article 3(60) of the AI Act, a “deep fake” consists of “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.” This definition allows us to extract the four most important characteristics of deepfakes (in light of EU law): they need to be produced or manipulated using AI techniques (technical aspect); using image, audio or video content (typological aspect); referring to a person, object, place, entity or event (subjective aspect); and they need to falsely appear real (effectual aspect).
Although the definition of the phenomenon is undoubtedly very important for tackling it, the main contribution of the AI Act to the fight against pornographic deepfakes is the imposition of transparency obligations on providers and deployers of “certain AI systems”. Looking at Article 50(2), “Providers of AI systems, (…), generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable, as far as this is technically feasible, …” (emphasis added).[18] While it is true that the provisions of this Article, by focusing on making content detectable and identifiable, create a synergy with the DSA, it should also be noted that the scope of the responsibilities mentioned is very vague and could benefit from further development.
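As an illustration of what “marked in a machine-readable format” could mean in practice, the sketch below attaches a provenance flag to a PNG’s metadata using Python’s Pillow library. The key name “ai_generated” is an invented placeholder, and this naive approach falls well short of the “effective, robust and reliable” technical solutions the Article demands (which point towards invisible watermarking or cryptographically signed provenance manifests such as C2PA); it only shows the minimal shape of marking on the provider side and detection on the platform side.

```python
# A deliberately naive sketch of machine-readable marking in the spirit
# of Article 50(2) AI Act, using PNG text metadata via Pillow. The
# metadata key "ai_generated" is a hypothetical placeholder, not a
# standardised marker.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_synthetic(src: str, dst: str) -> None:
    """Provider side: attach a machine-readable 'synthetic content' flag."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # the marking embedded at generation time
    img.save(dst, pnginfo=meta)

def is_marked_synthetic(path: str) -> bool:
    """Detection side: check for the provenance flag (e.g. by a platform)."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai_generated") == "true"
```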
However, Article 50(4) seems to present the obligations in a more appropriate way, by providing that “Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated.” This same Article limits the obligation in cases where the use of the AI system is permitted by law and in cases that fall under artistic and satirical content, in order to protect “the display or enjoyment of the work”. This content labeling obligation is central in the fight against sexual gender-based cyberviolence, enhancing accountability and providing digital platforms with clear frameworks for content moderation.
4. Shortcomings of the AI Act
While the AI Act is, undoubtedly, one of the most anticipated and discussed pieces of EU legislation to date, due to what it seeks to achieve, progress can only happen when one adopts a critical mindset and acknowledges that this Act is not the solution to all AI-related dangers and problems.
Firstly, it is worth mentioning that, while the definition of deepfake seems adequate to the needs of European societies, at least in light of current technological developments, some legislative inaccuracies have been pointed out by academics. In the final version of the AI Act, deepfakes are mentioned not only in Article 3, but also in recital 134, which refers to an “AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful”. As Labuz mentions, the addition of the word “appreciably” might change the meaning of the definition, by “giving a different value to the identified resemblance” and therefore opening the door to conflicting and unclear interpretations.[19]
One of the biggest criticisms of this Act is the number of exemptions it contains, the most central of which is its non-applicability to end users. As Article 2(10) states, the AI Act “does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity”. While the shift of responsibility to providers and deployers is commendable, platforms cannot bear the full burden of preventing illegal content such as pornographic deepfakes, especially when end users not only use AI to undermine women’s rights on a daily basis, but also use that same technology to remove labels and watermarks, thereby violating the provisions of this Regulation.
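Continuing the hypothetical sketch from section 3, the fragility this exemption exposes is easy to demonstrate: simply re-encoding a marked file without copying its metadata silently discards the flag, requiring no specialised tooling at all.

```python
# Stripping the naive marking from the earlier sketch: Pillow's save()
# does not copy PNG text chunks unless explicitly asked to, so a plain
# re-save discards the "ai_generated" flag.
from PIL import Image

def strip_marking(src: str, dst: str) -> None:
    Image.open(src).save(dst)  # metadata flag is silently dropped

# After this, is_marked_synthetic(dst) from the earlier sketch
# would return False, even though the content is unchanged.
```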
It is also worth pointing out that although this is a Regulation (and therefore does not require transposition), Member States will still have a crucial role to play in terms of enforcement, which could prove decisive, given the strong dependence on the adoption of harmonised provisions to guide operators in complying with this legislation.[20]
Deepfakes are not a uniform phenomenon, since their harmfulness is deeply contextual, and, for a long time, this posed the question of whether AI systems themselves have intrinsic qualities that should warrant a high-risk or even prohibited classification, or whether it is only their misuse that might cause such harms. In the end, the AI Act classifies deepfakes exclusively in the “limited risk” category [vide Article 50(4)], which tracks with its “holistic approach to AI systems based on their technological properties rather than the context of their use”.[21] While there have been significant efforts in the recitals to better specify and indicate the risks that might be associated with deepfakes, especially in recitals 120, 132 and 133, it is especially telling that the focus is on democracy and electoral processes, and there is a clear lack of reference to pornographic deepfakes, which are arguably one of the most common and dangerous uses of this technology today. This means that deepfakes are only subject to transparency obligations with regard to their labelling, as mentioned above, which is in line with the fact that the AI Act, as explained in the Explanatory Memorandum accompanying the proposal for a regulation, focuses on the recipient of the content and not on the person who may be affected by it.[22]
And, even if transparency obligations do play the part promised in the AI Act,[23] they are not enough to prevent the most pressing harms of deepfake technology – when it comes to pornographic deepfakes, victims do not just want to see the content labeled as AI-generated or manipulated. The protection of women’s rights in these situations requires proactive action, since labeling the image alone will not prevent psychological and reputational damage.[24]
It is therefore worth concluding that, while the AI Act plays a key role in the fight against pornographic deepfakes, it is not, by itself, enough. On the one hand, labeling content as manipulated does not address the two key needs of victims of pornographic deepfakes: paths to criminalisation, and swift action in reporting and removing the content. On the other, new trends, like the aforementioned case of biometric data collection, put into perspective just how hard it could become for companies to actually enforce transparency obligations, with manipulated content becoming increasingly realistic and therefore virtually indistinguishable from authentic material.
Accordingly, not only is it important to encourage Member States to play their part in ensuring the full application of this Regulation, it is also worth continuing to develop new legislative instruments to help in the fight against gender-based cyberviolence. In this regard, it is worth mentioning the Directive on combating violence against women and domestic violence (2024/1385), which addresses the multiple dimensions of cyberviolence, criminalising some practices (as mentioned in its Article 5) and requiring the effective and swift removal of content by relevant service providers. Although this directive is not without its flaws, «it has the advantage of filling gaps in EU and national legislation on forms of violence that, while not exclusively affecting women, are clearly “targeted” at them.»[25]
[1] A mukbang consists of “an online audiovisual broadcast in which a host consumes various quantities of food while interacting with the audience or reviewing it”. “Mukbang”, Wikipedia: The Free Encyclopedia, accessed 20 June 2025, https://en.wikipedia.org/wiki/Mukbang.
[2] Audiovisual media that aims to evoke «a subjective experience of “low-grade euphoria” characterized by a combination of positive feelings and a distinct static-like tingling sensation on the skin». “ASMR”, Wikipedia: The Free Encyclopedia, accessed 20 June 2025, https://en.wikipedia.org/wiki/ASMR.
[3] Adam Satariano, “G.D.P.R., a new privacy law, makes Europe world’s leading tech watchdog”, New York Times, 24 May 2018, https://www.nytimes.com/2018/05/24/technology/europe-gdpr-privacy.html.
[4] Maria Walsh, “MEP calls for strict punishments for deepfake creators as women’s careers ‘on the line’ amid ‘insidious’ new AI threat”, The Irish Sun, 15 December 2024, https://www.thesun.ie/news/14362633/deepfake-creator-maria-walsh-punishment-ai-threat/.
[5] Shijing He et al., “Identity deepfake threats to biometric authentication systems: public and expert perspectives”, arXiv, 10 June 2025, arXiv:2506.06825.
[6] World, “World ID: a new identity and financial network” (Whitepaper), 2025, https://world.org/world-id.
[7] Bavarian Data Protection Authority (BayLDA), “Decision on Worldcoin’s processing of biometric data”, 19 December 2024, https://www.edpb.europa.eu/system/files/2025-02/decision1594_0.pdf.
[8] Maria Inês Costa, “Iris collection as a proof of personhood: current trends on biometric recognition,” The Official Blog of UNIO – Thinking and Debating Europe, 17 May 2024, https://officialblogofunio.com/2024/05/17/iris-collection-as-a-proof-of-personhood-current-trends-on-biometric-recognition/.
[9] Asher Flynn, Nicola Henry and Anastasia Powell, “More than revenge: addressing the harms of revenge pornography”, Report of the More than Revenge Roundtable, hosted by Monash University, La Trobe University and RMIT University, 22 February 2016, https://research.monash.edu/files/214257814/More_than_Revenge_Final_Report_Nicola_Henry.pdf.
[10] Carlotta Rigotti, Clare McGlynn, and Franziska Benning, “Image-Based Sexual Abuse and EU Law: a critical analysis”, German Law Journal 25, no. 9 (2024): 1473. https://doi.org/10.1017/glj.2024.49.
[11] An image is defined as a representation of any external form of a certain person or object, which means that the term encompasses diverse formats of sexually explicit content, such as photos, videos, etc. Farlex, The Free Dictionary, 2019, https://www.thefreedictionary.com.
[12] Carlotta Rigotti and Clare McGlynn, “Towards an EU criminal law on violence against women: the ambitions and limitations of the Commission’s proposal to criminalise image-based sexual abuse”, New Journal of European Criminal Law (2022): 1-26, https://ssrn.com/abstract=4379096.
[13] Suzie Dunn, “Legal definitions of intimate images in the age of sexual deepfakes and generative AI”, McGill Law Journal 69, no. 4 (2024), https://doi.org/10.26443/law.v69i4.1626.
[14] Jan Kietzmann, Linda W. Lee, Ian P. McCarthy, and Tim C. Kietzmann, “Deepfakes: Trick or treat?”, Business Horizons 63, no. 2 (2020), https://doi.org/10.1016/j.bushor.2019.11.006.
[15] Directive (EU) 2024/1385 of the European Parliament and of the Council of 14 May 2024 on combating violence against women and domestic violence, recitals 2 and 3.
[16] Directive (EU) 2024/1385 of the European Parliament and of the Council of 14 May 2024 on combating violence against women and domestic violence.
[17] Mateusz Labuz, “Regulating deep fakes in the Artificial Intelligence Act”, Applied Cybersecurity & Internet Governance 2, no. 1 (2023): 1-42, https://doi.org/10.60097/ACIG/162856.
[18] Similar observations are made in recital 133, which contains a list of possible technical solutions. Although this is only an illustrative list, it provides a basic idea that will be useful during the practical implementation of these measures.
[19] Mateusz Labuz, “Deepfakes and the Artificial Intelligence Act – an important signal or a missed opportunity?”, Policy & Internet 16, no. 4 (2024): 783-800, https://www.researchgate.net/publication/381855647_Deep_fakes_and_the_Artificial_Intelligence_Act-An_important_signal_or_a_missed_opportunity.
[20] Inês Neves, “The EU Directive on violence against women and domestic violence—fixing the loopholes in the Artificial Intelligence Act,” The Official Blog of UNIO – Thinking and Debating Europe, 29 March 2024, https://officialblogofunio.com/2024/03/29/the-eu-directive-on-violence-against-women-and-domestic-violence-fixing-the-loopholes-in-the-artificial-intelligence-act/#_ftn1.
[21] Mateusz Labuz, “Deepfakes and the Artificial Intelligence Act – an important signal or a missed opportunity?”.
[22] European Commission, “Explanatory Memorandum for the Artificial Intelligence Act,” 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
[23] Which they are unlikely to do, since “most deepfakes are created and disseminated with damaging or at least indifferent intent. In this respect, malicious actors will simply not adhere to the obligations”. See Martina J. Block, “A critical evaluation of deepfake regulation through the AI Act in the European Union”, Journal of European Consumer and Market Law 13, no. 4 (2024): 184-192.
[24] Centre for Digital Governance, “The false promise of transparent deepfakes: how transparency obligations in the draft AI Act fail to deal with the threat of disinformation and image-based sexual abuse,” Hertie School Centre for Digital Governance – Blog, 9 November 2022, https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/the-false-promise-of-transparent-deep-fakes-how-transparency-obligations-in-the-draft-ai-act-fail-to-deal-with-the-threat-of-disinformation-and-image-based-sexual-abuse.
[25] Inês Neves, “The EU Directive on violence against women and domestic violence – fixing the loopholes in the Artificial Intelligence Act”, The Official Blog of UNIO – Thinking and Debating Europe, 29 March 2024, https://officialblogofunio.com/2024/03/29/the-eu-directive-on-violence-against-women-and-domestic-violence-fixing-the-loopholes-in-the-artificial-intelligence-act/#_ftn1.
Picture credit: by Google DeepMind on pexels.com.