Taking stock of the phenomenal rise of generative Artificial Intelligence (“AI”), competition enforcement authorities throughout the world are actively engaging in research and consultations to identify and prevent potential violations of competition laws in this sector.
It is in this context that the French Competition Authority (the “FCA”) issued its first opinion on the competitive functioning of the AI sector (the “Opinion”) on 28 June 2024.[1]
Earlier this month, the FCA’s President, Benoît Coeuré, also provided some useful insights on the Opinion and on the FCA’s understanding of the sector during a conference held in Paris.[2]
According to the FCA, the main purpose of the Opinion is to offer a first competition analysis of the AI sector.[3] It identifies the main competition risks associated with AI and potential anticompetitive practices that could be subject to investigations and fines, and makes a number of recommendations in this regard.
The Opinion should therefore not be seen as a mere preliminary analysis of the sector, but rather as setting out the FCA’s priorities and providing market players with a useful analytical grid to assess their own practices pending future decisions.
In this blog, we seek to anticipate potential upcoming FCA investigations by reviewing what we consider the main takeaways of the Opinion, in particular regarding (i) the key markets and players in the sector, (ii) the main characteristics of the relevant markets, (iii) the main competition risks related to essential inputs, (iv) the competition risks related to minority shareholdings and (v) the main recommendations of the FCA.
Key markets and players in the AI sector
The Opinion provides a first opportunity for the FCA to share its preliminary analysis of the key markets and players in the AI sector.
While the FCA considers that the AI sector is still evolving and its markets cannot yet be precisely defined,[4] it nonetheless sets out its preliminary view on how the emerging relevant markets and main key players could be defined.
The FCA identifies three levels in the AI value chain: first, the upstream value chain, i.e. the infrastructure stage for developing AI models; second, the midstream value chain, i.e. the stage of training and specialization of artificial intelligence models; third, the downstream value chain, i.e. the AI systems deployment stage, including the commercialization of new generative AI-based services to the general public, such as ChatGPT.[5]
In this regard, the FCA also notes that some studies identified two main relevant markets: the primary market for the design and pre-training of foundational models and the secondary market for the development of models subject to specialization or fine-tuning to meet predetermined objectives.[6]
In the Opinion, the FCA chooses to focus on the upstream value chain, in which two main categories of key players are active at the infrastructure stage of AI models design: firstly, the suppliers of computer components, particularly developers of graphics processors and AI accelerators, all of which are essential for training AI models;[7] secondly, cloud computing providers,[8] which are necessary for computing power and data access.[9]
Specific characteristics of the AI market
In the Opinion, the FCA finds that certain market conditions in the AI sector could favour some anticompetitive practices.[10]
These market conditions are that:
- there are various barriers to entry on the upstream market, such as the need to use expensive and hard-to-find inputs and very rare and sought-after technical skills;[11]
- some companies already active on the market benefit from various competitive advantages, which help them gain market share, such as privileged access to cloud services and data.[12]
Examples provided by the FCA include Nvidia for computing power, as well as Alphabet, Meta and Microsoft for access to data.
Competition risks related to essential inputs
The FCA expresses specific concerns regarding potential competition violations in relation to access to essential inputs for developing AI models.
Three categories of inputs, namely computer components, cloud services and data, are identified as raising competition concerns, in addition to access to human resources for AI development.
Firstly, the FCA addresses the risks of abuses related to the computer components sector, such as graphics processors or AI accelerators.
The FCA finds that Nvidia is the dominant player in this market.[13] According to the FCA, Nvidia has been developing a range of graphics processors specialized for data centre computing since 2018 and has seen significant growth with the rise of AI.[14]
In this regard, the FCA identifies several potentially harmful anticompetitive behaviours, such as price-fixing, supply restrictions, unfair contractual terms, or discriminatory practices,[15] which would be prohibited under Article 102 TFEU.
In light of these considerations, it came as no surprise that, a few days after the Opinion was published, the press reported that the FCA had launched an investigation into Nvidia.[16] Nvidia is also under investigation by the U.S. Department of Justice.[17]
Secondly, the FCA examines the risks related to cloud services, which provide modelers with access to necessary data and computing power to train AI models.
In this regard, the FCA considers that risks of market lock-out in the sector are intensifying.
As the FCA had already noted in its opinion no. 23-A-08,[18] large companies in the sector continue to offer cloud credits[19] to innovative start-ups to attract the best players in the sector. This practice could contribute to further foreclosure of the relevant market.
It is likely that the FCA will make it one of its priorities to investigate potentially anticompetitive conduct by players holding a dominant position in the supply of cloud services, in order to prevent market foreclosure and excessive concentration.
Microsoft Azure, AWS and GCS, as well as other companies such as 3DS Outscale, Alibaba Cloud, IBM, Oracle Cloud, OVHcloud and Scaleway, may be targeted in this context.[20]
Thirdly, the FCA finds that there are risks regarding access to the data necessary to train AI models.
These risks include potentially discriminatory access to, or denial of access to, data by companies with significant control over it, which could lead to market-foreclosure effects. Additionally, the FCA highlights concerns about data access in relation to users’ rights.
According to the FCA, several players report that large companies in the sector continue to use strategies to limit third parties’ access to their users’ data, abusing the rules of the law on personal data protection.[21] As the FCA notes, such practices could be dealt with under Article 102 TFEU, as the Court of Justice of the EU (“the CJEU”) decided in its Meta judgment.[22]
Moreover, the FCA is concerned about the use of publisher content by foundation model providers without the rights holders’ authorization.[23]
In its “droits voisins” (neighbouring rights) decision,[24] the FCA had already addressed this issue. It fined Google for using publisher and news agency content to train its foundation model for its AI service without notifying the publishers or the FCA.[25]
Fourthly, the FCA finds that human resources for the development of AI are a rare asset for companies, because of the very few talented professionals in the sector.[26]
Certain practices, such as non-compete clauses, non-solicitation clauses and mass recruitment, may be considered abusive when implemented by dominant players.
Competition risks related to minority shareholdings
In addition to the risks related to accessing the necessary inputs, the FCA considers that competition risks also arise from major digital companies acquiring minority stakes in leading startups in the sector and a lack of transparency regarding these investments.[27]
In the Opinion, the FCA finds that these practices could lead to a weakening of competitive intensity, vertical integration effects and an increase in transparency on the market, which can facilitate coordination between competitors.[28]
It is interesting to note that, in the Opinion, the FCA endorses the European Commission’s practice of accepting referrals by national authorities, under Article 22 of Regulation 139/2004, of mergers that have no European dimension and escape the control of national competition authorities.[29]
However, in the recent Illumina ruling,[30] the CJEU ruled that the European Commission was not authorised to encourage or accept referrals of proposed concentrations without a European dimension from national competition authorities where those authorities were not competent to examine those concentrations under their own national laws.
This ruling limits the FCA’s ability to have acquisitions of minority shareholdings in the AI sector reviewed under Regulation 139/2004.
In this regard, the FCA has recently declared that it intends to leverage the full potential of antitrust laws to address these practices.[31] In addition, it is reflecting on the possibility of strengthening its concentration control tools to address potentially problematic concentration transactions that do not meet the notification thresholds in France.[32]
The FCA’s recommendations to address competition risks
After setting out its main concerns, the FCA proposes in the Opinion some recommendations to prevent distortions of competition in the sector.
The FCA’s recommendations set out how it intends to monitor dominant operators. They are five-fold:
Firstly, the FCA considers that it is necessary to make the regulatory framework more effective.
In this regard, the FCA urges the European Commission to fully apply the Digital Markets Act (the “DMA”) provisions to control ex ante economic operators. It specifically proposes to designate cloud service providers as “gatekeepers”, thereby subjecting them to the obligations arising from the DMA.
Moreover, the FCA encourages the DGCCRF (Directorate General for Competition Policy, Consumer Affairs and Fraud Control) to pay special attention to the use of cloud services and to penalize any cloud credit practices which breach French law.[33]
Secondly, the FCA calls on all competent national authorities to ensure compliance with competition law in the field of generative artificial intelligence and to use all necessary tools to this end.[34]
Thirdly, the FCA supports the development of public supercomputers, which serve as an alternative to cloud providers and provide access to computing power, benefiting innovation. It therefore proposes to continue investing in the development of supercomputers at the European level and to initiate a governmental discussion on an open and non-discriminatory framework allowing private actors to use the resources of public supercomputers for a fee.[35]
Fourthly, the FCA calls for ensuring a balance between fair compensation for rights holders and access to the necessary data for model developers to innovate. Additionally, the FCA encourages fostering the availability of public and private sector data for the training or fine-tuning of AI models.[36]
Fifthly, the FCA suggests that the European Commission apply Article 14 DMA to concentrations of companies operating in the AI sector.
Under Article 14 DMA, “a gatekeeper shall inform the Commission of any intended concentration within the meaning of Article 3 of Regulation (EC) No 139/2004, where the merging entities or the target of concentration provide core platform services or any other services in the digital sector”.
According to the FCA, the scope of Article 14 DMA could include AI gatekeepers. This would force gatekeepers to report any minority stake acquired in the AI sector.[37]
Conclusion
By way of conclusion, even if it is still too early to know all the concrete steps and measures that the FCA will take in the AI sector, the Opinion is very useful for anticipating the competition risks it will seek to address as a priority.
In this regard, it is worth noting that the FCA has not expressed concerns about the risk of collusion between AI market operators in the upstream value chain, since almost none of the stakeholders interviewed during the FCA’s public consultation raised this issue.[38]
In the Opinion’s conclusion, the FCA declares that it will attentively monitor practices that unduly limit access to essential inputs, partnerships entered into by already dominant digital companies and bundling practices likely to further consolidate the AI sector around these companies.[39]
As the FCA has already launched an investigation against Nvidia, it will not come as a surprise if similar investigations are also carried out against other players in a dominant position in the upstream AI market.
[1] See Opinion 24-A-05 on the competitive functioning of the generative artificial intelligence sector.
[2] “Concurrence et intelligence artificielle”, conference held by Gide Loyrette Nouel, 5 September 2024.
[3] See the Opinion 24-A-05, para. 4.
[4] Ibid., para. 231.
[5] Ibid., para. 88 and following.
[6] Ibid., para. 231.
[7] Ibid., para. 42, 76.
[8] Ibid., para. 42, 81.
[9] As noted by the FCA, major digital players such as Alphabet and Microsoft, along with model developers, operate across the entire value chain. These model developers may range from specialized artificial intelligence startups to research labs responsible for training and possibly specializing in artificial intelligence models. See the Opinion 24-A-05, para. 43, 71.
[10] Ibid., para. 121 and following.
[11] Ibid., para. 122 and following.
[12] Ibid., para. 188 and following.
[13] Ibid., para. 241.
[14] Ibid., para. 65.
[15] Ibid., para. 242.
[16] Le Figaro, 4 September 2024, “Nvidia se défend d’entorses à la concurrence après des informations sur une enquête à ce sujet” (link).
[17] Le Figaro, 4 September 2024, “Nvidia fait l’objet d’une assignation à comparaître auprès du Ministère de la Justice” (link).
[18] Opinion 23-A-08 on the competitive operation of cloud computing.
[19] See the Opinion 24-A-05, para. 247.
[20] Ibid., para. 81.
[21] Ibid., para. 261.
[22] CJEU, 4 July 2023, Meta Platforms e.a. (Conditions générales d’utilisation d’un réseau social), C-252/21, ECLI:EU:C:2023:537 (link).
[23] See the Opinion 24-A-05, para. 264.
[24] Decision 24-D-03 of 15 March 2024 concerning the compliance with the commitments of Decision 22-D-13 of 21 June 2022 related to practices implemented by Google in the press sector.
[25] Google linked the use of publisher content to the display of protected content without offering a technical solution for publishers to opt out of Bard’s use of their content. The FCA considered this practice as an abuse of a dominant position and imposed a fine of €250 million on Google for failing to comply with its previous commitments.
[26] See the Opinion 24-A-05, para. 265 and following.
[27] Ibid., para. 290 and following.
[28] Ibid., para. 293.
[29] Ibid., para. 300.
[30] CJEU, 3 September 2024, Illumina / Commission, C-611/22 P, ECLI:EU:C:2024:677 (link).
[31] FCA’s press release, “L’Autorité de la concurrence prend note de l’arrêt Illumina/Grail de la Cour de justice de l’Union européenne”, 3 September 2024 (link).
[32] Ibid.
[33] See the Opinion 24-A-05, para. 330 and following.
[34] Ibid., para. 339 and following.
[35] Ibid., para. 349 and following.
[36] Ibid., para. 355 and following.
[37] Ibid., para. 362 and following.
[38] Ibid., para. 316.
[39] Ibid., para. 369.