
The beginning of 2025 was marked by two important developments in the digital platform landscape. First, Meta announced that it would change the content moderation policies of Facebook and Instagram in the US, significantly reducing the amount of fact-checking. However, this decision will not come without regulatory challenges on the other side of the Atlantic; Meta is required to abide by the Digital Services Act (DSA), including the obligations the latter establishes with respect to fact-checking. Second, the US Supreme Court upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACAA), effectively allowing TikTok’s ban in the US in light of the national security concerns over its Chinese ownership. Although TikTok went dark for just over 24 hours, President Trump subsequently ordered a 75-day extension, delaying enforcement of the ban.
In addition to the questions they pose from the perspective of US law, these developments raise concerns over the effective enforcement of (recently adopted) EU law. For instance, the Financial Times reported that “Brussels” was reassessing its investigations into big tech practices under the Digital Markets Act (DMA), which could lead to a scaling back or a change in their scope. The Commission quickly dismissed these concerns, stating that it will fully enforce the EU rules governing social networks and other platforms.
This blog explores how recent developments in the US reflect fundamentally different approaches to platform governance and what they mean for EU digital regulation.
Meta’s Moderation Policy Shift: EU and US Perspectives
On 7 January 2025, Meta announced significant changes in the content moderation policies of Facebook and Instagram. Meta plans to discontinue its fact-checking program in the US and introduce a community-driven system akin to X’s Community Notes feature. According to Meta, these changes aim to uphold “a commitment to free expression” and address concerns about excessive censorship and unjust enforcement actions. The decision has sparked intense debate worldwide about the platform’s tolerance for harmful content under the guise of free expression and has raised questions about the spread of such content beyond US borders. This section examines the implications of Meta’s policy changes in the context of the DSA’s fact-checking obligations.
First things first: What is Fact-Checking?
Fact-checking verifies the accuracy of published content by reviewing it and comparing its claims to credible sources. It is categorized into two types: internal and external. Internal fact-checking is conducted in-house by the platform to prevent the release of inaccurate content, while third-party organizations carry out external fact-checking. The process can be either manual or automated:
- Manual Fact-Checking: This involves identifying potentially false information, seeking editorial approval, verifying claims using expert sources, labelling findings, and transparently publishing the results.
- Automated Fact-Checking: This relies on technology to identify relevant or “check-worthy” statements, collect evidence, validate claims using Machine Learning (ML) and Natural Language Processing (NLP), and justify the conclusions to maintain transparency.
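To make the automated approach more concrete, the Python sketch below decomposes such a pipeline into its typical stages: claim detection, evidence retrieval, verification, and a transparent verdict. It is a minimal illustration built on assumptions (a keyword heuristic instead of a trained classifier, a hard-coded knowledge base instead of credible external sources) rather than a description of any platform’s actual system.

```python
# Illustrative sketch of an automated fact-checking pipeline (assumptions only:
# the claim detector is a keyword heuristic and the "knowledge base" is a tiny
# hard-coded dictionary; a real system would rely on trained ML/NLP models and
# curated, credible sources).

from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    label: str      # e.g. "supported", "refuted", "not enough evidence"
    evidence: list  # sources consulted, kept to justify the conclusion


def is_check_worthy(sentence: str) -> bool:
    # Stand-in for an ML classifier that flags verifiable factual claims
    # (as opposed to opinions, questions or small talk).
    return any(token.isdigit() for token in sentence.split()) or " is " in sentence


def retrieve_evidence(claim: str) -> list:
    # Stand-in for evidence retrieval from trusted sources.
    knowledge_base = {"the earth is flat": ("refuted", "NASA imagery")}
    hit = knowledge_base.get(claim.lower())
    return [hit] if hit else []


def verify(claim: str) -> Verdict:
    # Validate the claim against the collected evidence and label the result.
    evidence = retrieve_evidence(claim)
    if not evidence:
        return Verdict(claim, "not enough evidence", [])
    label, source = evidence[0]
    return Verdict(claim, label, [source])


def fact_check(post: str) -> list:
    # 1) detect check-worthy claims, 2) gather evidence, 3) label the claim,
    # 4) return the reasoning so the outcome can be published transparently.
    sentences = [s.strip() for s in post.split(".") if s.strip()]
    return [verify(s) for s in sentences if is_check_worthy(s)]


if __name__ == "__main__":
    for verdict in fact_check("I love autumn. The Earth is flat."):
        print(verdict)
```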
Fact-checking plays a crucial role in combating misinformation and fostering accurate public discourse. Although it faces certain limitations, such as algorithmic bias and human subjectivity in the verification process, these are not insurmountable.
Meta’s New Approach: A Departure from Fact-Checking
One of the (non-political) reasons for Meta’s decision was a reported 10-20% error rate in content removal and excessive penalties on users. According to Meta, too much content was being fact-checked that “people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.” However, the root cause of this error rate is unclear. Users do have access to an internal complaint procedure to challenge what they consider to be erroneous fact-checking of their content. Moreover, EU users can bring the decision before a “certified out-of-court dispute settlement body” (a DSA requirement) in order to resolve the issue. However, it is not clear how successful this procedure has been or how many fact-checking errors it has resolved.
Although automated screening will still apply to serious violations like terrorism and child exploitation, Meta intends to shift to a community-driven verification model akin to X’s, which in essence means that users police themselves when it comes to fact-checking. X’s system involves user-generated correction notes that appear under flagged posts once consensus is reached. In practice, users who see a post they believe contains misleading information can add “notes” providing additional context or corrections. Other users can then rate these notes as helpful or not helpful. Once enough users have rated a note and there is sufficient consensus that it is helpful and accurate, the note becomes visible underneath the original post for all users to see (a simplified sketch of this consensus logic follows the list below). Research from Cornell indicates this method has shown promise: flagged content saw 50% fewer retweets and an 80% higher deletion rate. However, this system faces several challenges:
- Delayed responses: Corrections can take hours or days to appear.
- Limited coverage and accuracy: A 2024 analysis of US election-related posts found that only 29% of fact-checkable tweets received helpful notes, and of those, only 67% addressed verifiable claims.
- Manipulation risks: Bad-faith actors may exploit the system to influence which sources are considered credible.
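To illustrate the mechanics described above, here is a minimal, hypothetical sketch of a consensus rule for community notes: a note is displayed only once it has received a minimum number of ratings and a sufficient share of them mark it as helpful. The thresholds and the simple majority rule are assumptions made for the example; X’s actual Community Notes algorithm is considerably more elaborate and reportedly rewards agreement among users who normally disagree with one another.

```python
# Hypothetical sketch of a community-notes consensus rule (illustrative only).
# The thresholds below are invented for the example; X's actual Community Notes
# algorithm is more elaborate than a simple helpful-share vote.

from dataclasses import dataclass, field

MIN_RATINGS = 5       # assumed minimum number of ratings before any decision
HELPFUL_SHARE = 0.7   # assumed share of "helpful" ratings required for display


@dataclass
class Note:
    text: str
    ratings: list = field(default_factory=list)  # True = rated "helpful"

    def rate(self, helpful: bool) -> None:
        self.ratings.append(helpful)

    def is_visible(self) -> bool:
        # The note appears under the post only after enough users have rated it
        # and a sufficiently large majority found it helpful and accurate.
        if len(self.ratings) < MIN_RATINGS:
            return False
        return sum(self.ratings) / len(self.ratings) >= HELPFUL_SHARE


if __name__ == "__main__":
    note = Note("The quoted statistic refers to 2019, not 2024.")
    for vote in (True, True, True, False, True, True):
        note.rate(vote)
    print(note.is_visible())  # True: 6 ratings, 5/6 ≈ 0.83 >= 0.7
```

Even this toy version makes the challenges listed above visible: nothing is displayed until enough ratings accumulate (delay), posts that attract no notes are never corrected (coverage), and coordinated rating can push a note above or below the threshold (manipulation).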
The French government has already expressed its concerns about Meta’s decision to change its fact-checking policies, arguing that community-based moderation prioritizes viral reach over accurate content verification. This underscores ongoing doubts about the model’s overall effectiveness.
The EU’s approach to content moderation
Arguably the first piece of EU platform regulation is the e-Commerce Directive (adopted in 2000), which made platforms liable for illegal content only when they had “actual knowledge” of it and failed to act (Article 14), but did not require proactive monitoring (Article 15). The scope of the Audiovisual Media Services Directive was later broadened to protect users from harmful content distributed through video-sharing platforms like YouTube. However, the most comprehensive piece of legislation on content moderation to date is the DSA, which modernized the rules by introducing wide-ranging obligations for tackling illegal content, hate speech, and misinformation. Very Large Online Search Engines (VLOSEs) and Very Large Online Platforms (VLOPs), including Meta’s platforms, face the strictest requirements. They must inter alia:
- Implement transparent content moderation policies.
- Publish annual moderation reports.
- Provide user-friendly reporting mechanisms.
- Maintain due diligence in the design and operation of their services.
Despite Meta’s criticism of the EU’s “institutionalized censorship,” the DSA not only explicitly protects fundamental freedoms, including freedom of expression, but aims to strengthen them (see Recital 47). At the same time, it is important to note that freedom of expression is not an absolute right; rather, it is a right on which restrictions can be imposed to safeguard other fundamental rights and values. Ultimately, the DSA requirements seek to set a new standard for online platforms’ accountability in addressing disinformation, illegal content, and other risks to society (see Recitals 40 and 41).
A closer look into the relationship between fact-checking and the DSA
As a VLOP, Meta is required to comply with several obligations under the DSA: Article 33(1) explicitly requires VLOPs to take active measures against disinformation. Additionally, the DSA requires VLOPs to address systemic risks to public discourse (Articles 34(1), second subparagraph, and 35(1)), with Recital 104 explicitly identifying disinformation as a systemic threat to democracy. Moreover, VLOPs have a duty to mitigate specific systemic risks by collaborating with “trusted flaggers”, that is, independent experts tasked with identifying and reporting illegal content. To mitigate these risks further, the DSA adopts a co-regulatory approach supported by additional measures. The framework combines mandatory requirements with voluntary measures and includes:
- The European Digital Media Observatory coordinating fact-checkers and experts.
- Voluntary Codes of Conduct that support the DSA’s application. Although failure to comply with these codes does not in itself constitute an infringement of the DSA, signatories undertake to uphold the commitments under the codes they join and are subject to an annual audit under the DSA to verify compliance.
For entities that have been designated as VLOPs and VLOSEs under the DSA, adherence to these codes can support risk mitigation efforts. This is because:
- The Commission considers voluntary commitments when drafting action plans per Article 75(2): VLOPs and VLOSEs are obliged to draw up and communicate an action plan setting out the necessary measures that are sufficient to terminate or remedy an infringement.
- Recital 104 notes that refusal to engage with the codes of conduct without justification may be factored into determining potential DSA violations.
- Article 35(1)(h) of the DSA provides that compliance with relevant Article 45 codes is among the suggested ways for VLOPs and VLOSEs to discharge their risk mitigation obligations.
Thus, this framework incentivizes service providers like Meta to adopt voluntary codes, enhancing accountability and alignment with the DSA’s objectives. Notably, Meta is a signatory to the 2022 Code of Practice on Disinformation, which, among other things, extends fact-checking coverage across all EU Member States and languages and ensures that platforms will make more consistent use of fact-checking in their services. Therefore, Meta has committed to using fact-checking systems on its EU platforms. Of course, this does not mean that Meta will live up to its promise. We all remember what happened in the aftermath of the Commission’s approval of the Facebook/WhatsApp merger. In that case, though the Commission conceded that the concentration of (personal) data in the hands of Facebook could raise foreclosure concerns, it concluded that this was not a plausible theory of harm because data combination would require a change in WhatsApp’s privacy policy, which the merged entity did not contemplate making: Facebook explicitly stated that WhatsApp would continue to offer its services in a manner consistent with the promises it had made to its users (see paragraphs 182 and 185). Less than two years after the acquisition was cleared, WhatsApp announced the possibility of linking WhatsApp user phone numbers with Facebook user identities.
Comparison to the US
In the United States, online platforms are primarily governed by three key legal instruments: (i) the First Amendment to the US Constitution, (ii) Section 230 of the Communications Decency Act (Section 230), and (iii) the Digital Millennium Copyright Act (DMCA). Of these, Section 230 plays the most significant role in shaping the way platforms operate, particularly concerning user-generated content.
Section 230 is often referred to as the “safe harbour” law, providing legal immunity to platforms for content posted by their users. Essentially, it means that if a user shares something harmful or illegal, such as defamatory statements or obscene content, the platform is typically not held liable for that content. This immunity exists because Section 230 does not treat the platform as the “publisher” of such material (47 U.S.C. § 230(c)(1)). It is important to note that Section 230 does not impose any direct content moderation requirements on platforms. Instead, platforms are only held accountable if they act in “bad faith” (47 U.S.C. § 230(c)(2)(A)), which could include discriminatory practices, such as suppressing speech for biased reasons.
When we look at the DSA, we see a stark contrast in approach. The DSA holds companies to a higher standard of accountability, introducing steep fines (which may amount to 6% of a company’s global annual turnover) for failure to comply. It also mandates greater transparency in how platforms moderate content, particularly with respect to the algorithms used and the decision-making processes behind them. Unlike Section 230, which grants platforms “safe harbour” protections from liability as long as they act in “good faith” or as “good Samaritans”, the DSA focuses on holding platforms accountable for the content shared on their sites. It establishes a more structured enforcement framework, which includes oversight by national authorities and the European Board for Digital Services. Section 230, by contrast, lacks a comparable enforcement mechanism. Additionally, US law does not require platforms to collaborate with fact-checking organizations to combat issues like disinformation and harmful content online.
The US’s more lenient approach to platform liability largely explains Meta’s choice to eliminate fact-checking on Facebook and Instagram in the US, where the company appears unlikely to face any regulatory consequences as a result. However, this shift prompts an important question: how will Meta navigate cross-border content flows in practice, given the varying regulatory standards? In particular, when a European user views content posted by a US user, will that content be fact-checked? Clearly, the internet’s inherently borderless environment complicates the enforcement of region-specific content moderation policies in practice.
Another question that arises concerns Meta’s liability if it were to implement a similar approach to fact-checking in the EU. Aside from the obvious, that is, a failure to comply with the DSA obligations established above, Meta could be found to be infringing the EU competition rules. In the recent Meta Platforms judgment, the Court of Justice of the EU (CJEU) ruled that, in the context of examining whether an undertaking has abused its dominant position, competition authorities may need to examine whether that undertaking’s conduct complies with rules other than those relating to competition law, “such as the rules on the protection of personal data laid down by the GDPR” (see paragraph 48). The wording of the ruling suggests that competition authorities may consider the breach of rules other than data protection regulation, including the DSA, in order to assess infringements of competition rules.
The TikTok ban
In a recent judgement, the Supreme Court of the United States (Supreme Court) upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACAA), effectively allowing the potential ban of TikTok in the US. While many initially framed this as a First Amendment battle (i.e., a freedom of speech issue), the Supreme Court’s judgement reveals a more complex narrative centred on national security concerns and escalating US-China tensions.
Despite TikTok’s role as a major platform for communication and content creation, the Supreme Court’s analysis did not primarily focus on free speech rights. Instead, the Supreme Court positioned the case within a broader framework of national security concerns. While acknowledging that the PAFACAA burdens the right to free speech, the Supreme Court emphasized that the regulation was not about content control (i.e., content-based regulation) but rather about who controls the platform and the associated risks (i.e., it is content-neutral). The Supreme Court still had to consider whether the PAFACAA violated the First Amendment. However, instead of definitively ruling on whether strict First Amendment scrutiny applied, it “assumed without deciding that the challenged provisions fall within this category and are subject to First Amendment scrutiny”. In doing so, the Supreme Court observed that the PAFACAA was content-neutral and that such content-neutral laws are subject to an “intermediate level of scrutiny”. A law of this kind is deemed constitutional if it “advances important governmental interests unrelated to the suppression of free speech”. Because the PAFACAA was content-neutral and focused on the concern that China could weaponize data collected via TikTok, this lower level of scrutiny applied, and the Act was found not to violate the First Amendment. This strategic move highlighted that free speech concerns, while present, were not the decisive factor in the Supreme Court’s decision to uphold the law.
Therefore, at its core, the Supreme Court’s judgement centred on the potential threats posed by TikTok’s Chinese ownership. The platform’s ability to access vast amounts of personal data from its 170 million US users raised significant security concerns. The Supreme Court noted (see pages 13 and 14) that this data collection could enable: (i) tracking of federal employees and contractors, (ii) compilation of blackmail dossiers, (iii) corporate espionage operations, and (iv) intelligence and counterintelligence operations. The Supreme Court found these concerns particularly compelling given China’s documented history of collecting data on US citizens for intelligence purposes. Therefore, the Court upheld the PAFACAA on the basis that it was sufficiently tailored to the US Government’s interest in preventing a foreign adversary from obtaining sensitive data from American users.
Following the Supreme Court’s judgement, President Trump signed an executive order granting TikTok a 75-day extension to comply with the ban unless the platform is sold. While this order delays enforcement, it does not overturn the law itself. The situation remains fluid, with several possible outcomes, including: (i) TikTok could be sold to a US company, (ii) the Department of Justice (DoJ) might be instructed not to enforce the ban, and (iii) app stores will have to decide whether to continue distributing the app during this transitional period.
The contrast between the US and EU approaches to digital platform regulation is striking. While the US has opted for a potential outright ban based on national security concerns, the EU’s framework is more rights-based, focusing on systematic regulation and user protection. The EU’s approach to content regulation, anchored in, among others, the GDPR, the DSA, the DMA, and cybersecurity rules, emphasizes strong legal protections for social media users, data protection as a fundamental right, regulation rather than prohibition, and temporary suspensions as a means of last resort.
This difference was illustrated by the EU’s handling of the ChatGPT case in Italy in March 2023, where the regulator imposed a temporary ban based on specific GDPR violations. In particular, the Italian DPA criticized OpenAI for not having proper age verification mechanisms in place, which meant that minors could have been exposed to inappropriate content. The ban was lifted once OpenAI implemented the required age verification mechanism.
Conclusions
The contrasting regulatory approaches between the EU and US signal a fragmented digital marketplace. While the EU maintains a rights-based framework prioritizing user protection and systematic content moderation through the DSA, the US oscillates between minimal intervention (Meta’s case) and outright prohibition (TikTok’s case), albeit on different grounds. This divergence raises crucial questions about the future of global platform governance and the feasibility of region-specific content moderation in an interconnected digital ecosystem.
Konstantina Bania, Katerina Dres, and Philine Wassenaar.