January 14, 2025
New Era and New Risks: Meta’s Content Moderation Reforms and Freedom of Expression Online

Meta CEO Mark Zuckerberg yesterday announced significant changes to the company's content moderation policies. The five-minute video is worth watching in its entirety, as it demonstrates the shifting political sands that seemingly pressured even the world's largest social media company to pay heed. Zuckerberg said the company's reliance on third-party fact checkers had resulted in too much censorship and vowed to return to an emphasis on freedom of expression. That means the fact checkers are gone, replaced by the Twitter (X) model of community notes. Moreover, the company is moving its content moderation team from California to Texas (a nod to claims that the California-based teams were biased), increasing the amount of political content in user feeds, and pledging to work with the Trump administration to combat content regulation elsewhere, including in Europe and South America.

With Meta's platforms reaching more than three billion users, the implications of the decision are enormous, assuming the same approach is eventually taken in all markets (the company is starting in the U.S.). But beyond what it means for Facebook and Instagram users, the change is likely part of a broader shift in Internet regulation, with the pendulum swinging back toward lighter-touch rules coming out of the United States. In other words, the recent experience on Twitter that has left many uncomfortable may become the norm, not the outlier.

In thinking about the decision, I reached back to my appearance six weeks ago before the Standing Committee on Canadian Heritage as part of its study on protecting freedom of expression. My opening statement included the following:

I think Bill C-11 and Bill C-18 both have indirect effects on expression. In the case of Bill C-11, supporters were far too dismissive of the implications of regulating user content, with some going so far as to deny it was in the bill, only to later issue a policy direction that confirmed its presence.

Bill C-18 not only led to the blocking of news links but also failed to recognize that linking to content is itself expression. The net effect has been to cause harm to news-related expression in Canada. We need to do better when it comes to digital policy, as we haven’t always taken the protection of expression sufficiently seriously in the digital policy debate.

Second, there is expression that chills other expression. This can occur when expression includes harassment or strikes fear in some communities, invariably leading to a chill in their ability to express themselves. My own community, the Jewish community, is a case in point. The rise in anti-Semitism, in a manner not seen in Canada in generations, has sparked safety fears and chilled expression.

The committee was prescient in addressing freedom of expression issues, though its study will never see the light of day given the government's decision to prorogue Parliament. While Meta's changes seem largely driven by political considerations, some of the concerns Zuckerberg identifies are real. In the Canadian context, the government was too dismissive of the speech implications of the online streaming and online news bills. As a result, Facebook has blocked news links in Canada for more than 18 months, a policy that is a direct result of government legislation and that harms freedom of expression. In fact, the government doubled down on the policy by urging the CRTC to review user screenshots of news, thereby also targeting the speech of millions of Canadians. Those bills – when combined with the now-dead Bill C-63 on online harms – raise genuine concerns and require a rethink that better centres freedom of expression.

However, my comments also sought to emphasize that fixing bad legislation is not the same as rejecting all legislation or all efforts to address speech that can cause harm. Community notes has been a valuable innovation, but it does not replace actual efforts to identify illegal or harmful content. There is a need for platforms to act responsibly (to borrow the language of Bill C-63), and that includes mitigating real harms with appropriate policies, transparency, and a consistent application of their own rules. If they are unwilling to do so, legislation and potential liability are needed.

The experience of the Jewish community is a case in point. While some content rises to the level of illegality, much of the barrage of antisemitic content online falls within the awful-but-lawful category. This is legal content that nonetheless causes serious harms, as the numerous antisemitic attacks in Canada amply demonstrate. It is important to emphasize that community moderation on sites like Twitter or Wikipedia does not solve these issues and in some instances may make matters worse.

This week feels like the start of a new era in Internet policy. In the U.S., efforts to curry favour with the Trump Administration are unlikely to end with Meta, as many other companies will probably follow a similar approach. In Canada, the online harms bill is dead and changes may be coming for the digital policies that are now law. In fact, U.S. pressure to change those laws may be on the agenda. There is a need for a policy correction, but this new era also brings significant risks. Many of the policies were born out of legitimate concerns about the consequences of harmful disinformation. Community notes alone won't solve the issue, and left unchecked, the results may chill the very speech the companies profess to support. The last few years have been marked by digital policies that were too dismissive of expression risks and too quick to paint critics as anti-regulation when many were simply urging smarter regulation that struck a better balance between competing objectives. In the emerging new era of Internet regulation, that should remain our goal.
