December 30, 2024
Can Congress legislate a solution?

Existing U.S. laws don’t prevent misinformation on the Internet, as we have seen, nor do social media companies keep it off their services on their own. The question naturally arises whether new Internet laws could change things. The trick would be to reconcile Internet technology (which allows anyone to publish anything worldwide), gatekeeping and editing on the basis of credibility and reliability, and First Amendment freedom. But no such perfect formula is currently before Congress.

Several bills to change Internet laws have been introduced in Congress, but none directly addressing misinformation is likely to pass soon, and there is no consensus on amending the basic Internet laws.

To begin with, it would take drastic legislative changes to significantly affect the Internet, because under current circumstances, the Internet “has largely operated under a system of self-governance,” as a Congressional Research Service report notes. After Section 230 (which essentially creates that self-governance system), the next most important Internet laws are several special provisions of the Copyright Act and the Federal Trade Commission Act’s general rule barring “unfair or deceptive acts or practices.” None of these laws has proven an effective barrier to Internet misinformation.

Nor would net neutrality, the next most frequently proposed Internet regulation, significantly control misinformation. Net neutrality rules would bar broadband Internet access services from blocking or throttling lawful Internet traffic, or from engaging in paid prioritization of Internet traffic. None of these provisions would control content.

Against this background, several bills have been introduced to further regulate the Internet, and proposals have been made for applying existing laws against Internet giants.

Cutbacks of Section 230

Section 230 has already been amended once, to prevent it from protecting intermediaries from certain sex-trafficking crimes and claims. And President Trump issued an executive order seeking to cut back on Section 230’s protection. But the order attempts to weaken Section 230 through an interpretation contrary to well-settled case law, and through the intervention of an agency that may have no authority to act. It seems unlikely to succeed, though it may have accelerated the debate over Section 230’s future.

As to new legislation, it would be a challenge to amend Section 230 in a way that covered misinformation without crippling Internet discourse generally. Any law that grants or withholds immunity based on the speaker or subjects discussed, particularly in the political arena, would raise serious First Amendment concerns.

One proposed bill would remove the immunity if intermediaries displayed user-generated content “in an order other than chronological order.” But that approach, an unusual government-imposed content control, would not prevent misinformation, and might even give an advantage to sophisticated misinformation actors.

Senator Josh Hawley’s proposed “Limiting Section 230 Immunity to Good Samaritans Act” would allow private litigation challenging content moderation decisions made by large online companies, with the issue in each case being whether the decision was made in “good faith” or “without an honest belief or purpose.” Reminiscent of the pre-Section 230 “heckler’s veto,” by which a mere complaint could force service providers to take down content, this bill would arm complaining parties with the threat of a $5,000-per-item civil penalty. As a practical matter, it would give those with litigation resources a powerful tool to chill all Internet content about themselves on major services. It would also open up litigation over any provider’s compliance with its terms of service.

A Department of Justice report issued in June 2020 suggests similar changes to Section 230, including narrowing its coverage and opening up content moderation decisions to litigation, but it does not confront the reality that these changes would permit the return of the very heckler’s veto and self-censorship problems that Section 230 was designed to prevent.

Similarly, replacing Section 230 with a notice-and-takedown procedure, like that of Section 512 of the Copyright Act, would permit an effective heckler’s veto by those wealthy and aggressive enough to regularly make takedown demands.

Another reformer has suggested that the protections of Section 230 be withheld from platforms of a certain minimum size “that use algorithmic amplification for attention.” The idea here is that bots are the biggest problem, and that social media services that employ them, and reap the revenue they bring in, should be responsible for vetting the content the bots bring in.

A “Fairness Doctrine”-style change, mandating equal treatment of different political positions, seems unlikely for multiple reasons. The original Fairness Doctrine, which applied in fairly narrow circumstances, grew out of the perceived scarcity of broadcasting stations, and was upheld by the Supreme Court on that basis. It has been off the books for several decades, and there is a broad consensus that it would be unconstitutional if reinstated, even in the limited broadcasting arena.

Accordingly, while repeal or amendment of Section 230 is likely to be on the legislative agenda in the near future, it is premature to predict whether or how the statute will be restricted. Among other things, it is likely to be vigorously defended by the powerful tech industry lobby.

Antitrust law changes or prosecutions

Some experts, notably including Columbia University law professor Tim Wu, have suggested using antitrust enforcement to split up or more forcefully regulate giant tech companies, thereby mitigating their power and the power of content posted on their forums.

The “network effect” attendant to leading social media forums clearly exacerbates the impact of misinformation posted there. Put simply, once a particular social media company becomes popular, new members gravitate to it in order to connect with the people they know or want to reach. Soon that company becomes dominant, meaning that posts on its service — including misinformation — circulate broadly. If a more diverse set of differently managed social media services existed, misinformation providers would likely find it more difficult to distribute their messages widely. But government acceptance of many past social media mergers has only helped these network effects develop.

Any antitrust actions against tech or social media companies would have to be based on anti-competitive conduct, would raise many legal and factual issues, and would likely take years to litigate. Changes to antitrust law to adapt it to Internet business structures would be highly controversial and divisive. Neither route is likely to solve the misinformation problem.

Data privacy protection

Federal data privacy legislation, if enacted, could govern and restrict how social media companies and other Internet intermediaries collect, use, and distribute user data. Since user data, including Internet searches, purchases, and associations, can reveal a person’s political and economic preferences and enable psychological profiling (as Cambridge Analytica demonstrated in 2016), personal data collection has significant ties to political misinformation. But federal data privacy bills have been proposed in Congress since 2010, and while interest in them has picked up significantly in recent years, passage of such legislation still seems a long way off.

In short, no legislation currently proposed or likely to pass in the near future appears to offer any reasonable legislative solution to political misinformation. And legislation alone can probably never effectively address it. As one report notes, it would take at least “reshaping of platforms and policies, laws and infrastructures, technologies and standards,” since all those things are implicated in the circulation of misinformation.

So with no current or likely legal or business solutions, the onus of finding and dealing with Internet misinformation falls on . . . you, the user. We’ll discuss what you can and should do in our next post.

Mark Sableman is a member of Thompson Coburn’s Media and Internet industry group, and a member of the Firm’s Intellectual Property practice group.
