December 27, 2024
How well do you know the Online Safety Bill?

With the Online Safety Bill returning to the Commons next month, this is an opportune moment to refresh our knowledge of the Bill.  The labels
on the tin hardly require repeating: children, harm, tech giants, algorithms, trolls, abuse and the rest. But, to beat a well-worn drum, what really matters is what is inside the tin. 

Below is a miscellany of statements about the Bill: familiar slogans and narratives, a few random assertions, and some that I have dreamed up to tease out lesser-known features. True, false, half true, indeterminate? Read on to find out.
 

The Bill makes illegal online what is illegal offline.

No. We have to go a long way to find a criminal offence that does not already apply online as well as offline (other than those such as driving a car without a licence, which by their nature can apply only to the physical world). One of the few remaining anomalies is the paper-only requirement for imprints on election literature – a gap that will be plugged when the relevant provisions of the Elections Act 2022 come into force.


Moreover, in its fundamentals the Bill departs from the principle of online-offline equivalence. Its duties of care are extended in ways that have no offline comparable. It creates a broadcast-style Ofcom regulatory regime that has no counterpart for individual speech offline: regulation by discretionary regulator rather than by clear, certain, general laws.


The real theme underlying the Bill is far removed from offline-online equivalence. It is that online speech is different from offline: more reach, more persistent, more dangerous and more in need of a regulator’s controlling hand.

Under the Bill’s safety duty, before removing a user’s post a platform will have to be
satisfied to the criminal standard that it is illegal.

No. The current version of the Bill sets ‘reasonable grounds to infer’ as the platform’s threshold for adjudging illegality.


Moreover, unlike a court that comes to a decision after due consideration of all the available evidence on both sides, a platform will be required to make up its (or its algorithms’) mind about illegality on the basis of whatever information is available to it, however incomplete that may be. For proactive monitoring of ‘priority offences’, that would be the user content processed by the platform’s automated filtering systems. The platform would also have to ignore the possibility of a defence unless it has reasonable grounds to infer that one may be successfully relied upon.


The mischief of a low threshold is that legitimate speech will inevitably be suppressed at scale under the banner of stamping out illegality. In a recent House of Lords debate Lord Gilbert, who chaired the Lords Committee that produced a Report on Freedom of Expression in the Digital Age, asked whether the government had considered a change in the standard from “reasonable grounds to believe” to “manifestly illegal”.  The government minister replied by referring to the “reasonable grounds to infer” amendment, which he said would protect against both under-removal and over-removal of content.

The Bill will repeal the S.127 Communications Act 2003 offences.


Half true. Following a recommendation by the Law Commission of England and Wales, the Bill will replace both S.127 (of Twitter Joke Trial notoriety) and the Malicious Communications Act 1988 with new offences, notably sending a harmful communication.

However, the repeal of S.127 is only for England and Wales. S.127 will continue in force in Scotland. As a result, for the purposes of a platform’s illegality safety duty the Bill will deem the remaining Scottish S.127 offence to apply throughout the UK. So in deciding whether it has reasonable grounds to infer illegality a platform would have to apply both the existing S.127 and its replacement. [Update: the government announced on 28 November 2022 that the ‘grossly offensive’ offences under S.127(1) and the MCA 1988 will no longer be repealed, following its decision to drop the new harmful communications offence.] 

A platform may be required to adjudge whether a post causes spiritual injury.


True. The National Security Bill will create a new offence of foreign interference. One route to committing the offence requires establishing that the conduct involves coercion. An example of coercion is given as “causing spiritual injury to, or placing undue spiritual pressure on, a person”.


The new offence would be designated as a priority offence under the Online Safety Bill, meaning that platforms would have to take proactive steps to prevent users encountering such content.

A platform may be required to adjudge whether a post represents a contribution to a matter of public interest.


True. The new harmful communications offence (originating from a recommendation by the Law Commission) provides that the prosecution must prove, among other things, that the sender has no reasonable excuse for sending the message. Although not determinative, one of the factors that the court must consider (if it is relevant in a particular case) is whether the message is, or is intended to be, a contribution to a matter of public interest.


A platform faced with a complaint that a post is illegal by virtue of this offence would be put in the position of making a judgment on public interest, applying the standard of whether it has reasonable grounds to infer illegality. During the Commons Committee stage the then Digital Minister Chris Philp elaborated on the task that a platform would have to undertake. It would, he said, perform a “balancing exercise” in assessing whether the content was a contribution to a matter of public interest. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]


The House of Lords Communications and Digital Committee Report on Freedom of Speech in the Digital Age contains the following illuminating exchange: ‘We asked the Law Commission how platforms’ algorithms and content moderators could be expected to identify posts which would be illegal under its proposals. Professor Lewis told us: “We generally do not design the criminal law in such a way as to make easier the lives of businesses that will have to follow it.”’ However, it is the freedom of speech of users, not businesses, that is violated by the arbitrariness inherent in requiring platforms to adjudge vague laws.

Platforms would be required to filter users’ posts.


Highly likely, at least for some platforms. All platforms would be under a duty to take proportionate proactive steps to prevent users encountering priority illegal content, and (for services likely to be accessed by children) to prevent children from encountering priority content harmful to children. The Bill gives various examples of such steps, ranging from user support to content moderation, but the biggest clues are in the Code of Practice provisions and the enforcement powers granted to Ofcom.


Ofcom is empowered to recommend in a Code of Practice (if proportionate for a platform of a particular kind or size) proactive technology measures such as algorithms, keyword matching, image matching, image classification or behaviour pattern detection in order to detect publicly communicated content that is either illegal or harmful to children. Its enforcement powers similarly include use of proactive technology. Ofcom would have additional powers to require accredited proactive technology to be used in relation to terrorism content and CSEA (including, for CSEA, in relation to private messages).

The Bill regulates platforms, not users.


False dichotomy. The Bill certainly regulates platforms, but does so by pressing them into service as proxies to control content posted by users. The Bill thus regulates users at one remove. It also contains new criminal offences that would be committed directly by users.

The Bill outlaws hurting people’s feelings.


No, but the new harmful communications offence comes close. It would criminalise sending, with no reasonable excuse, a message carrying a real and substantial risk that it would cause psychological harm – amounting to at least serious distress – to a likely member of the audience, with the intention of causing such harm. There is no requirement that the response of a hypothetical seriously distressed audience member should be reasonable. One foreseeable hypersensitive outlier is enough. Nor is there any requirement to show that anyone was actually seriously distressed.

The Law Commission, which recommended this offence, considered that it would be kept within bounds by the need to prove intent to cause harm and the need to prove lack of reasonable excuse, both to the criminal standard. However, the standard to which platforms will operate in assessing illegality is reasonable grounds to infer. [Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill also refers to psychological harm in other contexts, but without defining it further. The government intends that psychological harm should not be limited to a medically recognised condition.

The Bill recriminalises blasphemy.


Quite possibly. Blasphemy was abolished as a criminal offence in England and Wales in 2008 and in Scotland in 2021. The possible impact of the harmful communications offence (see previous item) has to be assessed against the background that people undoubtedly exist who experience serious distress (or at least claim to do so) upon encountering content that they regard as insulting to their religion.
[Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

The Bill is all about Big Tech and large social media companies.

No. Whilst the biggest “Category 1” services would be subject to additional obligations, the Bill’s core duties would apply to an estimated 25,000 UK service providers from the largest to the smallest, and whether or not they are run as businesses. That would include, for instance, discussion forums run by not-for-profits and charities. Distributed social media instances operated by volunteers also appear to be in scope.

The Bill is all about algorithms that push and amplify user content.

No. The Bill makes occasional mention of algorithms, but the core duties would apply regardless of whether a platform makes use of algorithmic curation. A plain vanilla discussion forum is within scope.

The Secretary of State can instruct Ofcom to modify its Codes of Practice.


True. Section 40 of the Bill empowers the Secretary of State to direct Ofcom to modify a draft code of practice if the Secretary of State believes that modifications are required (a) for reasons of public policy, or (b) in the case of a terrorism or CSEA code of practice, for reasons of national security or public safety. The Secretary of State can keep sending the modified draft back for further modification.

A platform will be required to remove content that is legal but harmful to adults.

No. The legal but harmful to adults duty (should it survive in the Bill) applies only to Category 1 platforms and on its face only requires transparency. Some have argued that its effect will nevertheless be heavily to incentivise Category 1 platforms to remove such content. [Update: the government announced on 28 November 2022 that the legal but harmful to adults duty will be dropped.]

The Bill is about systems and processes, not content moderation.


False dichotomy. Whilst the Bill’s illegality and harm to children duties are couched in terms of systems and processes, it also lists measures that a service provider is required to take or use to fulfil those duties, if it is proportionate to do so. Content moderation, including taking down content, is in the list. It is no coincidence that the government’s Impact Assessment estimates additional moderation costs over a 10-year period at nearly £2 billion.

Ofcom could ban social media quoting features.

Indeterminate. Some may take the view that enabling social media quoting encourages toxic behaviour (the reason why the founder of Mastodon did not include a quote feature). A proponent of requiring more friction might argue that it is the kind of non-content oriented feature that should fall within the ‘safety by design’ aspects of a duty of care – an approach that some regard as preferable to moderating specific content.

Ofcom deprecation of a design feature would have to be tied to some aspect of a safety duty under the Bill and perhaps to risk of physical or psychological harm. There would likely have to be evidence (not just an opinion) that the design feature in question contributes to a relevant kind of risk within the scope of the Bill. From a proportionality perspective, it has to be remembered that friction-increasing proposals typically strike at all kinds of content: illegal, harmful, legal and beneficial.  

Of course the Bill does not tell us which design features should or should not be permitted. That is in the territory of the significant discretion (and consequent power) that the Bill places in the hands of Ofcom. If it were considered to be within scope of the Bill and proportionate to deprecate a particular design feature, in principle Ofcom could make a recommendation in a Code of Practice. That would leave it to the platform either to comply or to explain how it satisfied the relevant duty in some other way. Ultimately Ofcom could seek to invoke its enforcement powers.

The Bill will outlaw end to end encryption.

Not as such, but… Ofcom will be given the power to issue a notice requiring a private messaging service to use accredited technology to scan for CSEA material. A recent government amendment to the Bill provides that a provider given such a notice has to make such changes to the design or operation of the service as are necessary for the technology to be used effectively. That opens the way to requiring E2E encryption to be modified if it is incompatible with the accredited technology – which might, for instance, involve client-side scanning. Ofcom can also require providers to use best endeavours to develop or source their own scanning technology.

The government’s response to the Pre-legislative Scrutiny Committee is also illuminating: “End-to-end encryption should not be rolled out without appropriate safety mitigations, for example, the ability to continue to detect known CSEA imagery.” 

The press are exempt.


True up to a point, but it’s complicated.

First, user comments under newspaper and broadcast stories are intended to be exempt as ‘limited functionality’ under Schedule 1 (but the permitted functionality is extremely limited, for instance apparently excluding comments on comments).

Second, platforms’ safety duties do not apply to recognised news publisher content appearing on their services. However, many news and other publishers will fall outside the exemption. 

Third, various press and broadcast organisations are exempted from the new harmful and false communications offences created by the Bill. 

[Update: the government announced on 28 November 2022 that the proposed new harmful communications offence will be dropped.]

[Updated 3 December 2022 to take account of the government announcement on 28 November 2022.]
