Cyberleagle: Ofcom’s proactive technology measures: principles-based or vague?

Posted on August 5, 2025

Ofcom has
published its long-expected consultation on additional measures that it recommends
U2U platforms and search engines should implement to fulfil their duties under
the Online Safety Act.  The focus, this
time, is almost entirely on proactive technology: automated systems intended to
detect particular kinds of illegal content and content harmful to children,
with a view to blocking or swiftly removing them.

The
consultation marks a further step along the UK’s diverging path from the EU
Digital Services Act. The DSA prohibits the imposition of general monitoring
obligations on platforms. Those are just the kind of obligations envisaged by the
Online Safety Act’s preventative duties, which Ofcom is gradually fleshing out
and implementing.

Ofcom
finalised its first Illegal Harms Code of Practice in December 2024. For U2U services the Code contained two proactive technology recommendations: hash and URL matching for
CSAM. The initial consultation had also suggested fuzzy keyword matching to detect some kinds of fraud, but Ofcom did not proceed with that. The regulator indicated
that it would revisit fraud detection in a later, broader consultation. That
has now arrived.

The new U2U proposals go beyond fraud. They propose perceptual hash-matching for visual
terrorism content and for intimate image abuse content. They suggest that
content should be excluded from recommender feeds if there are indications that
it is potentially illegal, unless and until it is determined via content
moderation to be legal. 
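To make the mechanics concrete, the following is a minimal, purely illustrative sketch (in Python) of the two techniques described above: matching an image's perceptual hash against a database of known items using a Hamming-distance threshold, and withholding a flagged item from recommender feeds until moderation clears it. The hash length, the threshold, the field names and the database contents are all assumptions for the purposes of illustration; none of them comes from Ofcom's proposals.

    from dataclasses import dataclass

    # Hypothetical database of 64-bit perceptual hashes of known items, and an
    # assumed Hamming-distance tolerance for near-duplicates. Neither value is
    # taken from Ofcom's documents.
    KNOWN_HASHES = {0x9F3A5C7E12B4D680, 0x0123456789ABCDEF}
    HAMMING_THRESHOLD = 10

    @dataclass
    class Item:
        image_hash: int
        flagged_by_proactive_tech: bool = False
        cleared_by_moderation: bool = False

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two perceptual hashes."""
        return bin(a ^ b).count("1")

    def matches_known_content(image_hash: int) -> bool:
        """True if the hash is within the threshold of any known hash."""
        return any(hamming_distance(image_hash, known) <= HAMMING_THRESHOLD
                   for known in KNOWN_HASHES)

    def include_in_recommender_feed(item: Item) -> bool:
        """Withhold an item from recommendations while it is merely suspected
        to be illegal, unless and until moderation determines it is legal."""
        return not (item.flagged_by_proactive_tech
                    and not item.cleared_by_moderation)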

Most
ambitiously, Ofcom wants certain relatively large platforms to research the availability and suitability (in accordance with
proposed criteria) 
of proactive technology for detection of fraud and some other illegal behaviour, then
implement it if appropriate. Those platforms would also have to review existing
technologies that they use for these purposes and, if feasible, bring them into
line with Ofcom’s criteria.

Ofcom calls
this a ‘principles-based’ measure, probably because it describes a
qualitative evaluation and configuration process rather than prescribing any concrete parameters within
which the technology should operate.

Freedom of expression

Legal obligations
for proactive content detection, blocking and removal engage the fundamental freedom
of expression rights of users. Obligations must therefore comply with ECHR human
rights law, including requirements of clarity and certainty.

Whilst a principles-based
regime may be permissible, it must nevertheless be capable of predictable
application. Otherwise it will stray into impermissible vagueness. Lord Sumption in Catt said that what is required is a
regime the application of which is:

“reasonably predictable, if necessary with the assistance of
expert advice. But except perhaps in the simplest cases, this does not mean
that the law has to codify the answers to every possible issue which may arise.
It is enough that it lays down principles which are capable of being predictably
applied to any situation.”

In Re Gallagher he said that:

“A measure is not “in accordance with the law” if it purports
to authorise an exercise of power unconstrained by law. The measure must not
therefore confer a discretion so broad that its scope is in practice dependent
on the will of those who apply it, rather than on the law itself. Nor should it
be couched in terms so vague or so general as to produce substantially the same
effect in practice.”

Typically these
strictures would apply to powers and duties of public officials. The Online
Safety Act is different: it requires U2U service providers to make content
decisions and act (or not) to block or remove users’ posts. Thus the legal
regime that requires them to do that has to provide sufficient predictability
of their potential decisions and resulting acts.

In addition
to fraud and financial services offences, Ofcom’s proposed principles-based
measures would apply to image-based CSAM, CSAM URLs, grooming, and encouraging
or assisting suicide (or attempted suicide).

Any real-time
automated content moderation measure poses questions about human rights compatibility.
The auguries are not promising: proactive technology, armed only with the user’s
post and perhaps some other on-platform data, will always lack contextual
information. For many offences off-platform information can be the difference
between guilt and innocence.  Decisions
based on insufficient information inevitably stray into arbitrariness.

Then there
is the trade-off between precision and recall. Typically, the more target
content the automated tool is tuned to catch, the more false positives it will
also throw up. False positives result in collateral damage to legitimate
speech. It does not take many false positives to constitute disproportionate
interference with users’ rights of freedom of expression.
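A toy calculation illustrates the point. The figures below are invented purely to show the arithmetic of the trade-off; they describe no real system.

    # Invented figures showing how precision and recall pull in opposite
    # directions as a detection threshold is loosened.

    def precision(tp: int, fp: int) -> float:
        """Share of detected items that really are target content."""
        return tp / (tp + fp)

    def recall(tp: int, fn: int) -> float:
        """Share of target content that is actually detected."""
        return tp / (tp + fn)

    # Suppose 1,000 items of target content exist on the service.
    strict = dict(tp=600, fp=50, fn=400)      # cautious threshold
    lenient = dict(tp=950, fp=2000, fn=50)    # aggressive threshold

    for name, c in (("strict", strict), ("lenient", lenient)):
        print(f"{name}: precision={precision(c['tp'], c['fp']):.2f}, "
              f"recall={recall(c['tp'], c['fn']):.2f}, "
              f"false positives={c['fp']}")

    # strict:  precision=0.92, recall=0.60, false positives=50
    # lenient: precision=0.32, recall=0.95, false positives=2000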

Lord Grade,
the Chairman of Ofcom, said in a recent speech that the aims of tackling
criminal material and content that poses serious risks of harm to children’s
physical or emotional health were not in conflict with freedom of expression.
Indeed so, but focusing only on the aim misses the point: however worthy the
end, it is the means – in this case proactive technology – that matters.

Prescribed by law

Ofcom’s
Proactive Technology Draft Guidance says this about proportionality of the
proposed measures:

“Proactive technology used for detection of harmful content
involves making trade-offs between false positives and false negatives.
Understanding and managing those trade-offs is essential to ensure the
proactive technology performs proportionately, balancing the risk of
over-removal of legitimate content with failure to effectively detect harm.” (para
5.14)

Proportionality
is a requirement of human rights compliance. However, before considering proportionality
a threshold hurdle has to be surmounted: the ‘prescribed by law’ or ‘legality’ condition.
This is a safeguard against arbitrary restrictions – laws should be sufficiently
precise and certain that they have the quality of law.

The
prescribed by law requirement is an aspect of the European Convention on Human
Rights. It has also been said to be a UK constitutional principle that
underpins the rule of law:

“The acceptance of the rule of law as a constitutional
principle requires that a citizen, before committing himself to any course of
action, should be able to know in advance what are the legal consequences that
will flow from it.” (Lord Diplock, Black-Clawson [1975])

The
Constitutional Reform Act 2005 refers in S.1 to:

“the existing constitutional principle of the rule of law”.

For content
monitoring obligations the quality of law has two facets, reflecting the potential
impact of the obligations on the fundamental rights of both platforms and users.

The platform
aspect is written into the Act itself:

“the measures described in the code of practice must be
sufficiently clear, and at a sufficiently detailed level, that providers
understand what those measures entail in practice”. (Schedule 4)

The user
aspect is not spelled out in the Act but is no less significant for that. Where
a user’s freedom of speech may be affected by steps that a platform takes to
comply with its duties, any interference with the user’s right of freedom of
expression must be founded on a clear and precise rule.

That means
that a user must be able to foresee in advance with reasonable certainty
whether something that they have in mind to post is or is not liable to be
blocked, removed or otherwise affected as a result of the obligations that the
Act places on the platform.

That is not
simply a matter of users themselves taking care to comply with substantive law
when they consider posting content. The Act interpolates platforms into the
process and may require them to make judgements about whether the user’s post
is or is not illegal. Foreseeability is therefore a function both of the
substantive law and of the rules about how a platform should make those
judgements.

If, therefore,
the mechanism set up by the Act and Ofcom for platforms to evaluate, block and
take down illegal content is likely to result in unpredictable, arbitrary determinations
of what is and is not illegal, then the mechanism fails the ‘prescribed by law’
test and is a per se violation of the right of freedom of expression.

Equally, if
the regime is so unclear about how it would operate in practice that a court is
not in a position to assess its proportionality, that would also fail the
‘prescribed by law’ test. That is the import of Lord Sumption’s observations in
Catt and Gallagher (above).

A
prescriptive bright-line rule, however disproportionate it might be, would
satisfy the ‘prescribed by law’ test and fall to be assessed only by reference
to necessity and proportionality. Ofcom’s principles-based recommendations,
however, are at the opposite end of the spectrum: they are anything but a
bright-line rule. The initial ‘prescribed by law’ test therefore comes into
play.

How do Ofcom’s proposed measures stack up?

Service
providers themselves would decide how accurate the technology has to be, what
proportion of content detected by the technology should be subjected to human
review, and what is an acceptable level of false positives.

Whilst Ofcom
specifies various ‘proactive technology criteria’, those are expressed as qualitative
factors to be taken into account, not quantitative parameters. Ofcom does not
specify what might be an appropriate balance between precision and recall, nor
what is an appropriate proportion of human review of detected content.

Nor does
Ofcom indicate what level of false positives might be so high as to render the
technology (alone, or in combination with associated procedures) insufficiently
accurate.

Examples
of Ofcom’s approach include:

“However, there are some limitations to the use of proactive
technology in detecting or supporting the detection of the relevant harms. For
example, proactive technology does not always deal well with nuance and context
in the same way as humans.

However, we mitigate this through the proactive technology
criteria which are designed to ensure proactive technology is deployed in a way
that ensures an appropriate balance between precision and recall, and that an
appropriate proportion of content is reviewed by humans.” (Consultation, para
9.92)

“Where a service has a higher tolerance for false positives,
more content may be wrongly identified. … The extent of false positives will
depend on the service in question and the way in which it configures its
proactive technology. The measure allows providers flexibility in this regard,
including as to the balance between precision and recall (subject to certain
factors set out earlier in this chapter).” (Consultation, paras 9.135, 9.136)

“… when determining what is an appropriate proportion of
detected content to review by humans, providers have flexibility to decide what
proportion of detected content it is appropriate to review, however in so
doing, providers should ensure that the following matters are taken into
account…” (Consultation, para 9.19)

“However, in circumstances where false positives are consistently high and cannot be meaningfully reduced or mitigated, particularly where this may have a significant adverse impact on user rights, providers may conclude that the proactive technology is incapable of meeting the criteria.” (Proactive Technology Draft Guidance, para 5.19)

How high is high? How significant is significant? No answer is given, other than that the permissible level of false positives is related to the nature of the subsequent review of detected content. As we shall see, the second stage review does not require all content detected by the proactive technology to be reviewed by human beings. The review could, seemingly, be conducted by a second automated system.

The result is that two service providers in similar circumstances could arrive at completely
different conclusions as to what constitutes an acceptable level of legitimate
speech being blocked or taken down. Ofcom acknowledges that the flexibility of its
scheme:

“could lead to significant variation in impact on users’
freedom of expression between services”. (Consultation, para 9.136)

That must
raise questions about the predictability and foreseeability of the regime.

If the
impact on users’ expression is not reasonably foreseeable, that is a quality of
law failure and no further analysis is required. If that hurdle were
surmounted, there is still the matter of what level of erroneous blocking and
removal would amount to a disproportionate level of interference with users’
legitimate freedom of expression. 

Proportionality?

Ofcom concludes that:

“Having taken account of the nature and severity of the harms
in question, the principles we have built into the measure to ensure that the
technology used is sufficiently accurate, effective and lacking in bias, and
the wider range of safeguards provided by other measures, we consider overall
that the measure’s potential interference to users’ freedom of expression to be
proportionate.” (Consultation, para 9.154)

However, it
is difficult to see how Ofcom (or anyone else) can come to any conclusion as to
the overall proportionality of the recommended principles-based measures when they
set no quantitative or concrete parameters for precision versus recall,
accuracy of review of suspect content, or an ultimately acceptable level of
false positives.

Ofcom’s
discussion of human rights compliance starts with proportionality. While it notes
that the interference must be ‘lawful’, there is no substantive discussion of
the ‘prescribed by law’ threshold.

Prior restraint

Finally, on
the matter of human rights compatibility, proactive detection and filtering
obligations constitute a species of prior restraint (Yildirim v Turkey
(ECtHR), Poland v The European Parliament and Council (CJEU)).

Prior
restraint is not impermissible. However, it does require the most stringent
scrutiny and circumscription, in which risk of removal of legal content will
loom large. The ECtHR in Yildirim noted that “the dangers inherent in
prior restraints are such that they call for the most careful scrutiny on the
part of the Court”.

The proactive technology criteria

Ofcom’s proactive
technology criteria are, in reality, framed not as a set of criteria but as a
series of factors that the platform should take into account. Ofcom describes them as “a practical,
outcomes-focused set of criteria.” (Consultation, para 9.13)

Precision and recall

One criterion
is that the technology has been evaluated using “appropriate” performance
metrics and

“configured so that its performance strikes an appropriate
balance between precision and recall”.  (Recommendation
C11.3(c))

Ofcom evidently appreciated that, without elaboration, “appropriate” was an impermissibly vague determinant. The draft Code of Practice goes on (Recommendation C11.4(a)):

“when configuring the technology so that it strikes an
appropriate balance between precision and recall, the provider should ensure
that the following matters are taken into account:

i) the service’s risk of relevant harm(s), reflecting the
risk assessment of the service and any information reasonably available to the
provider about the prevalence of target illegal content on the service;

ii) the proportion of detected
content that is a false positive;

iii) the effectiveness of the systems and/or processes used
to identify false positives; and

iv) in connection with CSAM or grooming, the importance of
minimising the reporting of false positives to the National Crime Agency (NCA)
or a foreign agency;”

These
factors may help a service provider tick the compliance boxes – ‘Yes, I have
taken these factors into account’ – but they do not amount to a concrete
determinant of what constitutes an appropriate balance between precision and
recall.
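A sketch of the kind of exercise this leaves to the provider follows. The evaluation data, the sweep of thresholds and, crucially, the ceiling on the tolerable false-positive share are all invented for illustration; Recommendation C11.4(a) supplies the factors but none of the numbers.

    # Illustrative configuration exercise: sweep a confidence threshold and pick
    # the lowest one (i.e. highest recall) whose false-positive share stays
    # within a tolerance the provider has had to choose for itself.

    # Each tuple: (classifier confidence score, ground-truth label from an
    # internal evaluation set). Entirely invented.
    EVALUATION_SET = [(0.95, True), (0.90, True), (0.80, False), (0.75, True),
                      (0.60, False), (0.55, True), (0.40, False), (0.30, False)]

    # The figure the measure itself never specifies.
    MAX_FALSE_POSITIVE_SHARE = 0.25

    def false_positive_share(threshold: float) -> float:
        """Proportion of detections at this threshold that are false positives."""
        detected = [label for score, label in EVALUATION_SET if score >= threshold]
        if not detected:
            return 0.0
        return sum(1 for label in detected if not label) / len(detected)

    def choose_threshold() -> float:
        """Lowest threshold whose false-positive share is within tolerance."""
        for threshold in (0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9):
            if false_positive_share(threshold) <= MAX_FALSE_POSITIVE_SHARE:
                return threshold
        return 0.9

    print(choose_threshold())   # 0.7 with these invented figures

With a different self-chosen tolerance, the same evaluation data would of course yield a different threshold.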

Review of detected content

Accuracy
of the proactive technology is, as already alluded to, only the first stage of
the recommended process. The service provider has to treat a detected item as
providing ‘reason to suspect’ that it is illegal content, then move on to a
second stage: review.

“Where proactive technology detects or supports the detection
of illegal content and/or content harmful to children, providers should treat
this as reason to suspect that the content may be target illegal content and/or
content harmful to children.

Providers should therefore take appropriate action in line
with existing content moderation measures, namely ICU C1 and ICU C2 (in the
Illegal Content User-to-user Codes of Practice) and PCU C1 and PCU C2 (in the
Protection of Children User-to-user Code of Practice), as applicable.”
(Consultation, para 9.74)

That is
reflected in draft Codes of Practice paras ICU C11.11, 12.9 and PCU C9.9, 10.7.
For example:

“ICU C11.11 Where proactive technology detects, or supports
the detection of, target illegal content in accordance with ICU C11.8(a), the
provider should treat this as reason to suspect that the content may be illegal
content and review the content in accordance with Recommendation ICU C1.3.”

‘Review’
does not necessarily mean human review. Compliance with the proactive
technology criteria requires that:

“…policies and processes are in place for human review and
action is taken in accordance with that policy, including the evaluation of
outputs during development (where applicable), and the human review of an appropriate proportion of the outputs of the proactive technology during deployment.
Outputs should be explainable to the extent necessary to support meaningful
human judgement and accountability.” (Emphasis added) (draft Code of Practice
Recommendation ICU C11.3(g))

The
consultation document says:

“It should be noted that this measure does not itself
recommend the removal of detected content. Rather, it recommends that providers
moderate detected content in accordance with existing content moderation
measures (subject to human review of an appropriate proportion of detected
content, as mentioned above).” (Consultation, para 9.147)

And:

“Providers have flexibility in deciding what proportion of
detected content is appropriate to review, taking into account [specified
factors]” (Consultation, para 9.145)

Ofcom has
evidently recognised that “appropriate proportion” is, without elaboration, another 
impermissibly vague determinant. It adds (Recommendation C11.4(b)):

“when determining what is an appropriate proportion of
detected content to review by humans, the provider should ensure that the
following matters are taken into account:

i) the principle that the resource dedicated to review of
detected content should be proportionate to the degree of accuracy achieved by
the technology and any associated systems and processes;

ii) the principle that content with a higher likelihood of
being a false positive should be prioritised for review; and

iii) in the case of CSAM or grooming, the importance of
minimising the reporting of false positives to the NCA or a foreign agency.”

As with
precision and recall, these factors may help a service provider tick the compliance
boxes but are not a concrete determinant of the proportion of detected content
to be submitted to human review in any particular case.
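By way of illustration only, factor (ii) might translate into something like the following, in which the ‘appropriate proportion’ itself, the field names and the use of model confidence as a proxy for false-positive likelihood are all assumptions.

    # Illustrative only: send a self-chosen proportion of detected items to
    # human reviewers, lowest-confidence first, on the basis that low-confidence
    # detections are likelier to be false positives.

    HUMAN_REVIEW_PROPORTION = 0.20   # the provider's own 'appropriate proportion'

    def select_for_human_review(detections: list[dict]) -> list[dict]:
        """Pick the chosen proportion of detections for human review,
        prioritising those most likely to be false positives."""
        budget = max(1, int(len(detections) * HUMAN_REVIEW_PROPORTION))
        return sorted(detections, key=lambda d: d["confidence"])[:budget]

    detections = [{"id": i, "confidence": c}
                  for i, c in enumerate([0.99, 0.62, 0.91, 0.55, 0.87, 0.73])]
    print([d["id"] for d in select_for_human_review(detections)])   # [3]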

Second stage review – human, more technology or neither?

The upshot
of all this appears to be that content detected by the proactive technology
should be subject to review in accordance with the Code of Practice moderation
recommendations; and that an ‘appropriate proportion’ of that content should be
subject to human review.

But if only
an ‘appropriate proportion’ of content detected by the proactive technology has
to be subject to human review, how is the rest to be treated? Since it appears
that some kind of ‘appropriate action’ is contemplated in accordance with
Ofcom’s content moderation recommendations, the implication appears to be that moderation
at the second stage could be by some kind of automated system.
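If that reading is right, the arrangement would look something like the sketch below, in which detected items not selected for human review are judged by a second automated classifier. Both classifiers, their thresholds and the routing are hypothetical; nothing in the draft codes specifies them.

    # Hypothetical two-stage arrangement: stage-one proactive detection provides
    # 'reason to suspect'; detected items either go to human review (the
    # 'appropriate proportion') or are judged by a second automated system.

    def proactive_detector(post: str) -> float:
        """Stage one: returns a suspicion score (stub standing in for a model)."""
        return 0.8 if "scam" in post else 0.1

    def second_stage_classifier(post: str) -> float:
        """Stage two: a hypothetically higher-precision automated review."""
        return 0.9 if "send me your bank details" in post else 0.2

    def moderate(post: str, route_to_human: bool) -> str:
        if proactive_detector(post) < 0.5:
            return "leave up"                    # not detected at stage one
        if route_to_human:
            return "queue for human review"      # the 'appropriate proportion'
        # The remainder is, in effect, judged by the second automated system.
        return "remove" if second_stage_classifier(post) >= 0.5 else "leave up"

    print(moderate("investment scam warning shared by a journalist",
                   route_to_human=False))        # 'leave up' at stage two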

In that
event it would seem that the illegal content judgement itself would be made by
that second stage technology in accordance with Recommendation C1.3.

Recommendation
C1.3, however, does not stipulate the accuracy of second stage automated
technology. The closest that the Code of Practice comes is ICU C4.2 and 4.3:

“The provider should set and record performance targets for
its content moderation function, covering at least:

a) the time period for taking
relevant content moderation action; and

b) the accuracy of decision making.

In setting its targets, the provider should balance the need
to take relevant content moderation action swiftly against the importance of
making accurate moderation decisions.”

Once again,
the path appears to lead to an unpredictable balancing exercise by a service
provider.

Curiously, elsewhere
Ofcom appears to suggest that second stage “complementary tools” could in some cases
merely be an ‘additional safeguard’:

“What constitutes an appropriate balance between precision
and recall will depend on the nature of the relevant harm, the level of risk
identified and the service context. For example, in some cases a provider might
optimise for recall to maximise the quantity of content detected and apply
additional safeguards, such as use of complementary tools or increased levels
of human review, to address false positives. In other cases, higher precision
may be more appropriate, for example, to reduce the risk of adverse impacts on
user rights.” (Proactive Technology Draft Guidance, para 5.18)

If the
implication of ‘in some cases’ is that in other cases acting on the output of
the proactive technology without a second stage review would suffice, that would
seem to be inconsistent with the requirement that all detected content be subject
to some kind of moderation in accordance with Recommendation C1.3.

Moreover,
under Ofcom’s scheme proactive technology is intended only to provide ‘reason
to suspect’ illegality. That would not conform to the standard stipulated by
the Act for an illegal content judgement: ‘reasonable grounds to infer’.

Conclusion

When, as
Ofcom recognises, the impact on users’ freedom of expression will inevitably
vary significantly between services, and Ofcom’s documents do not condescend to what
is or is not an acceptable degree of interference with legitimate speech, it is
difficult to see how a user could predict, with reasonable certainty, how their
posts are liable to be affected by platforms’ use of proactive technology in
compliance with Ofcom’s recommendations.

Nor is it
easy to see how a court would be capable of assessing the proportionality of the
measures. As Lord Sumption observed, the regime should not be couched in terms so vague or so general as, substantially, to confer a discretion so broad that its scope is in practice dependent on the will of those who apply it. Again, Ofcom’s acknowledgment that the flexibility of its scheme could lead to significant variation in impact on users’ freedom of expression does not sit easily with that requirement.

Ofcom, it
should be acknowledged, is to an extent caught between a rock and a hard place. It has to
avoid being overly technology-prescriptive, while simultaneously ensuring that
the effects of its recommendations are reasonably foreseeable to users and
capable of being assessed for proportionality. Like much else in the Act, that
may in reality be an impossible circle to square. That does not bode well for
the Act’s human rights compatibility.
