The Online Safety Bill is back in Parliament. It had been stalled for five months whilst the government made a few changes. A Parliamentary debate on Monday (5th December) revealed the shift in policy direction for the first time. It's a relatively small change, with big implications.
According to the government, the Online Safety Bill is supposed to protect children. However, from a digital rights perspective it is probably the most worrying piece of legislation yet conceived. The government's focus is on the content it wants to ban, with little attention paid to the impact on freedom of expression or privacy. The lack of definition or precision in the text leaves loopholes wide open for over-removal of content and the possibility of government-imposed, privatised surveillance.
The emphasis was on new amendments to be tabled early next year. Self-harm content, deepfakes and the sharing of non-consensual intimate images will be defined as new criminal offences, and therefore as illegal content.
The subtle policy shift turns on a requirement for large online platforms to tackle so-called "legal but harmful" content. This is a legally problematic grey area: content that is not illegal but which the government wants to ban, understood to include material promoting eating disorders and self-harm, and false claims about medicines.
The government has announced a plan to delete this requirement, but only
for adult users, not for children. An amendment will be tabled next week.
A further, legally problematic amendment requires platforms to allow adult users to filter out these kinds of harmful content for themselves. The idea is a kind of filter button with which users can select the types of harmful content they don't want to see.
In tandem, there will be an amendment requiring online platforms to enforce their terms and conditions with regard to content that is not addressed by the Bill.
We have seen drafts of some of these amendments, and await the final versions.
This filter, together with the requirement to enforce terms and conditions, and an existing
requirement to remove all illegal content, is what the government is calling its "triple shield". The government claims this will protect users from the range of harms set out in the Bill. It also claims the move will protect free speech. This
claim does not stack up, as the underlying censorship framework remains in place, including the possibility of general monitoring and upload filters.
Moreover, the effect of these amendments is to militate in favour of age-gating.
The notion of "legal but harmful" content for children remains in the Bill. In Monday's debate, government Ministers emphasised the role of "age assurance", which the Bill requires but does not say how it should be implemented.
The government's position on age-gating is broader than just excluding under-18s from 'adult' content. The Secretary of State, Michelle Donelan, said that all platforms must know the age of their users. They may be required to differentiate between age groups in order to prevent children from engaging with age-inappropriate harmful content, as defined by the government. The likely methods will involve biometric surveillance.
MPs have also passed an amendment that confirms chat controls on private messaging services. This is the "spy clause", renumbered S.106 (formerly S.104). It's a stealth measure that is almost invisible in the text, with no precision as to what providers will be required to do. The government's preferred route is understood to be client-side scanning. This completes a trio of surveillance: public posts, private chats and children.