Law proposes ‘explicit duty’ for online firms to design services to prevent illegal content

Written by Sam Trendall on 24 August 2022 in News

Government continues to tweak and reinforce the provisions of the Online Safety Bill


Proposed new laws will introduce an “explicit duty” for firms to design websites and services in a way that mitigates the risk that their platform will host illegal activity or content.

Government has announced some small – but significant – tweaks to its Online Safety Bill. If it passes into law, the bill will give the UK one of the world’s most far-reaching legal frameworks for the online world, including some of the biggest potential punishments for breaches.

The latest amendments relate to the duties placed on companies in respect of illegal content and activity on their platforms.

A primary intention of the changes is to make it clear that, as far as possible, preventing illegality and harm should be built into websites and digital services as part of their design process. This includes consideration of online grooming, and other instances where the platform in question is not the location of the crime, but may play a key role in enabling its perpetration.

“The amendments introduce an explicit duty on companies relating to the design of their services,” said a government fact sheet. “Companies must take a safety-by-design approach, managing the risk of illegal content and activity appearing on their services, rather than focusing primarily on content moderation.”


It added: “In addition, these amendments make clear that platforms have duties to mitigate the risk of their service ‘facilitating’ an offence, including where that offence may actually occur on another site – such as can occur in cross-platform child sexual exploitation and abuse (CSEA) offending – or even offline. This addresses concerns raised by stakeholders that the bill will not adequately tackle activities such as breadcrumbing, where CSEA offenders are posting links or having conversations on a particular site, which are preparatory to a CSEA offence, which then might occur on a different platform, or even offline.”

Although the changes are intended to ensure that companies try to prevent crime taking place on their platforms – rather than just clamping down after it has been detected – further tweaks have also been made to clarify obligations regarding reactive moderation, and “how providers should determine whether content amounts to illegal content”.

An additional clause has been added to the bill with the aim of ensuring that “providers’ systems and processes should consider all reasonably-available contextual information when making judgements” about possible illegal content – including referring to other duties placed on them by other parts of the Online Safety Bill.

The clause also includes specific provisions aimed at ensuring websites implement appropriate processes to prevent the publication of fraudulent adverts.

These processes should support companies to “ascertain whether, on the basis of all the reasonably available information, there are reasonable grounds to infer that: a) all the relevant elements of [an] offence, including the mental elements, are present; and/or b) no defence is available”, the fact sheet said.

Another newly introduced clause places a requirement on Ofcom, which will serve as the UK’s online harms regulator, to publish guidance on how online firms should make judgements about possible illegal content.

“We expect this will include examples of the kind of contextual and other information that is likely to be relevant when drawing inferences about mental elements and defences, and how far providers should go in looking for that information,” the government said.

Social networks – such as Facebook, Twitter, and TikTok – would be among those most heavily impacted by the new laws, if and when they come into effect. 

Failure to meet their legal obligations could see companies punished by fines of up to 10% of their global turnover. This would equate to more than £5bn in Facebook’s case, or about £250m for Twitter. Senior managers at firms in breach of the laws could also be sentenced to two years in prison.

Amendments to the Online Safety Bill are currently being considered by MPs, who will then debate and vote on the bill at its third reading. If it passes this stage, it will move on to the House of Lords.

The latest tweaks were made following “engagement with a broad range of stakeholders”, according to the government.


About the author

Sam Trendall is editor of PublicTechnology. He can be reached on sam.trendall@dodsgroup.com.

