Ofcom tests accuracy of social platforms’ AI classification tools with ‘sensitive material’


New online safety regulator aims to understand the benefits and limitations of automated moderation tools used by social networks, in a project that requires specialised support and care for staff

Regulator Ofcom has acquired tranches of “sensitive material” to test the capabilities of the automation technology used by online platforms to detect illegal or harmful content.

Measures set out in the Online Safety Act passed last year require the likes of Facebook, Twitter, Instagram and TikTok to take “proportionate measures to effectively mitigate and manage the risks of harm from illegal content” posted on their platforms. These new regulations will be overseen and enforced by Ofcom – which will have the power to issue fines of up to £18m or 10% of worldwide turnover, whichever is greater, on any firm that breaches the rules. In the case of Twitter, this could equate to almost £300m, while Facebook could theoretically be hit with a near-£11bn penalty.

A report commissioned by Ofcom and published last year found that “although online platforms… are actively fighting to improve online safety [and] many have invested significantly in developing novel processes and technologies that may help improve safety, there are still a number of limitations and difficulties with the use of automated content classifiers (ACCs) in online content moderation”.

ACCs are machine learning-based tools intended to automate the process of determining whether material posted to a platform should be published. This could include detecting the likes of hate speech, explicit material, or content that breaches intellectual property laws.
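To illustrate the general idea – this is a minimal sketch, not any platform’s actual system – a text-based ACC might pair a feature extractor with a statistical classifier and a tunable decision threshold. Everything below, from the toy posts to the threshold value, is an invented assumption; only scikit-learn’s standard components are real:

```python
# A minimal sketch of how a text-based automated content classifier (ACC)
# might be built. The training posts, labels and threshold are illustrative
# placeholders, not any platform's real moderation system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled examples: 1 = violates policy, 0 = acceptable.
posts = [
    "buy cheap followers now",      # spam-like
    "lovely weather in London",     # benign
    "click here for free money",    # spam-like
    "great match last night",       # benign
]
labels = [1, 0, 1, 0]

# Pipeline: convert text to TF-IDF features, then fit a linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Score a new post; flag it if the estimated violation probability is high.
new_post = ["free money, click now"]
probability = classifier.predict_proba(new_post)[0][1]
THRESHOLD = 0.5  # platforms tune this to trade precision against recall
print("flag for review" if probability > THRESHOLD else "publish")
```

In practice, platforms’ classifiers are far larger and multimodal, but the same basic trade-off applies: lowering the threshold catches more harmful content at the cost of more wrongly flagged posts.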

To help test the efficacy of this technology, on 1 April Ofcom entered into a one-month contract with Defined.AI, a company which specialises in “curating and providing high-quality and ethical data for AI applications”.

The firm’s offerings include a customisable data set of more than 300,000 images and 1,700 videos featuring “adult content” or other restricted or regulated material, as well as many other data sets covering various forms of content.

According to a newly published commercial notice, “Ofcom requires curated data sets in compliance with data regulations for training and evaluating AI models”. As part of the engagement, which is valued at a little over £250,000, the supplier will provide the regulator with “data sets of sensitive material to allow Ofcom to test robustness and accuracy of automatic content classifiers”.


PublicTechnology understands that the data sets will contain a range of different types of sensitive material that platforms might be required to classify – and take steps to eradicate – under the terms of the online safety laws.

The tests are intended to provide Ofcom with insights into the accuracy and effectiveness of the ACCs currently used by platforms, as well as to detect any possible biases or other issues. The findings will help support the regulator’s policymaking and inform its engagement with online platforms.
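By way of a hedged illustration only – the regulator’s actual methodology has not been published – checking a classifier against a curated, labelled data set typically means computing measures such as precision and recall, and comparing error rates across subgroups of the data to surface possible bias. A minimal sketch, assuming scikit-learn and wholly invented labels:

```python
# A hedged sketch of the kind of evaluation a curated data set enables.
# The ground-truth labels, predictions and group tags below are invented
# for illustration; Ofcom's actual test methodology is not public.
from sklearn.metrics import precision_score, recall_score

# Ground truth from the curated data set (1 = sensitive, 0 = benign)
# versus what a platform's classifier predicted for the same items.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# Precision: of the items flagged, how many were truly sensitive?
# Recall: of the truly sensitive items, how many were flagged?
print(f"precision = {precision_score(y_true, y_pred):.2f}")
print(f"recall    = {recall_score(y_true, y_pred):.2f}")

# A simple bias check: compare error rates across subgroups of the data
# (e.g. by language or dialect). The group labels are purely illustrative.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
for g in ("a", "b"):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    errors = sum(y_true[i] != y_pred[i] for i in idx)
    print(f"group {g}: error rate = {errors / len(idx):.2f}")
```

A marked gap in error rates between subgroups would be one signal of the sort of bias the regulator’s tests are designed to detect.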

Given the nature of the data being handled, security measures in place for this work will include blurring and redaction of material, where required. Staff working on the project will also be given special training on safety considerations and how best to protect colleagues, PublicTechnology understands.

Such initiatives are part of a wider set of measures for Ofcom staff whose work involves potentially distressing material; these employees are asked to take part in regular psychological assessments and are pointed towards specialist support, if required.

For its part, Defined.AI claims that it is “committed to ethical data collection practices, ensuring that our data sets are derived from fully consented, transparent processes [and] our global, diverse crowdsourcing strategy not only expands the dataset’s scope, but also steadfastly maintains standards of privacy and integrity”.

The Online Safety Act passed into law in October and its measures are being rolled out incrementally, with Ofcom working to implement the new regulatory regime in the most harmful areas first.

Sam Trendall
