New online safety regulator aims to understand the benefits and limitations of automated moderation tools used by social networks, in a project that requires specialised support and care for staff
Regulator Ofcom has acquired tranches of “sensitive material” to test the capabilities of the automation technology used by online platforms to detect illegal or harmful content.
Measures set out in the Online Safety Act passed last year require the likes of Facebook, Twitter, Instagram and TikTok to take “proportionate measures to effectively mitigate and manage the risks of harm from illegal content” posted on their platforms. These new regulations will be overseen and enforced by Ofcom – which will have the power to issue fines of up to £18m or 10% of worldwide turnover, whichever is greater, for any firm that breaches the rules. In the case of Twitter, this could equate to almost £300m, while Facebook could theoretically be hit with a near-£11bn penalty.
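The penalty described above can be sketched as a simple calculation – the greater of £18m or 10% of worldwide turnover. The turnover figures below are illustrative approximations implied by the article's £300m and £11bn estimates, not official numbers.

```python
# Sketch of the maximum-fine calculation under the Online Safety Act:
# the greater of £18m or 10% of worldwide turnover.
# Turnover inputs are illustrative, inferred from the article's figures.

def max_fine(turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * turnover_gbp)

print(max_fine(3_000_000_000))    # 300000000.0   (~£300m, Twitter-scale turnover)
print(max_fine(110_000_000_000))  # 11000000000.0 (~£11bn, Facebook-scale turnover)
```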
A report commissioned by Ofcom and published last year found that “although online platforms… are actively fighting to improve online safety [and] many have invested significantly in developing novel processes and technologies that may help improve safety, there are still a number of limitations and difficulties with the use of automated content classifiers (ACCs) in online content moderation”.
ACCs are machine learning-based tools that are intended to automate the process of determining whether material posted to a platform should be published or not. This could include detecting the likes of hate speech, explicit material, or content that breaches intellectual property laws.
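In essence, an ACC maps a piece of content to a moderation decision. The toy sketch below illustrates the idea with a hypothetical keyword blocklist; production ACCs are trained machine-learning models, and the terms and labels here are invented for illustration only.

```python
# Minimal illustration of what an automated content classifier (ACC) does:
# map a piece of content to a moderation decision. Real ACCs are
# machine-learning models; this blocklist approach is a stand-in.

BLOCKED_TERMS = {"banned-term-a", "banned-term-b"}  # hypothetical examples

def classify(post: str) -> str:
    """Return a moderation decision ('publish' or 'remove') for a text post."""
    tokens = set(post.lower().split())
    if tokens & BLOCKED_TERMS:
        return "remove"   # content matched a restricted category
    return "publish"      # no restricted match found

print(classify("an ordinary holiday photo caption"))  # publish
print(classify("text containing banned-term-a"))      # remove
```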
To help test the efficacy of this technology, on 1 April Ofcom entered into a one-month contract with Defined.AI, a company which specialises in “curating and providing high-quality and ethical data for AI applications”.
This includes a customisable data set of more than 300,000 images and 1,700 videos featuring “adult content” or other restricted or regulated material, as well as many other data sets of various forms of content.
According to a newly published commercial notice, “Ofcom requires curated data sets in compliance with data regulations for training and evaluating AI models”. As part of the engagement, which is valued at a little over £250,000, the supplier will provide the regulator with “data sets of sensitive material to allow Ofcom to test robustness and accuracy of automatic content classifiers”.
Related content
- Ofcom requires £113m extra investment and 100 new staff to deliver online safety regulation
- Former tech minister warns Online Safety Bill could weaken apps’ cyber protection
- Senior tech execs could face prison for breaching online safety laws
PublicTechnology understands that the data sets will contain a range of different types of sensitive material that platforms might be required to classify – and take steps to eradicate – under the terms of the online safety laws.
The tests are intended to provide Ofcom with insights into the accuracy and effectiveness of ACCs currently used by platforms, as well as detecting any possible biases or other issues. The findings will help support the regulator’s policymaking and inform its engagement with online platforms.
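One way such testing works is to score a classifier’s predictions against a labelled data set, reporting accuracy both overall and per subgroup – a large gap between groups can indicate bias. The sketch below uses invented prediction data; it is an illustration of the method, not Ofcom’s actual evaluation procedure.

```python
# Hedged sketch of evaluating a classifier for accuracy and bias:
# compare predictions to ground-truth labels overall and per group.
# The (prediction, label, group) triples below are invented examples.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that agree."""
    return sum(p == a for p, a in pairs) / len(pairs)

results = [
    ("remove", "remove", "en"), ("publish", "publish", "en"),
    ("remove", "remove", "en"), ("publish", "publish", "en"),
    ("publish", "remove", "fr"), ("remove", "remove", "fr"),
    ("publish", "publish", "fr"), ("publish", "remove", "fr"),
]

overall = accuracy([(p, a) for p, a, _ in results])
by_group = {
    g: accuracy([(p, a) for p, a, gg in results if gg == g])
    for g in {g for _, _, g in results}
}
print(overall)   # 0.75
print(by_group)  # {'en': 1.0, 'fr': 0.5} -- the gap suggests possible bias
```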
Given the nature of the data being handled, security measures in place for this work will include blurring and redaction of material, where required. Staff working on the project will also be given special training on safety considerations and how best to protect colleagues, PublicTechnology understands.
Such initiatives are part of wider measures for Ofcom staff whose work involves potentially distressing material, who are asked to take part in regular psychological assessments and are pointed towards specialist support, if required.
For its part, Defined.AI claims that it is “committed to ethical data collection practices, ensuring that our data sets are derived from fully consented, transparent processes [and] our global, diverse crowdsourcing strategy not only expands the dataset’s scope, but also steadfastly maintains standards of privacy and integrity”.
Having passed into law in October, the measures of the Online Safety Act are being rolled out incrementally, with Ofcom working on implementing the new regulatory regime in the most harmful areas first.