Research from regulator Ofcom – which is responsible for online safety – finds that exposure is particularly prevalent among children and women, and that the content includes scams, as well as sexual and political material
A large proportion of UK citizens are being exposed to online deepfakes, with children especially affected, Ofcom research has found.
By comparing results from two YouGov surveys, the research revealed differences between those aged eight to 15 and those aged 16 and over. It showed that, since the beginning of the year, half of the younger age group had seen at least one deepfake, compared with 43% of the older group.
Meanwhile, females in both age groups reported higher exposure, with a gap of more than 15 percentage points in the adult group. However, children were found to be more confident in their ability to detect deepfakes, with their reported levels 11% higher than those of the adult cohort.
Social media apps and video-sharing platforms such as YouTube ranked as the top sources of deepfake content in both surveys.
The most common type of content that eight- to 15-year-olds said they had encountered was a funny or satirical deepfake, followed by scam adverts.
Deepfakes of a politician or a political event ranked highest for adults. In addition, the adult survey measured the prevalence of sexual deepfakes: almost 15% of respondents reported having seen one since the beginning of the year, with two in 10 of these saying it had depicted an underage person.
The results come after research by the University of Edinburgh revealed that more than 300 million people across the world were victims of online sexual abuse over the past year.
Representatives of the government’s controversial National Security Online Information Team – formerly known as the Counter Disinformation Unit – recently identified deepfakes as one of three core areas of focus, alongside threats posed by hostile states and the potential undermining of democratic processes.
In the run-up to the recent general election, the government published dedicated guidance, as part of its official security guidelines, on the risks posed to the electoral process by deepfakes and other issues created by generative artificial intelligence.
This story originally appeared on PublicTechnology sister publication Holyrood.