Lack of public support for AI risks ‘concerted backlash’, study suggests

Report from RSA finds that most people oppose use of automated decision-making in various areas of public service

Research has found that a majority of citizens oppose the use of automation tools to make decisions in various areas of the public sector, including criminal justice, immigration, and social support.

The Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) commissioned YouGov to poll 2,000 UK citizens, who were asked whether they were aware of the use of automated decision-making tools across a variety of sectors and use cases.

Many respondents were unaware of the use of automation in these areas: just 9% were familiar with its use in the criminal justice system, and the figures were only a little higher for immigration (14%), healthcare (18%), and social support (19%).

What is more, there is a stark lack of public support for the use of automation in all these areas, the research shows.

In criminal justice, just 12% support the use of automated decision-making tools, with 60% opposing them. For immigration, 16% are supportive and 54% are opposed, while in healthcare, 20% of respondents support the technology’s use and 48% oppose it. The use of automation in decisions about social support has the backing of just 17%, with 53% of citizens in opposition.


Respondents were asked to pick their two biggest concerns from a list of six. The absence of empathy in decisions that affect individuals and communities emerged as by far the biggest worry, cited by 61%.

A lack of accountability in decision-making was picked by 31%, ahead of a lack of oversight and regulation on 26%, and the loss of jobs on 22%. Some 18% believe AI could reinforce existing biases in decision-making systems, and 13% feel there is a lack of clarity in how decisions are reached. Just 6% indicated that they have no concerns.

When asked how their support for the use of automation could be increased, 29% said that nothing could do so.

But 36% said they would be more supportive if people had a right to request an explanation of how decisions were reached, while 33% would like to see punishments for companies that fail to comply with regulation on the monitoring and auditing of systems.

About one in four (26%) would feel better about the use of AI if there were common principles to guide organisations’ use of automated tools, while 24% would like governments and businesses to engage more with the public, and 20% would be receptive to automation if it was only used “if it could be explained to someone with no technical expertise”. Some 17% would welcome sector-specific frameworks to regulate the use of the technology.

In his foreword to the report, titled Artificial Intelligence: Real Public Engagement, RSA chief executive Matthew Taylor said that “it is urgent and vital to hear the voice of informed citizens in shaping norms, practice, and policies”.

“Currently, it can feel that the growing ubiquity and sophistication of AI is closely matched by growing public concern about its implications,” he said. “On the one hand, unless the public feels informed and respected in shaping our technological future, the sense will grow that ordinary people have no agency – a sense that is a major driver in the appeal of populism. At worst, it could lead to a concerted backlash against those perceived to be exploiting technological change for their own narrow benefit.”

Taylor added: “On the other hand, if those who will shape our technological future – from politicians and officials to corporate leaders and technologists themselves – trust, understand and act on informed public opinion, AI could prove to be a powerful tool to open up new opportunities for human fulfilment.”

 

Sam Trendall
