Report offers AI action plan to build public trust
Centre for Public Impact says that government employees need to gain AI literacy
A global report has revealed that nearly a third of citizens are strongly concerned that the ethical issues surrounding artificial intelligence (AI) have not been resolved when it comes to its use by public sector organisations.
The report, ‘How to make AI work in government and for people’, published by the global not-for-profit foundation the Centre for Public Impact, investigates the use of AI by public sector organisations worldwide to improve policymaking and service delivery. It draws on a survey that received more than 13,000 responses from 30 countries.
The report found that public trust is low due to over-hyped technology, scaremongering and anxiety over moral and ethical issues. It found that while the public supported governments using AI for process-heavy administrative tasks, trust declined where significant discretion was given over to AI in areas such as health diagnoses.
One of the ways for government organisations to improve take-up and gain public trust, it suggests, is for government employees to develop “AI literacy.”
“Train civil servants to spot potential applications in their work in government; enable frontline workers to enhance their AI collaboration skills,” it says. “They must be able to work with systems and their developers and constantly reassess whether the systems’ conclusions can be trusted; and encourage education and debate by engaging with the media, educational and cultural institutions, civil society groups and citizens to spread the word about AI”.
Other recommendations include constantly improving AI systems and adapting them to changing circumstances; being open with the public, employees and other organisations about what you are doing; and designing any AI intervention around the needs and problems of end users.
“It is essential to build these kind of authentic connections for the AI development process to succeed, and for end-users’ levels of trust in government to increase,” says the report.
In light of the survey finding that 54% of respondents were “very concerned” about the potential impact of AI on jobs, the report also suggests framing AI initiatives as solutions that replace tasks, rather than jobs.
Danny Buerkli, programme director at the Centre for Public Impact, said: “When it comes to AI in government we either hear hype or horror, but never the reality. AI in public services will not become a reality if it doesn’t have legitimacy.
“As data collection becomes easier and computing power increases, now is the right time to improve policymaking and service delivery with the help of AI, but the process needs to be introduced responsibly. Our strong advice is to start using AI in government gradually, look at where it can really help and build trust as we learn.
“AI in government services could create dramatic improvements in people’s lives. But AI also has the potential to drive a wedge between citizens and government and ultimately fail if not introduced with care.”
The report was launched at the Tallinn Digital Summit in Estonia.