Government told to establish effective regulation of AI

The government needs to get the ethical framework and regulation of artificial intelligence right and invest in skills training if it wants to make the most of the technology, the Government Office for Science has said.

Regulation and ethics must top government’s AI agenda – Photo credit: Flickr, Victory of the People, CC BY 2.0

In a paper on the topic published this week, the office – which is led by government chief scientific adviser Mark Walport – set out the opportunities and implications for decision-making related to the rise of AI.

It said that there was great potential for increasing productivity – for instance by improving efficiency and streamlining interaction with large datasets – but that the ethical implications needed proper consideration.


AI refers to the computational analysis of data that is then used to make predictions or find patterns – by setting a defined outcome, a computer can use an algorithm to find that outcome on its own.

At a basic level, this is used to make film recommendations on Netflix or sort junk mail, but work is now focusing on ‘deep learning’. This involves using a labelled dataset to train a model on the right answers and then asking the computer to locate those answers in a new, unlabelled dataset.
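To make that labelled-then-unlabelled workflow concrete, the sketch below shows a minimal junk-mail classifier in Python using scikit-learn. The library choice and the toy messages are assumptions made for illustration only; a genuine deep learning system would use a neural network rather than this simple model, but the train-on-labels, predict-on-new-data pattern is the same.

```python
# A minimal sketch of the supervised workflow described above.
# The data and the choice of scikit-learn are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Labelled training data: messages with known answers (1 = junk, 0 = not junk)
messages = ["win a free prize now", "meeting moved to 3pm",
            "claim your reward now", "minutes from today's meeting"]
labels = [1, 0, 1, 0]

vectoriser = CountVectorizer()
model = MultinomialNB()
model.fit(vectoriser.fit_transform(messages), labels)

# New, unlabelled data: the model locates the "right answers" on its own
new_messages = ["free reward waiting", "agenda for tomorrow's meeting"]
print(model.predict(vectoriser.transform(new_messages)))  # expected: [1 0]
```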

The idea is that eventually the computer will be able to make decisions without human input, and adapt its own algorithms based on the answers it finds – driverless vehicles are an example of this as they make decisions based on their surroundings without human direction.

For government, such processes can be used at a basic level to automate repetitive tasks, or to help officials use data and machine learning to make better-informed decisions and tailor services to people’s needs.

HMRC recently revealed that it has 30 robotic automation projects underway, while recent surveys have shown that many councils are looking at automated learning processes to help them make cost savings.

However, the paper said that the use of these machine learning technologies comes with wide-ranging ethical implications.

Allowing machines to carry out statistical profiling, which involves using past data to predict likely actions, could lead to unjustly stereotyping individuals on the basis of their ethnicity or lifestyle, the report said.

This is particularly true of deep learning systems, which learn from historic data created in an environment of human bias and could therefore perpetuate biases already present in society.

For instance, a machine learning about university admissions will base its algorithms on historical admissions data, which reflects the conscious or unconscious biases of earlier processes.

To mitigate this risk, the report recommended that technologists identify biases in their data and take steps to assess their impact. Other options include stripping the data of criteria that could be used in this way, or using these techniques to identify districts that might benefit from early intervention, rather than individuals.
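As an illustration of the stripping approach, the hedged sketch below drops protected or proxy columns from a table of applications before it reaches a model. The column names and data are hypothetical, and removing columns alone does not eliminate bias if other features act as proxies – which is why the report also recommends identifying biases in the data and assessing their impact.

```python
# A sketch of one mitigation the report mentions: stripping records of
# criteria that could be used to profile individuals. Column names and
# values are made up for the example.
import pandas as pd

applications = pd.DataFrame({
    "grades":    [82, 74, 91, 68],
    "ethnicity": ["A", "B", "A", "B"],       # protected characteristic
    "postcode":  ["N1", "E5", "N1", "E5"],   # possible proxy for lifestyle
    "admitted":  [1, 0, 1, 0],               # historical, possibly biased, outcome
})

SENSITIVE = ["ethnicity", "postcode"]
features = applications.drop(columns=SENSITIVE + ["admitted"])
target = applications["admitted"]
# `features` and `target` would then be passed to a model's fit() method
```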

Jobs and skills

Critics are also concerned that these moves will end up replacing people’s jobs – a recent survey by Deloitte suggested 35% of UK jobs would be affected by automation over the next two decades.

However, many counter this by saying the idea is to free up people’s time to take on more customer-focused, complex roles that cannot be done by machines.

The paper echoed these comments, saying that automation is likely to change the types of jobs available, and pointed to research by the US group Pew Research Center in which 53% of experts said it would actually create more jobs.

However, the paper said it was likely that in future people would need skills that complement the technology, and that rapid technological change could mean a decline in the value of job-specific skills and people having to change roles more often.

“This emphasises the need for reskilling over the course of a career and the need to be pro-active, open to change and resilient,” the paper said.

“It also means that ‘general purpose’ skills, like problem solving and mental flexibility, that are transferrable across different domains could be increasingly valuable.”

In addition, the paper noted that, particularly for government, there may be a need to have humans involved.

For instance, it said, it would be deemed unsuitable to hand over many of the decisions government makes entirely to machines and so it is likely there will need to be a “human in the loop”.

However, the report said that this role would not be straightforward.

“If they never question the advice of the machine, the decision has de facto become automatic and they offer no oversight,” it said.

“If they question the advice they receive, however, they may be thought reckless, more so if events show their decision to be poor.”
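One way to read the report’s “human in the loop” concern is as a design requirement: the machine’s advice and the human’s decision must both be recorded, so that rubber-stamping is visible and overrides are auditable. The sketch below is a minimal, hypothetical illustration of that pattern; the scoring model, threshold and case identifiers are assumptions, not anything specified in the report.

```python
# A minimal "human in the loop" pattern: the machine advises, a person
# decides, and both are logged so oversight can be audited. All names
# and the threshold are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("decisions")

def decide(case_id: str, machine_score: float, threshold: float = 0.5) -> bool:
    advice = machine_score >= threshold
    log.info("case=%s machine_advice=%s score=%.2f", case_id, advice, machine_score)
    answer = input(f"Case {case_id}: machine advises {advice}. Accept? [y/n] ")
    decision = advice if answer.lower() == "y" else not advice
    # Recording overrides makes it visible when advice is never questioned
    log.info("case=%s human_decision=%s overridden=%s",
             case_id, decision, decision != advice)
    return decision
```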

The report also noted that teams using machine learning or AI must be aware of the legal framework governing their use.

The use of data is closely protected and regulated and the report said that, if analysts are to use new technologies, government must make sure this takes place “in a safe and controlled environment”.

Finally, the report urged the government to be open, transparent and accountable – including openly discussing mistakes made through AI – to encourage the public to trust that it is making best use of the new technology and public data.

“Trust is underpinned by trustworthiness,” the report said. “Whilst this can be difficult to demonstrate in complex technical areas like artificial intelligence, it can be engendered from consistency of outcome, clarity of accountability, and straightforward routes of challenge.”

Rebecca Hill
