As automated technology becomes more widely used by public bodies, and for a broader range of purposes, Louise Crow of mySociety outlines why governance rules and processes must keep pace
As the government makes a significant investment in AI in public services, we need to update our transparency and accountability mechanisms to keep pace with the automation of state decision making.
Artificial Intelligence is increasingly being deployed in government – from assessing fraud in benefits to drafting communications to the public. For AI use in the public sector to be successful, it needs to be demonstrably effective and carefully implemented with input from those affected, with outcomes that are well understood, monitored and responded to, and with explicit human accountability for decision making.
Adequate transparency is a key enabler of those capabilities.
It is important not least because significant amounts of money are being spent – the government has already awarded contracts for more than £500m worth of AI projects this year, with an equally large expectation of cost savings as a result.
The changing nature of the systems being procured also means that there is a genuine risk of gaps opening up in understanding, transparency and accountability.
At a technology level, the direction of travel is not towards openness. We’re looking at a new generation of digital systems in which the rules of operation are not explicit. Large language models and generative AI systems like ChatGPT are built from generic neuron-like processing architectures called neural networks and huge amounts of training data.
In these systems, the rules by which inputs are transformed into outputs are implicit. The way they produce results is inherently more opaque. So much so that the process of reverse engineering neural networks to produce rules that humans can understand is a discipline in itself, and an emerging one at that.
What data the systems have been trained on is, by default, not disclosed by most companies. So, to the extent that these systems are used in government, there is a shift from the standing government technology principle of ‘be open and use open source code’ to a much more closed model, where the rules are implicit, having been derived from the training data, and the data itself is often not available.
Some of these systems will be developed in-house but, particularly in smaller authorities, many will be procured – and procured from tech companies that are now powerful entities in their own right. Microsoft’s market cap, for example, is of the same order of magnitude as the UK’s GDP, so there is a power imbalance in knowledge and resources between public authorities and big tech companies in the procurement of these systems.
The implicit decision making is not necessarily, in itself, a problem. We have lots of mechanisms for getting the original ‘neural networks’ – the ones inside our heads – to be accountable and transparent with respect to decision making. For example, we require the rationales for decisions to be documented and then made available via transparency mechanisms like Freedom of Information for people to form an opinion on.
We don’t yet have those mechanisms sufficiently established for the new generation of AI systems.
We urgently need to build evaluative capabilities, both inside and outside government, around this new technology as it is used in decision making – and transparency rights are a key tool for doing so.
Insights from FOI
At mySociety, we run WhatDoTheyKnow, a digital service that helps people submit Freedom of Information requests and maintains an archive of published responses. Through WhatDoTheyKnow, we can see that FOI is already providing a unique window into the current generation of government decision-making systems and the impact they are having. Over the last few years, the flexibility of FOI has allowed people to come at this question in multiple ways.
A few examples:
- The Public Law Project’s Tracking Automated Governance database has used WhatDoTheyKnow and other sources to show that there is a hinterland of algorithmic decision making in government that is not being declared in the official Algorithmic Transparency register.
- The Bureau of Investigative Journalism has used FOI to look specifically at the procurement of government data systems, finding many authorities unwilling or unable to specify how and why they purchased these services.
- The Data Justice Lab at Cardiff University has investigated what we can learn from failed and cancelled systems across fraud detection, child welfare and policing.
- Most importantly, FOI is accessible to anyone. In the Post Office Horizon scandal – one of the biggest miscarriages of justice in British history, which has ruined lives due to the supposed infallibility of an IT system – Eleanor Shaikh, not a journalist or professional investigator but someone who knew one of the subpostmasters involved, used FOI to obtain documents that had not previously been disclosed to the High Court.
Strengthening laws for the era of AI
As more generative AI and machine learning applications are procured in the public sector, it’s time to revisit the ICO’s proposals for strengthening the transparency requirements that go with contracts to perform public services. The government committed to this in 2024 as part of their Make Work Pay policy paper, but almost a year on there has been no movement on it.

For a strong version of what that would look like, we can turn to former MEP and AI policy expert Marietje Schaake, who says in her book, The Tech Coup: “Any law that applies to governmental organisations to maintain transparency and accountability should be applied equally to technology companies that execute tasks on behalf of the government. When technology solutions are paid for by the public, they should be publicly accessible; with public money must come public code.”
How might that change things? At the inquiry into the Horizon scandal, one of Fujitsu’s bosses testified that Fujitsu staff knew of bugs, errors and defects in the system as far back as 1999. If there had been a mechanism for bringing that knowledge, held by a private contractor, to light, that scandal might have unfolded very differently over the last 25 years.
In an era of increasingly complex AI automation, the extension of FOI obligations to the contractors providing the systems used in government decision-making would be a significant step in preventing further injustices.

Louise Crow is chief executive of mySociety, a non-profit dedicated to promoting the use of digital technologies to empower citizens to participate in democracy and civic life. You can sign up for their newsletter here.

