For the first time in half a decade, GDS has updated a framework that provides advice and proposals to ensure ethics are embedded in data and artificial intelligence schemes
A major revamp of government guidance for data use has more than doubled the number of principles departmental teams are asked to adhere to – while also advising that inherent tension between these tenets is likely to necessitate “ethical trade-offs”.
Just before Christmas, the Government Digital Service published the Data and AI Ethics Framework, which is intended to provide “guidance for public sector organisations on how to use data and data-driven technologies responsibly”. The document renames and replaces the previous Data Ethics Framework which was first introduced in 2018, and then updated in 2020.
Alongside the addition of ‘AI’ to the scope of the framework, the most notable change in the new and expanded version is that the number of core principles that form the bedrock of the guidance has been increased from three to seven.
The incumbent principles of transparency, accountability, and fairness remain in place, but now also included are: privacy; safety; societal impact; and environmental sustainability.
However, accompanying this growing list of considerations, the document now also features a new section headed “trade-offs”, which opens with a warning that “there may be instances where you find that two or more of the ethical principles are in tension with one another”.
“For example, using more demographic information may help to identify biases and promote fairness, but this may come at the expense of an individual’s privacy,” it adds. “Here, you might need to consider trade-offs. Ethical trade-offs are inherently complex and you must treat them with care.”
The advice provides readers with examples of “some approaches” to help manage such compromises.
This includes making sure to “include stakeholder perspectives by engaging with affected communities to understand values and impacts”, as well as undertaking efforts to “consult legal and human rights frameworks to ensure that decisions align with national and international standards”.
Teams engaged in data and AI projects are also advised to “balance proportionality and necessity… ensure meaningful and inclusive deliberation [and] establish mechanisms to monitor outcomes, and adjust if necessary”.
Practice makes perfect
The updated framework contains extensive guidance on the meaning and context of each of the seven principles, as well as what each of them “means in practice” and some “recommended actions”.
Transparency is characterised as being “foundational to all other ethical principles”, with departments encouraged to take steps including maintaining “detailed records” and releasing information publicly.
Accountability “involves setting clear roles and responsibilities and being transparent about decisions”, the guidance says. Proposed actions include establishing clear project leadership roles, and making clear the inherent “legal responsibilities across the data and AI supply chain”.
Fairness, meanwhile, is described as “a diverse concept [which] concerns both the way people are treated and described, and how opportunities and resources are distributed in society”. At the end of what is the longest section of the document, the recommended steps to ensure this principle is embedded include taking measures to “define fairness… understand your users [and] identify the groups most likely to be positively or negatively impacted by your project, and how this relates to the protected characteristics described in the Equality Act 2010”.
The first of the newly added principles, privacy, “is a complex ethical, legal, technological and social principle, [which] protects individual autonomy, meaning people should have the freedom to live, think, choose, socialise and more without undue observation or interference”, the guidance says.
The new framework outlines two main components of the core tenet: “privacy by design – making design and development decisions from the very start that enhance privacy objectives; [and] privacy by default – ensuring that default settings for the product or service are the most privacy-preserving options”.
According to the guidance, the principle of “environmental sustainability means thinking about the wider impact of your project on people and the planet, and being mindful about the resources you use every day”.
It is recommended that teams should “consider reuse of existing models… raise awareness about the environmental and social impact of digital technologies”.
Societal impact “refers to the effects that data and AI can have on individuals, communities, social structures and economies, [and] includes both positive and negative impacts”.
To best embody this principle, data teams should “write down project objectives in non-technical terms”, while also taking steps to “involve diverse stakeholders in the design and testing process… [and] use feedback mechanisms, such as surveys and advisory boards, to enable stakeholders to continuously shape project outcomes”.
Addressing the final tenet, the framework states that, “in the context of data and data-driven technologies, safety refers to themes such as accuracy, security, reliability and robustness”.
Teams are advised to “make sure researchers and research participants are physically and emotionally safe – consider how a research topic could cause distress or discomfort, and reduce these risks by providing clear information, gaining informed consent, and allowing participants to withdraw at any time”.
This can be supplemented by work to “comply with security requirements and standards… [and] apply data minimisation to limit the potential harms of a data breach”.