Policymakers must start asking difficult questions on the ethics of AI in healthcare

Government needs to begin working with citizens and industry to address the risks created by the use of new technology, according to Jessica Morley and Luciano Floridi of the Oxford Internet Institute

Data has always been crucial to providing cost-efficient and clinically-effective healthcare. 

However, historically, the importance of high-quality analytics has been underestimated by policymakers, and the necessary investment in infrastructure, strategy and skills has lagged. This has started to change in recent years, following increasing excitement about the potential of analytics techniques that fall under the umbrella heading of artificial intelligence.

As researchers have shown that AI can be used to detect breast cancer, predict the risk of hospitalisation, identify new drugs, forecast staffing needs, and predict which patients are unlikely to turn up for their appointments, policymakers – and politicians – have become much more interested in the technology.

AI now forms a key part of the NHS Long-Term Plan (2019) in England, the US National Institutes of Health Strategic Plan for Data Science (2018), and China’s Healthy China 2030 strategy (2016). The crucial part played by data analytics in responding to the COVID-19 crisis has only increased this interest. 

This interest is reasonable. All opportunities to save lives, and to improve access to and quality of care, should at least be investigated, so as to avoid ethically unjustifiable opportunity costs. 

What is problematic is the potential lack of awareness, not least among policymakers, of the ethical risks posed by the use of AI in healthcare. Often, the only ethical dangers highlighted by policymakers are those associated with data protection, such as privacy infringement, reidentification, or a lack of consent, and these relate only to individual people.

These risks are known because policymakers have already had to deal with the fallout of data protection failures in dealings between AI companies and healthcare institutions. 

Ethical foresight
However, ethical foresight analysis – asking what is likely to happen, rather than what has already happened – shows that there are other ethical risks that are less easily mitigated through technical innovation. 

These less obvious ethical risks relate to: misguided, inconclusive or inscrutable evidence (epistemic risks); unfair outcomes and transformative effects (normative risks); or difficulties in identifying problems, fixing them, and holding people accountable for any resultant harms (traceability risks). 

Risks in each of these categories can concern individuals, relationships between individuals, groups, institutions, sectors, and society. Unfortunately, there is insufficient awareness of these risks among policymakers and regulators.

Let us take an epistemic risk as an example. 

Suppose an algorithm used to diagnose skin conditions is deployed within an app that individuals wishing to see a dermatologist must use before they are allocated an appointment with a clinician, with the app acting as triage. Suppose it later transpires that the algorithm was trained on very poor-quality data and had only been tested on Caucasian individuals, but that this problem was not identified before deployment, introducing the following possible risks:  

  • Individuals: individual people may have been misdiagnosed or undiagnosed
  • Relationships: there may have been a loss of trust between healthcare providers and patients, resulting in a de-personalisation of care 
  • Groups: misdiagnosis or missed diagnosis may have happened at scale, affecting some groups more than others and leading to poorer health outcomes for those groups 
  • Institutions: institutions (such as the NHS) may have wasted funds and directed resources away from areas of greater need – because one condition appears to be more prevalent than another, for instance 
  • Sectors: the need to integrate results from the privately designed algorithm with individuals’ healthcare records may have resulted in excessively broad data sharing between public and private entities 
  • Society: overall society may have suffered from poorer public healthcare provision and worsening health outcomes. 
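To make the epistemic risk concrete, the sketch below shows, in broad strokes, the kind of subgroup audit that could surface such a problem before deployment: checking the triage algorithm’s sensitivity separately for each skin-type group, rather than reporting a single headline accuracy figure. It is a minimal illustration only; the data, group labels and the 0.85 threshold are hypothetical assumptions, and the code is not drawn from the paper.

```python
# Minimal, hypothetical sketch of a subgroup audit for a diagnostic triage model.
# All data, group labels and the 0.85 sensitivity threshold are illustrative assumptions.
from collections import defaultdict

def sensitivity_by_group(records, threshold=0.85):
    """records: iterable of (group, true_label, predicted_label), where 1 = condition present."""
    positives = defaultdict(int)   # condition-positive cases seen per group
    detected = defaultdict(int)    # of those, how many the model flagged
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 1:
                detected[group] += 1

    report = {}
    for group, n_pos in positives.items():
        sens = detected[group] / n_pos
        report[group] = (sens, sens >= threshold)   # (sensitivity, meets threshold?)
    return report

# Illustrative test set: (skin-type group, ground truth, model prediction)
test_cases = [
    ("type_I-II", 1, 1), ("type_I-II", 1, 1), ("type_I-II", 1, 0), ("type_I-II", 0, 0),
    ("type_V-VI", 1, 0), ("type_V-VI", 1, 0), ("type_V-VI", 1, 1), ("type_V-VI", 0, 0),
]

for group, (sens, ok) in sensitivity_by_group(test_cases).items():
    print(f"{group}: sensitivity {sens:.2f} {'meets' if ok else 'FAILS'} threshold")
```

A headline figure averaged over all cases could look acceptable even when, as in this toy data, the model misses most condition-positive cases in one group; disaggregating the evaluation is what exposes the gap.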

Fortunately, it is possible to protect against these, and other, risks. But in order to do so, policymakers, regulators, healthcare providers, researchers and developers must be proactive and accept relevant responsibilities for keeping society safe. They must ask difficult questions, in consultation with the public, such as: ‘which tasks should be delegated to AI health solutions, and which should not?’ or ‘what evidence is needed to “prove” the clinical effectiveness of an AI health solution?’ 

They must then take appropriate action, such as introducing into medical device law a minimum required standard of externally valid evidence of accuracy and clinical efficacy for AI.

This process will need to be repeated at regular intervals to account for technological advances and changes in social context. Taking such proactive measures can help ensure that healthcare systems are able to capitalise safely and mindfully on the opportunities that AI presents for healthcare, while avoiding the chilling effects and opportunity costs that would come from significant loss of trust if people, institutions and society are not appropriately protected from harm. 

 

Click here to read the full paper, The ethics of AI in healthcare: A mapping review, co-authored by Jessica Morley, Caio C.V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo and Luciano Floridi. 

 
 
