Eight human factors and ergonomics principles for healthcare artificial intelligence
Mark Sujan1,2, Rachel Pool3 and Paul Salmon4

1 Human Factors Everywhere, Woking, UK
2 Chartered Institute of Ergonomics and Human Factors, Birmingham, UK
3 NHS England, Redditch, UK
4 Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Maroochydore DC, Queensland, Australia

Correspondence to Dr Mark Sujan; mark.sujan{at}humanfactorseverywhere.com


Introduction

The COVID-19 pandemic dramatically accelerated the digital transformation of many health systems in order to protect patients and healthcare workers by minimising the need for physical contact.1 A key part of this digital transformation is the development and adoption of artificial intelligence (AI) technologies, which are regarded as a priority in national health policies.2 3 Since 2015, there has been exponential growth in the number of regulatory approvals for medical devices that use machine learning,4 and British standards are currently under development in conjunction with international standards. In addition, an even larger number of healthcare AI technologies do not require such approvals, because they fall outside the narrow definition of medical devices.

The scope of healthcare AI appears boundless, with promising results reported across a range of domains, including imaging and diagnostics,5 prehospital triage,6 care management7 and mental health.8 However, caution is required when interpreting the claims made in such studies. For example, the evidence base for the effectiveness of deep learning algorithms remains weak and at high risk of bias, because there are few independent prospective evaluations.9 This is particularly problematic because the performance, usability and safety of these technologies can be reliably assessed only in real-world settings, where teams of healthcare workers and AI technologies co-operate and collaborate to provide a meaningful service.10 To date, however, there have been few human factors and ergonomics (HFE) studies of healthcare AI.11 There is a need for AI designs and prospective evaluation studies that consider the performance of the overall sociotechnical system, with evidence requirements proportionate to the level of risk.12 Reporting guidelines have been developed both for small-scale early clinical intervention trials (DECIDE-AI)13 and for large-scale clinical trials evaluating AI (SPIRIT-AI)14 to enhance the quality and transparency of the evidence.

In order to support developers, regulators and users of healthcare AI, the Chartered Institute of Ergonomics and Human Factors (CIEHF) developed a white paper that sets out an HFE vision and principles for the design and use of healthcare AI.15 Development of the white paper was an international effort bringing together over 30 contributors from different disciplines, and it was supported by a number of partner organisations, including the British Standards Institution, the Australian Alliance for AI in Healthcare, the South American Ergonomics Network (RELAESA), the US-based Society for Healthcare Innovation, the UK charity Patient Safety Learning, the Assuring Autonomy International Programme hosted by the University of York, Human Factors Everywhere and the Irish Human Factors & Ergonomics Society.

HFE principles

HFE as a discipline is concerned with the study of human work and work systems. It is a design-oriented science and field of practice that seeks to improve system performance and human well-being by understanding and optimising the interactions between people and the other elements of the work system, for example, technologies, tasks, other people, the physical work environment, the organisational structures and the external professional, political and societal environment.16

Current implementations of healthcare AI typically adopt a technology-centric focus, expecting healthcare systems (including staff and patients) to adapt to the technology. Under this focus, the function, performance and accuracy of the AI are optimised, but they are considered in isolation. Critical considerations are thereby overlooked in the design and implementation of advanced technologies, sometimes with catastrophic consequences. From an HFE point of view, the design of healthcare AI needs to transition from this technology-centric focus towards a systems perspective. Applying a systems focus, AI should be designed and integrated into clinical processes and healthcare systems meaningfully and safely, with a view to optimising overall system performance and people’s well-being. Understanding how a sociotechnical system works comes from taking time to examine the elements of the system and how they interact with each other. HFE provides several frameworks and methods to achieve this, including the Systems Engineering Initiative for Patient Safety17 and Cognitive Work Analysis.18 These frameworks usually involve the use of observation or ethnography for data collection in order to provide a rich contextual description of how work is actually carried out (‘work-as-done’19) and of people’s needs.

The CIEHF white paper identifies eight core HFE principles (see table 1). Some of these are familiar from the wider literature on automation, dating back to the 1970s and 1980s, but they retain their importance in the novel context of healthcare AI. For example, the potentially adverse impact of highly automated systems on user situation awareness and workload, along with the potential for over-reliance and automation bias, became apparent decades ago in a series of transportation accidents and incidents.20 21 These ‘ironies of automation’22 can arise when technology is designed and implemented without due consideration of the impact on human roles or the interaction between people and the technology, resulting in ill-suited demands on the human, such as lengthy periods of passive monitoring, the need to respond to abnormal situations under time pressure and difficulties in understanding what the technology is doing and why. Alarm fatigue, that is, delayed or less frequent response to alarms, is another phenomenon associated with automated systems that has been identified from major industrial accidents, such as the 1994 explosion and fires at the Texaco Milford Haven refinery. In intensive care, it has been suggested that a healthcare professional can be exposed to over 1000 alarms per shift, contributing to alarm fatigue, disruption of care processes and noise pollution, with potentially adverse effects on patient safety.23 Developers of AI need to be mindful of these phenomena and avoid creating technologies that place additional burden on healthcare professionals.
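
To make the alarm burden point concrete, the sketch below shows one generic mitigation pattern: suppressing rapid repeats of the same low-priority alarm while always passing urgent alarms through. This is a minimal illustration in Python, not a design from the white paper; the Alarm and AlarmFilter names and the five-minute suppression window are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alarm:
    patient_id: str
    alarm_type: str   # e.g. "SpO2_low" (hypothetical label)
    priority: str     # "high", "medium" or "low"
    timestamp: datetime

class AlarmFilter:
    """Suppress rapid repeats of the same low-priority alarm for the same
    patient, while always passing high-priority alarms through."""

    def __init__(self, suppression_window: timedelta = timedelta(minutes=5)):
        self.suppression_window = suppression_window
        self._last_seen: dict[tuple[str, str], datetime] = {}

    def should_raise(self, alarm: Alarm) -> bool:
        if alarm.priority == "high":
            return True  # never suppress clinically urgent alarms
        key = (alarm.patient_id, alarm.alarm_type)
        last = self._last_seen.get(key)
        self._last_seen[key] = alarm.timestamp
        # Raise only if this alarm type has not fired recently for this patient
        return last is None or alarm.timestamp - last > self.suppression_window
```

Even such a simple filter embodies an HFE trade-off: reducing alarm burden must never come at the cost of masking clinically urgent information.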

Table 1: Eight human factors and ergonomics principles for healthcare AI

However, the use of more advanced and increasingly autonomous AI technologies also presents novel challenges that require further study and research. AI technologies can augment what people do in ways that were not possible when machines simply replaced physical work, but to do this effectively the AI needs to be able to communicate and explain its decision-making to people. This can be very challenging when machine learning algorithms produce complex and inscrutable models. Many approaches to explainable AI focus simply on providing detailed accounts of how an algorithm operates, but for explanations to be useful they need to accommodate and be responsive to the needs of different users across a range of situations; for example, a patient might benefit from a different type of explanation than a healthcare professional. In this sense, rather than providing a description of a specific decision, explanation might be better regarded as a social process and a dialogue that allows the user to explore AI decision-making by interacting with the AI and interrogating its decisions.24
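
As a concrete illustration of audience-tailored explanation, the Python sketch below renders the same model output differently for a clinician and a patient. It is a minimal sketch only; the Explanation structure and the render functions are hypothetical and do not represent an established explainable AI interface.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    prediction: str                        # e.g. "elevated risk of sepsis"
    top_factors: list[tuple[str, float]]   # (feature name, contribution weight)

def render_for_clinician(exp: Explanation) -> str:
    # Clinicians may want the contributing features and their relative weights
    factors = ", ".join(f"{name} ({weight:+.2f})" for name, weight in exp.top_factors)
    return f"Prediction: {exp.prediction}. Main contributing factors: {factors}."

def render_for_patient(exp: Explanation) -> str:
    # Patients may be better served by plain language without model internals
    main = exp.top_factors[0][0] if exp.top_factors else "several factors"
    return (f"The system suggests {exp.prediction}, mainly because of {main}. "
            "Your care team can discuss this with you.")
```

A dialogue-based approach would go further, letting the user ask follow-up questions of the model, but even this static tailoring illustrates that one explanation rarely fits all users.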

It is also important to build trust so that staff feel able to report any safety concerns with the AI. Many safety incidents currently go unreported and unrecorded in incident reporting systems.25 While an AI system can potentially log every piece of data and every one of its actions to provide an auditable history, healthcare professionals require assurance of how these data would be used during a safety investigation. If clinicians are held accountable for incidents involving AI unless they can prove otherwise, this might reduce their willingness to trust and accept AI systems.
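
One way to provide the auditable history mentioned above is an append-only log in which each entry is cryptographically chained to its predecessor, so that retrospective tampering is detectable. The Python sketch below uses a generic hash-chaining pattern; the AuditLog class and its fields are hypothetical, not a scheme described in this article.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log; each entry's hash covers the previous entry's hash,
    so any retrospective edit breaks the chain and is detectable."""

    def __init__(self):
        self._entries: list[dict] = []

    def record(self, event: dict) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._entries[-1]["hash"] if self._entries else None,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)

# Hypothetical usage: logging an AI triage recommendation
log = AuditLog()
log.record({"action": "triage_recommendation", "output": "urgent review"})
```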

Many applications of healthcare AI will be used within teams of healthcare workers and other professionals, as well as patients. The computational capabilities of AI technologies mean that AI applications will have a much more active and dynamic role within teams than previous IT systems and automation, in effect potentially becoming more like a new team member than just a new tool. Effective human–AI teaming will become increasingly critical when designing and implementing AI to ensure that AI capabilities and human expertise, intuition and creativity are fully exploited.26

Part of effective human–AI teaming is handover from the AI to the healthcare professional when this becomes necessary.10 To achieve this, the AI needs to recognise the need for handover and then execute the handover effectively. Handover between healthcare professionals is a recognised safety-critical task that remains surprisingly challenging and error prone in practice.27 The use of structured communication protocols (eg, age–time–mechanism–injuries–signs–treatments) can improve the quality of handover, even if challenges remain in their practical application.28 Consideration should be given to the development of comparable approaches for structured handover between AI and healthcare professionals.
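
As an illustration of what a structured AI-to-clinician handover might contain, the sketch below defines a message loosely modelled on structured protocols of the age–time–mechanism–injuries–signs–treatments kind. The AIHandover class and its fields are hypothetical, offered only to make the idea tangible.

```python
from dataclasses import dataclass

@dataclass
class AIHandover:
    reason: str                   # why the AI is handing over, e.g. low confidence
    case_summary: str             # what the AI has observed so far
    actions_taken: list[str]      # what the AI has already done
    outstanding_tasks: list[str]  # what still needs a human decision
    confidence: float             # the AI's self-assessed confidence, 0 to 1

    def to_message(self) -> str:
        return (f"HANDOVER ({self.reason}, confidence {self.confidence:.0%}): "
                f"{self.case_summary} | Done: {'; '.join(self.actions_taken)} "
                f"| Outstanding: {'; '.join(self.outstanding_tasks)}")
```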

While designers intend AI to improve the efficiency of workflows by taking over tasks from healthcare professionals, there is a danger that staff are instead pulled into other activities or that the healthcare professional spends more time interacting with the AI. Lessons should be learnt from the introduction of other digital technologies, such as electronic health records, where it has been suggested that, in emergency care for example, physicians spend more time on data entry than on direct patient contact.29 The impact of integrating AI into an already computer-focused patient encounter needs to be carefully considered.

The use of healthcare AI also raises significant ethical issues. Technical challenges, including the potential for bias in data, have been highlighted and incorporated into international guidelines and reporting standards.30 However, it is also important to address wider issues around fairness and the impact on different stakeholder groups.31 At the European level, the High-Level Expert Group on AI published ‘Ethics Guidelines for Trustworthy AI’.32 The guidelines are based on a fundamental rights impact assessment and operationalise ethical principles through seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. HFE approaches can support these ethical requirements by building an understanding of stakeholders and their diverse needs and expectations.

Building HFE capacity

The systems perspective on healthcare AI set out in the CIEHF white paper will be instrumental in realising national AI strategies and delivering benefits for patients and health systems. The digital transformation needs to be underpinned by HFE capacity within the health sector. Until very recently, there was no formal career structure for healthcare professionals with an interest in HFE. In the UK, this is changing with the recent introduction of both academic and learning-at-work routes towards the accredited status of technical specialist, or TechCIEHF (healthcare).33 Enhancing the professionalisation of HFE knowledge among those with responsibility for quality improvement, patient safety and digital transformation can support healthcare organisations in making better informed AI adoption and implementation decisions.

There is also a need for funding bodies and regulators to require evidence that suitable HFE expertise is included in the design and evaluation of healthcare AI. Funding specifications frequently reflect only the technology-centric perspective of AI rather than reinforcing a systems approach. While the inclusion of qualitative research to support the scaling of healthcare AI from the lab to clinical environments is useful, it cannot replace the benefits of including HFE expertise from the design stage onwards. Human behaviour is highly context dependent and adaptive as people navigate complexity and uncertainty, and this needs to inform the design of AI to ensure that its use in health and care systems is meaningful and safe. Regulators are working to build the technical AI expertise they require, but effective regulation of these technologies should also be supported by recruiting suitably qualified HFE professionals to establish appropriate interdisciplinary expertise in the advancement of AI technologies in healthcare.

Ethics statements

Patient consent for publication

References

Footnotes

  • Twitter @MarkSujan

  • Contributors All authors contributed equally to the idea and drafting of the manuscript and reviewed and approved the final version.

  • Funding This work was supported in part (MS) by the Assuring Autonomy International Programme, a partnership between Lloyd’s Register Foundation and the University of York.

  • Competing interests All authors are coauthors of the Chartered Institute of Ergonomics and Human Factors white paper referred to in the manuscript. MS is a member of the Governing Council of the Chartered Institute of Ergonomics and Human Factors.

  • Provenance and peer review Not commissioned; externally peer reviewed.