
Integrating digital health technologies into complex clinical systems
Mark Sujan¹ ²

¹Investigation Education, Health Services Safety Investigation Body, Poole, UK
²Human Factors Everywhere, Woking, UK

Correspondence to Dr Mark Sujan; mark.sujan{at}gmail.com


Modern health systems must embrace digital technologies to address challenges like ongoing shortages in the global health and care workforce, significant diagnostic backlogs and the requirements of diverse and ageing populations. The COVID-19 pandemic and the exceptional advances in artificial intelligence (AI) and machine learning (ML) have accelerated the drive towards digitalisation of health systems.1 However, making digital health technologies work in practice remains challenging in terms of how these technologies are designed, how their performance and safety in operation are assured and how their impact on staff and on patients is assessed.2

A key problem undermining the successful implementation of digital health technology is the persistent focus on technology in isolation, which is at odds with the realities of complex health and care systems. The shortcomings of this technology-centric focus can be seen, for example, when reviewing the lack of successful clinical deployment of the multitude of ML algorithms developed during the pandemic to support the diagnosis and management of COVID-19.3 More broadly, the apparent success of ML algorithms found in retrospective evaluation studies is frequently not replicated in subsequent prospective studies.4 5 The difficulty of translating successful retrospective evaluation of algorithms into useful clinical practice has been referred to as the challenge of the last mile.6

Arguably, consideration of the challenge of the last mile, that is, of the realities of complex clinical systems, cannot be left to the end, but needs to inform the design of AI and, more generally, digital health technologies from the outset. The design of digital health technologies needs to be based on a systems perspective. A systems perspective considers how technology fits into the wider clinical system, where success depends on interactions with people, other IT systems, the physical environment and the organisation of clinical and administrative processes.7 8 The two ‘editor’s choice’ articles illustrate the importance of considering the sociotechnical nature of digital health technology implementation.

Hong and colleagues studied an ML tool to identify at-risk patients who are undergoing outpatient cancer treatment in order to reduce their acute care needs.9 The ML tool had been developed and implemented as part of a randomised controlled quality improvement project. The authors describe using a survey instrument on completion of the implementation phase to elicit perceptions from healthcare staff about the impact of the adoption of the ML tool and to identify practical implementation challenges. While, generally, feedback about the ML tool was positive and encouraging, the results highlight that the introduction of ML into a complex clinical system might affect different stakeholders in different and unevenly distributed ways. The need for prospective and mixed methods evaluation of algorithms has been recognised in the literature, but to date, there are few documented examples.10 Through their findings, Hong and colleagues demonstrate the importance of such empirical studies of AI in complex clinical settings.

In the second article, Richter and Ammenwerth11 aim to support practitioners with the implementation of risk management for networked medical devices in hospitals based on the international standard IEC 80001. While the principles of risk management have long been established and documented in several standards, these principles are often expressed in abstract and conceptual terms. This leaves practitioners facing a challenging implementation gap.12 Richter and Ammenwerth attempt to bridge this gap by providing a catalogue of 49 specific steps for practitioners to put in place, along with 18 indicators to assess the impact of risk management activities. This practical guidance was developed through a consensus exercise with experts and practitioners. The findings were then validated in a case study in one Austrian hospital, where parts of the catalogue were implemented and evaluated for effectiveness, complexity and satisfaction based on practitioner feedback. This approach can serve as an illustration and a blueprint for making best practice guidance practically relevant and meaningful in complex clinical environments.

Successful integration of digital health technologies into complex clinical systems requires a move away from a narrow and limiting technology focus towards a systems perspective, which needs to be reflected in the design, operation and evaluation of the technology. Practitioners need to be provided with meaningful tools and guidance to enable them to manage and to assess the operation of digital health technologies, and to ask the right questions of developers. The recent British Standard BS 30440, which outlines an auditable validation framework for healthcare AI, is another example of this.13 Finally, we need to continue efforts to build capacity and capability within health and care organisations to enhance their readiness to deploy such technologies meaningfully, for example, in the case of the National Health Service in England by extending training opportunities with NHS England (the former NHS Digital team) on digital clinical safety and AI safety or with the Health Services Safety Investigation Body on system-based investigation methods. Other health and care systems should develop similar education and training frameworks and opportunities.

Ethics statements

Patient consent for publication

Ethics approval

Not applicable.

References

Footnotes

  • Twitter @MarkSujan

  • Funding The author has not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; externally peer reviewed.