Review

Digital health and care: emerging from pandemic times

Abstract

In 2020, we published an editorial about the massive disruption of health and care services caused by the COVID-19 pandemic and the rapid changes in digital service delivery, artificial intelligence and data sharing that were taking place at the time. Now, 3 years later, we describe how these developments have progressed since, reflect on lessons learnt and consider key challenges and opportunities ahead by reviewing significant developments reported in the literature. As before, the three key areas we consider are digital transformation of services, realising the potential of artificial intelligence and wise data sharing to facilitate learning health systems. We conclude that the field of digital health has rapidly matured during the pandemic, but there are still major sociotechnical, evaluation and trust challenges in the development and deployment of new digital services.

Introduction

It is often blithely noted that the pandemic accelerated the uptake of digital capabilities that had unnecessarily languished in pilot status for many years, almost as though the smashing of cultural and organisational inertia was a ‘silver lining’ of the pandemic cloud. However, cautions and challenges remain to be considered, and we should not regard technology as a ‘silver bullet’ that can magic away the fundamental and long-standing issues in global healthcare. Our review takes a primarily UK focus, but we believe that many of the principles have wider application. Also, while we focus on the National Health Service (NHS), it is pertinent to note that the entire health and care sector stands to benefit from meaningful digital transformation.

Further digital transformation is needed to make the NHS future-proof

The COVID-19 pandemic catalysed rapid adoption of digital technology in the NHS1 and resulted in significant changes to service delivery, primarily to enable remote working and reduce the risk of infection transmission but also to free up capacity in acute hospitals.2 Primary care in particular saw a huge increase in remote consultations. There was also a surge in patients’ uptake of the NHS App, NHS login and e-prescription services. Initially, the public perceived these changes positively, equating them with progress and improved efficiency and safety in a service that was overdue for modernisation. As the pandemic progressed, however, there were growing concerns that remote consultations could lead to missed diagnoses, create challenges to therapeutic relationships and exacerbate health inequalities.3

In a recent review of 63 studies on primary care online consultation systems (11 of which were conducted during the pandemic), there was no quantitative evidence for a negative impact of online consultations on patient safety, but qualitative studies suggested varied perceptions of their safety.4 Online consultations increased access to care and decreased patient costs but were also sometimes found to have negative impacts on provider costs, staff and patient workloads, patient satisfaction and care equity. For instance, some primary care staff have indicated that they believe patients seek help more readily via online consultations than they would via office-based consultations, and that this increases staff workload.

Several remote monitoring models were also widely implemented during the pandemic, such as COVID Oximetry @home5 and COVID virtual wards.6 In these models, patients record health readings at home, and professionals elsewhere review and respond to them. There is typically an increased responsibility for patients to self-manage; for example, in the COVID Oximetry @home programme, patients were expected to escalate care if their oxygen saturation dropped below certain thresholds.7 Large-scale evaluations of these programmes are still ongoing, but a rapid mixed-methods study found that many patients required support and preferred human contact, especially for identifying problems.8
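The escalation logic in such a programme amounts to a simple threshold check that patients are asked to apply to their own readings. The sketch below illustrates the general shape of such a rule; the thresholds and actions are illustrative assumptions only and do not reproduce the actual COVID Oximetry @home clinical protocol.

```python
def triage_oximetry_reading(spo2_percent: int) -> str:
    """Classify a home pulse-oximetry reading into an escalation action.

    The thresholds here are illustrative only; any real deployment
    would use clinically validated values and pathways.
    """
    if spo2_percent <= 92:
        return "seek emergency care"
    if spo2_percent <= 94:
        return "contact clinical team for same-day review"
    return "continue routine self-monitoring"
```

Even a rule this simple places real responsibility on patients, which is one reason the cited study found that many preferred human contact when deciding whether a reading was a problem.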

Going forward, a key challenge for the NHS is to clear the backlog of elective care that existed before the COVID-19 pandemic and was strongly exacerbated by it. For instance, an independent review of diagnostic services, commissioned by NHS England in 2019, revealed that diagnostic capacity (in terms of equipment and workforce) was much lower in England than in other developed countries.9 This is now hampering recovery from the pandemic. A major programme of work is underway to improve access to a wide range of diagnostic tests, with the establishment of community diagnostic centres being a key component of this programme.10 These centres have the potential to move routine diagnostic services closer to patients and reduce unnecessary hospital visits, but they risk exacerbating the existing workforce crisis in the NHS. It is, therefore, important to consider the wider sociotechnical system, allowing workforce investment to be focused in those places that can have the most impact and will, in turn, improve job satisfaction and retention.11 The NHS also aims to improve the efficiency of follow-up in outpatient care. Long waiting times, delayed appointments and rushed consultations had already become common before the pandemic, but the number of patients waiting for a first appointment with a specialist is now more than seven million.12 NHS England has set the ambition that 5% of outpatient attendances will be moved to patient-initiated follow-up pathways by March 2023—a target that is likely to increase in the future.13 Patient-initiated follow-up pathways allow patients to initiate outpatient follow-up appointments on an ‘as required’ basis, in contrast to the traditional ‘physician-initiated’ model. Evidence on these pathways is still scarce, but there are indications that they result in fewer overall outpatient appointments while maintaining equivalent, if not better, patient satisfaction, quality of life and clinical outcomes.14 There is ample opportunity to integrate artificial intelligence (AI) tools into these pathways, but this is an area that still needs development. We elaborate on this topic in the next section.

Making AI work in practice requires a systems approach

The pandemic has surfaced structural and cultural problems that persist with the development and deployment of AI and machine learning (ML) in healthcare more widely. Healthcare is a complex sociotechnical system, and the current data and technology-centric focus needs to be complemented by a systems perspective. A systems perspective considers from the outset the impact of integrating AI tools into the wider clinical system, where interactions with people, other information systems, the physical environment and the organisation of clinical and administrative processes will be determinants of success.15 16

During the pandemic, we saw an explosion in the number of ML algorithms to support the diagnosis and treatment of COVID-19. Examples include the use of deep learning to develop algorithms for the identification of COVID-19 from chest X-rays and CT scans,17 for the identification of patients at risk of critical COVID-19-related disease progression18 and for the rapid triage of patients with COVID-19.19 However, in retrospect, there were few, if any, examples of successful clinical deployments of these algorithms.20 Hence, we need to be cautious about the claims being made.

The tremendous push towards the development of AI during COVID-19 likely had several drivers, including the urgency to deal with the impact of COVID-19 as well as the collective research focus of the worldwide community, including funding sources, on COVID-19. But arguably, another key driver was sheer data availability. As the number of people infected with COVID-19 continued to grow, so did the number of data points that could be used to train algorithms. This was facilitated further by national efforts, such as the national COVID-19 chest imaging database established in the UK by NHSX.21 Developers can access this national database for performance and fairness testing of algorithms on a dataset representative of the UK population. While in principle, the availability of such national datasets is helpful to reduce the risk of bias of algorithms and to assess their performance, we need to be mindful that we do not create situations where developers simply train algorithms based on datasets that happen to be available rather than based on the need for and intended use of their models. The starting point for the development of algorithms should be an identified clinical need and an understanding of the associated clinical system to ensure that algorithms address clinically meaningful purposes. Then, suitable and high-quality data can be procured.
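One basic element of the performance and fairness testing that such national datasets enable is stratifying a model's accuracy by patient subgroup, so that disparities do not hide inside an aggregate figure. The sketch below computes per-subgroup sensitivity (true-positive rate); it is an illustrative assumption of how such a check might look, not the actual testing pipeline used with the national chest imaging database.

```python
from collections import defaultdict

def sensitivity_by_subgroup(records):
    """Compute per-subgroup sensitivity for a classifier's predictions.

    `records` is an iterable of (subgroup, true_label, predicted_label)
    tuples with binary labels (1 = disease present). A large gap in
    sensitivity between subgroups is a basic signal of algorithmic bias.
    """
    true_positives = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 1:
                true_positives[group] += 1
    return {g: true_positives[g] / positives[g] for g in positives}
```

A representative national dataset makes this kind of stratified check meaningful; on a convenience sample, the subgroup estimates may simply reflect who happened to be in the data.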

As the field of healthcare AI matures, we have seen welcome developments around reporting guidelines for ML algorithms, such as STARD-AI22 and PROBAST-AI23 for reporting on the development and testing of diagnostic and prognostic prediction models, and SPIRIT-AI24 and CONSORT-AI25 for clinical trials of healthcare AI technologies. An important gap was addressed recently with the DECIDE-AI guideline,26 which addresses early-stage, small-scale clinical evaluation of ML algorithms. The apparent lack of successful clinical deployment of the multitude of COVID-19 algorithms is a case in point—we cannot assume that retrospective evaluation of ML algorithms translates smoothly into successful adoption and deployment in clinical systems. We require a suitable empirical evaluation of AI and ML tools, which considers how these tools are integrated and used in specific clinical contexts. Developers can draw on recent guidance, such as the British Standard BS 30440, which formulates a comprehensive auditable validation framework for healthcare AI.27

Sharing data wisely builds trust and supports learning health systems

Emergency expansion of data sharing was a pivotal part of the pandemic response, crucial to the unparalleled collaborative open science that made such remarkable and rapid progress. Deidentified data linkage at the national level by programmes such as CVD-COVID-UK28 29 has enabled truly population-based analysis in ways that had previously been imagined but seldom realised. Data analytics has contributed to policy decisions, operational efficiencies and public health outcomes. Achieving this required innovative legislation, appropriate information governance and capable data infrastructure.

In Taiwan, for example, post-SARS legislation in 2007 introduced powers for governmental access to personal data in the event of emerging infectious diseases.30 This empowered a task force to analyse diverse sources including COVID-19 test results, mobile device geolocation and hospital respiratory illness diagnosis tracking to provide remarkably powerful contact tracing and surveillance. Other countries that had rapid success deriving important insights from national data sharing during the worst of the pandemic were Scotland, Iceland, Israel and Qatar.31

However, data alone do not save lives.32 The best exemplars of data sharing are in fact forms of learning health system, where virtuous cycles comprising ‘practice to data’, ‘data to knowledge’ and ‘knowledge to practice’ have operated.33 All this has required coherent policy support and adequate infrastructure, in terms of connectivity, storage, analytics and workforce. The countries that were most successful were able to build on existing foundations. The lesson here is that public health requires an ‘always-on’ infrastructure, ready to support the next inevitable pandemic.

‘Big data’ in health and care continues to have serious data quality issues, necessitating extensive cleansing and often translation between heterogeneous data structures and coding schemes.34 In many health systems, even fundamentals like patient matching between disparate datasets remain problematic.35 Some health services, such as primary care in England, have financial incentive schemes that motivate standardised recording and coding,36 but despite this, the practice of clinical coding remains highly variable.37 This poor data quality is one aspect of the problem of being ‘data rich, but information poor’.38
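The fragility of patient matching can be illustrated with a minimal deterministic linkage sketch: two datasets are joined on a normalised key built from a surname and date of birth. The field names and normalisation rules below are hypothetical, not those of any specific NHS dataset, and real linkage programmes use far more robust probabilistic methods.

```python
import re

def match_key(record):
    """Build a deterministic match key from a patient record.

    The key depends on consistent formatting of names and dates,
    which is exactly what is often missing in real health data.
    """
    surname = re.sub(r"[^a-z]", "", record["surname"].lower())
    dob = record["dob"].replace("-", "").replace("/", "")
    return f"{surname}:{dob}"

def link(dataset_a, dataset_b):
    """Return pairs of records sharing a match key across two datasets."""
    index = {match_key(r): r for r in dataset_a}
    matches = []
    for record in dataset_b:
        key = match_key(record)
        if key in index:
            matches.append((index[key], record))
    return matches
```

Even with normalisation, a transposed date, a maiden name or a typo defeats a deterministic key entirely, which is why matching across disparate datasets remains a hard problem at national scale.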

How has the public reacted to more ‘open’ use of their health data? This seems to relate to how ‘open’ the data re-use is actually perceived to be, in the sense of transparency about who has access to what, under what rules, in what form and for what purpose. In the UK, a major data science corporation that was awarded significant NHS contracts in the pandemic is still regarded as ‘contentious’,39 no doubt partly due to its involvement in past scandals about racist profiling in US law enforcement algorithms.40 A proposed national extraction of data from general practice patient records in England, repeatedly delayed for various reasons, generated serious concerns from professional bodies due to its poor engagement with citizens about risks and benefits. On the other hand, citizens’ juries have proved to be a powerful method to enable genuine dialogue with the public and obtain specific measures of relative trust in a range of data-sharing initiatives.41

Preparing for the next pandemic

We suggest the following strategies to improve preparedness for future pandemics. First, further integration of telehealth services and remote patient monitoring technologies would enable seamless continuation of services with minimal physical contact during the next pandemic. We have only just started on this journey; in particular, a thorough evaluation of the impact of these services on care processes and patient outcomes is still needed. Second, a robust and interconnected data infrastructure, enabling real-time collection, analysis and sharing of health data across NHS and social care providers, would facilitate early detection of outbreaks, rapid response coordination and effective resource allocation. The NHS is making progress on this front through the Secure Data Environment programme,42 but there is still a long way to go. Third, learning from mistakes during the COVID-19 pandemic, AI researchers should form multidisciplinary collaborations with provider organisations, social scientists and applied health researchers to define meaningful scenarios where ML algorithms can have added benefit during a pandemic, and develop methods and tools to address those scenarios when the time comes. Fourth and finally, building sufficient trust among both the public and the care professions is essential to become a truly data-driven and knowledge-driven learning health system that is prepared for the next pandemic.