Digital health and care: emerging from pandemic times
Abstract
In 2020, we published an editorial about the massive disruption of health and care services caused by the COVID-19 pandemic and the rapid changes in digital service delivery, artificial intelligence and data sharing that were taking place at the time. Now, 3 years later, we describe how these developments have progressed since, reflect on lessons learnt and consider key challenges and opportunities ahead by reviewing significant developments reported in the literature. As before, the three key areas we consider are digital transformation of services, realising the potential of artificial intelligence and wise data sharing to facilitate learning health systems. We conclude that the field of digital health has rapidly matured during the pandemic, but there are still major sociotechnical, evaluation and trust challenges in the development and deployment of new digital services.
Introduction
It is often blithely noted that the pandemic accelerated the uptake of digital capabilities that had unnecessarily languished in pilot status for many years, almost as though the smashing of cultural and organisational inertia was a ‘silver lining’ of the pandemic cloud. However, cautions and challenges remain to be considered, and we should not regard technology as a ‘silver bullet’ that can magic away the fundamental and long-standing issues in global healthcare. Our review takes a primarily UK focus, but we believe that many of the principles have wider application. Also, while we focus on the National Health Service (NHS), it is pertinent to note that the entire health and care sector stands to benefit from meaningful digital transformation.
Further digital transformation is needed to make the NHS future-proof
The COVID-19 pandemic catalysed rapid adoption of digital technology in the NHS1 and resulted in significant changes to service delivery, primarily to enable remote working and reduce the risk of infection transmission but also to free up capacity in acute hospitals.2 Primary care in particular saw a huge increase in remote consultations. There was also a surge in patients’ uptake of the NHS App, NHS login and e-prescription services. Initially, the public perceived these changes positively, equating them with progress and improved efficiency and safety in a service that was overdue for modernisation. As the pandemic progressed, however, there were growing concerns that remote consultations could lead to missed diagnoses, create challenges to therapeutic relationships and exacerbate health inequalities.3
In a recent review of 63 studies on primary care online consultation systems (11 of which were conducted during the pandemic), there was no quantitative evidence for a negative impact of online consultations on patient safety, but qualitative studies suggested varied perceptions of their safety.4 Online consultations increased access to care and decreased patient costs but were also sometimes found to have negative impacts on provider costs, staff and patient workloads, patient satisfaction and care equity. For instance, some primary care staff believe that patients seek help more readily via online consultations than they would have done via office-based consultations, which increases staff workload.
Several remote monitoring models were also widely implemented during the pandemic, such as COVID Oximetry @home5 and COVID virtual wards.6 These models ask patients to record health readings at home, which are reviewed and responded to by professionals elsewhere. Patients typically take on greater responsibility to self-manage; for example, in the COVID Oximetry @home programme, patients were expected to escalate care if their oxygen saturation dropped below certain thresholds.7 Large-scale evaluations of these programmes are still ongoing, but a rapid mixed-methods study found that many patients required support and preferred human contact, especially for identifying problems.8
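To make the escalation logic of such remote monitoring pathways concrete, the sketch below shows a minimal, hypothetical triage rule for home pulse-oximetry readings, written in Python. The thresholds and field names are illustrative assumptions, not the official values or data model of the COVID Oximetry @home programme; any deployed pathway would combine such rules with clinical review and safety-netting.

```python
from dataclasses import dataclass
from enum import Enum


class Escalation(Enum):
    ROUTINE = "continue routine self-monitoring"
    CONTACT_CLINICIAN = "contact the monitoring team or GP today"
    EMERGENCY = "seek emergency care now"


@dataclass
class OximetryReading:
    patient_id: str
    spo2_percent: int  # oxygen saturation recorded by the patient at home


def triage_reading(reading: OximetryReading) -> Escalation:
    """Map a single home pulse-oximetry reading to an escalation level.

    The thresholds below are illustrative only, not official programme values.
    """
    if reading.spo2_percent <= 92:
        return Escalation.EMERGENCY
    if reading.spo2_percent <= 94:
        return Escalation.CONTACT_CLINICIAN
    return Escalation.ROUTINE


if __name__ == "__main__":
    # Walk three example readings through the rule to show the three outcomes.
    for spo2 in (97, 94, 91):
        action = triage_reading(OximetryReading("demo-patient", spo2))
        print(f"SpO2 {spo2}% -> {action.value}")
```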
Going forward, a key challenge for the NHS is to clear the backlog of elective care that already existed before the COVID-19 pandemic but was strongly exacerbated by it. For instance, an independent review of diagnostic services, commissioned by NHS England in 2019, revealed that diagnostic capacity (in terms of equipment and workforce) was much lower in England than in other developed countries.9 This is now hampering recovery from the pandemic. A major programme of work is underway to improve access to a wide range of diagnostic tests, with the establishment of community diagnostic centres being a key component of this programme.10 These centres have the potential to move routine diagnostic services closer to patients and reduce unnecessary hospital visits, but they risk exacerbating the existing workforce crisis in the NHS. It is, therefore, important to consider the wider sociotechnical system, so that workforce investment can be focused in those places where it will have the most impact and will, in turn, improve job satisfaction and retention.11

The NHS also aims to improve the efficiency of follow-up in outpatient care. Long waiting times, delayed appointments and rushed consultations had already become common before the pandemic, but the number of patients waiting for a first appointment with a specialist is now more than seven million.12 NHS England has set the ambition that 5% of outpatient attendances will be moved to patient-initiated follow-up pathways by March 2023, a target that is likely to increase in the future.13 Patient-initiated follow-up pathways allow patients to initiate outpatient follow-up appointments on an ‘as required’ basis, rather than following the traditional ‘physician-initiated’ model. Evidence on these pathways is still scarce, but there are indications that they result in fewer overall outpatient appointments while maintaining equivalent, if not better, patient satisfaction, quality of life and clinical outcomes.14 There is ample opportunity to integrate artificial intelligence (AI) tools into these pathways, but this is an area that still needs development. We elaborate on this topic in the next section.
Making AI work in practice requires a systems approach
The pandemic has surfaced structural and cultural problems that persist with the development and deployment of AI and machine learning (ML) in healthcare more widely. Healthcare is a complex sociotechnical system, and the current data- and technology-centric focus needs to be complemented by a systems perspective. A systems perspective considers from the outset the impact of integrating AI tools into the wider clinical system, where interactions with people, other information systems, the physical environment and the organisation of clinical and administrative processes will be determinants of success.15 16
During the pandemic, we saw an explosion in the number of ML algorithms to support the diagnosis and treatment of COVID-19. Examples include the use of deep learning to develop algorithms for the identification of COVID-19 from chest X-rays and CT scans,17 for the identification of patients at risk of critical COVID-19-related disease progression18 and for the rapid triage of patients with COVID-19.19 However, in retrospect, there were few, if any, examples of successful clinical deployments of these algorithms.20 Hence, we need to be cautious about the claims being made.
The tremendous push towards the development of AI during COVID-19 likely had several drivers, including the urgency of dealing with the impact of COVID-19 as well as the collective research focus of the worldwide community, including funding sources, on COVID-19. But arguably, another key driver was sheer data availability. As the number of people infected with COVID-19 continued to grow, so did the number of data points that could be used to train algorithms. This was facilitated further by national efforts, such as the national COVID-19 chest imaging database established in the UK by NHSX.21 Developers can access this national database for performance and fairness testing of algorithms on a dataset representative of the UK population. While, in principle, the availability of such national datasets helps to reduce the risk of algorithmic bias and to assess performance, we need to be mindful that we do not create situations where developers simply train algorithms on datasets that happen to be available rather than on data suited to the need for and intended use of their models. The starting point for the development of algorithms should be an identified clinical need and an understanding of the associated clinical system, to ensure that algorithms address clinically meaningful purposes. Suitable, high-quality data can then be procured.
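As an illustration of the kind of performance and fairness testing that a representative national dataset can support, the following sketch summarises a fixed model's sensitivity and specificity per demographic subgroup. It is a minimal example in plain Python; the record structure ('group', 'label', 'prediction') is an assumption made for illustration and does not reflect the schema of the national COVID-19 chest imaging database.

```python
from collections import defaultdict
from typing import Iterable


def subgroup_performance(records: Iterable[dict]) -> dict:
    """Summarise sensitivity and specificity of a fixed model per subgroup.

    Each record is assumed to contain: 'group' (e.g. an age band or ethnicity),
    'label' (1 = disease present) and 'prediction' (1 = model flags disease).
    These field names are illustrative, not those of any specific database.
    """
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for r in records:
        c = counts[r["group"]]
        if r["label"] == 1:
            c["tp" if r["prediction"] == 1 else "fn"] += 1
        else:
            c["fp" if r["prediction"] == 1 else "tn"] += 1

    results = {}
    for group, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else None
        results[group] = {"sensitivity": sens, "specificity": spec, "n": sum(c.values())}
    return results


if __name__ == "__main__":
    # Tiny made-up dataset showing how subgroup metrics might diverge.
    demo = [
        {"group": "18-49", "label": 1, "prediction": 1},
        {"group": "18-49", "label": 0, "prediction": 0},
        {"group": "70+", "label": 1, "prediction": 0},
        {"group": "70+", "label": 0, "prediction": 0},
    ]
    for group, metrics in subgroup_performance(demo).items():
        print(group, metrics)
```

Comparing such subgroup summaries across, for example, age bands or ethnic groups is one simple way to surface whether a model trained on a convenient dataset performs unevenly across the population it is meant to serve.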
As the field of healthcare AI matures, we have seen welcome developments around reporting guidelines for ML algorithms, such as STARD-AI22 and PROBAST-AI23 for reporting on the development and testing of diagnostic and prognostic prediction models, and SPIRIT-AI24 and CONSORT-AI25 for clinical trials of healthcare AI technologies. An important gap was filled recently by the DECIDE-AI guideline,26 which addresses early-stage, small-scale clinical evaluation of ML algorithms. The apparent lack of successful clinical deployment of the multitude of COVID-19 algorithms is a case in point: we cannot assume that retrospective evaluation of ML algorithms translates smoothly into successful adoption and deployment in clinical systems. We require suitable empirical evaluation of AI and ML tools that considers how these tools are integrated and used in specific clinical contexts. Developers can draw on recent guidance, such as the British Standard BS 30440, which formulates a comprehensive auditable validation framework for healthcare AI.27
Sharing data wisely builds trust and supports learning health systems
Emergency expansion of data sharing was a pivotal part of the pandemic response, crucial to the unparalleled collaborative open science that made such remarkable and rapid progress. Deidentified data linkage at the national level by programmes such as CVD-COVID-UK28 29 has enabled truly population-based analysis in ways that had previously been imagined but seldom realised. Data analytics has contributed to policy decisions, operational efficiencies and public health outcomes. Achieving this required innovative legislation, appropriate information governance and capable data infrastructure.
In Taiwan, for example, post-SARS legislation in 2007 introduced powers for governmental access to personal data in the event of emerging infectious diseases.30 This empowered a task force to analyse diverse sources including COVID-19 test results, mobile device geolocation and hospital respiratory illness diagnosis tracking to provide remarkably powerful contact tracing and surveillance. Other countries that had rapid success deriving important insights from national data sharing during the worst of the pandemic were Scotland, Iceland, Israel and Qatar.31
However, data alone do not save lives.32 The best exemplars of data sharing are in fact forms of learning health system, where virtuous cycles comprising ‘practice to data’, ‘data to knowledge’ and ‘knowledge to practice’ have operated.33 All this has required coherent policy support and adequate infrastructure, in terms of connectivity, storage, analytics and workforce. The countries that were most successful were able to build on existing foundations. The lesson here is that public health requires an ‘always-on’ infrastructure, ready to support the next inevitable pandemic.
‘Big data’ in health and care continues to have serious data quality issues, necessitating extensive cleansing and often translation between heterogeneous data structures and coding schemes.34 In many health systems, even fundamentals such as patient matching between disparate datasets remain problematic.35 Some health services, such as primary care in England, have financial incentive schemes that motivate standardised recording and coding,36 but despite this, the practice of clinical coding remains highly variable.37 This poor data quality is one aspect of the problem of being ‘data rich, but information poor’.38
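To illustrate why even simple deterministic patient matching can fail across disparate datasets, the sketch below normalises a purported NHS number before comparing records, falling back to surname and date of birth. The field names and example values are hypothetical; a production linkage pipeline would also validate the NHS number check digit and would typically use probabilistic matching alongside deterministic rules.

```python
import re
from typing import Optional


def normalise_nhs_number(raw: Optional[str]) -> Optional[str]:
    """Strip spaces and dashes from a purported NHS number and check its length.

    A real pipeline would also verify the modulus-11 check digit; this sketch
    only normalises formatting, which is one common source of failed matches.
    """
    if not raw:
        return None
    digits = re.sub(r"[^0-9]", "", raw)
    return digits if len(digits) == 10 else None


def deterministic_match(record_a: dict, record_b: dict) -> bool:
    """Match two records on normalised NHS number, falling back to
    surname plus date of birth. Field names are illustrative only."""
    a_id = normalise_nhs_number(record_a.get("nhs_number"))
    b_id = normalise_nhs_number(record_b.get("nhs_number"))
    if a_id and b_id:
        return a_id == b_id
    return (
        record_a.get("surname", "").strip().lower()
        == record_b.get("surname", "").strip().lower()
        and record_a.get("dob") == record_b.get("dob")
    )


if __name__ == "__main__":
    # Hypothetical records: same person, formatted differently by two systems.
    gp_record = {"nhs_number": "943 476 5919", "surname": "Smith", "dob": "1970-01-01"}
    hospital_record = {"nhs_number": "9434765919", "surname": "SMITH", "dob": "1970-01-01"}
    print(deterministic_match(gp_record, hospital_record))  # True after normalisation
```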
How has the public reacted to more ‘open’ use of their health data? This seems to relate to how actually ‘open’ the data re-use is perceived to be, in the sense of transparency about who has access to what, under what rules, in what form and for what purpose. In the UK, a major data science corporation that was awarded significant NHS contracts in the pandemic is still regarded as ‘contentious’,39 no doubt partly due to its involvement in past scandals about racist profiling in US law enforcement algorithms.40 A proposed national extraction of data from general practice patient records in England, repeatedly delayed for various reasons, generated serious concerns from professional bodies due to its poor engagement with citizens about risks and benefits. On the other hand, citizens’ juries have proved to be a powerful method to enable genuine dialogue with the public and obtain specific measures of relative trust in a range of data-sharing initiatives.41
Preparing for the next pandemic
We suggest the following strategies to improve preparedness for future pandemics. First, further integration of telehealth services and remote patient monitoring technologies would enable seamless continuation of services with minimal physical contact during the next pandemic. We have only just started on this journey; in particular, a thorough evaluation of the impact of these services on care processes and patient outcomes is still needed. Second, a robust and interconnected data infrastructure, enabling real-time collection, analysis and sharing of health data across NHS and social care providers, would facilitate early detection of outbreaks, rapid response coordination and effective resource allocation. The NHS is making progress on this front through the Secure Data Environment programme,42 but there is still a long way to go. Third, learning from mistakes made during the COVID-19 pandemic, AI researchers should form multidisciplinary collaborations with provider organisations, social scientists and applied health researchers to define meaningful scenarios where ML algorithms can add benefit during a pandemic, and develop methods and tools to address those scenarios when the time comes. Fourth and finally, building sufficient trust among both the public and the care professions is essential if the NHS is to become a truly data-driven and knowledge-driven learning health system that is prepared for the next pandemic.
Twitter: @MarkSujan
Contributors: All authors contributed equally to conceptualisation, writing and reviewing.
Funding: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests: None declared.
Provenance and peer review: Not commissioned; externally peer reviewed.
References

Hutchings R. The impact of Covid-19 on the use of digital technology in the NHS. London: Nuffield Trust; 2020.
Mroz G, Papoutsi C, Rushforth A, et al. Changing media depictions of remote consulting in COVID-19: analysis of UK newspapers. Br J Gen Pract 2021;71:e1–9. doi:10.3399/BJGP.2020.0967
Darley S, Coulson T, Peek N, et al. Understanding how the design and implementation of online consultations affect primary care quality: systematic review of evidence with recommendations for designers, providers, and researchers. J Med Internet Res 2022;24. doi:10.2196/37436
National Health Service. COVID-19 guidance note: COVID Oximetry @home. 2022.
National Health Service. Standard operating procedure: COVID virtual ward. 2022.
Vindrola-Padros C, Singh KE, Sidhu MS, et al. Remote home monitoring (virtual wards) for confirmed or suspected COVID-19 patients: a rapid systematic review. EClinicalMedicine 2021;37. doi:10.1016/j.eclinm.2021.100965
Walton H, Vindrola-Padros C, Crellin NE, et al. Patients' experiences of, and engagement with, remote home monitoring services for COVID-19 patients: a rapid mixed-methods study. Health Expect 2022;25:2386–404. doi:10.1111/hex.13548
Richards M. Diagnostics: recovery and renewal. Report of the independent review of diagnostic services for NHS England. London: NHS England; 2020.
Combes J, Crumpton E, Sujan M, et al. Building better care. The Ergonomist, Chartered Institute of Ergonomics and Human Factors; 2022.
QualityWatch. NHS performance tracker. Nuffield Trust and The Health Foundation; 2022.
Reed S, Crellin N. Patient-initiated follow-up: will it free up capacity in outpatient care? Nuffield Trust; 2022.
Taneja A, Su’a B, Hill AG, et al. Efficacy of patient-initiated follow-up clinics in secondary care: a systematic review. Intern Med J 2014;44:1156–60. doi:10.1111/imj.12533
Sujan M, Furniss D, Grundy K, et al. Human factors challenges for the safe use of artificial intelligence in patient care. BMJ Health Care Inform 2019;26. doi:10.1136/bmjhci-2019-100081
Sujan M, Pool R, Salmon P, et al. Eight human factors and ergonomics principles for healthcare artificial intelligence. BMJ Health Care Inform 2022;29. doi:10.1136/bmjhci-2021-100516
Li L, Qin L, Xu Z, et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. Radiology 2020.
Ustebay S, Sarmis A, Kaya GK, et al. A comparison of machine learning algorithms in predicting COVID-19 prognostics. Intern Emerg Med 2023;18:229–39. doi:10.1007/s11739-022-03101-x
Roberts M, Driggs D, Thorpe M, et al. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell 2021;3:199–217. doi:10.1038/s42256-021-00307-0
Jacob J, Alexander D, Baillie JK, et al. Using imaging to combat a pandemic: rationale for developing the UK national COVID-19 chest imaging database. Eur Respir J 2020;56. doi:10.1183/13993003.01809-2020
Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: the STARD-AI protocol. BMJ Open 2021;11. doi:10.1136/bmjopen-2020-047709
Collins GS, Dhiman P, Andaur Navarro CL, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open 2021;11. doi:10.1136/bmjopen-2020-048008
Rivera SC, Liu X, Chan A-W, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. BMJ 2020;370. doi:10.1136/bmj.m3210
Liu X, Rivera SC, Moher D, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. BMJ 2020;370. doi:10.1136/bmj.m3164
Vasey B, Nagendran M, Campbell B, et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. BMJ 2022;377. doi:10.1136/bmj-2022-070904
Sujan M, Smith-Frazer C, Malamateniou C, et al. Validation framework for the use of AI in healthcare: overview of the new British standard BS 30440. BMJ Health Care Inform 2023;30.
Wood A, Denholm R, Hollings S, et al. Linked electronic health records for research on a nationwide cohort of more than 54 million people in England: data resource. BMJ 2021;373. doi:10.1136/bmj.n826
Chen C-M, Jyan H-W, Chien S-C, et al. Containing COVID-19 among 627,386 persons in contact with the Diamond Princess cruise ship passengers who disembarked in Taiwan: big data analytics. J Med Internet Res 2020;22. doi:10.2196/19540
Scott PJ, Dunscombe R, Evans D, et al. Learning health systems need to bridge the ‘two cultures’ of clinical informatics and data science. BMJ Health Care Inform 2018;25:126–31. doi:10.14236/jhi.v25i2.1062
Guardiolle V, Bazoge A, Morin E, et al. Linking biomedical data warehouse records with the national mortality database in France: large-scale matching algorithm. JMIR Med Inform 2022;10. doi:10.2196/36711
NHS Digital. Quality and outcomes framework. 2022.
Martin PM, Sbaffi L. Electronic health record and problem lists in Leeds, United Kingdom: variability of general practitioners’ views. Health Informatics J 2020;26:1898–911. doi:10.1177/1460458219895184
Bergerum C, Petersson C, Thor J, et al. ‘We are data rich but information poor’: how do patient-reported measures stimulate patient involvement in quality improvement interventions in Swedish hospital departments? BMJ Open Qual 2022;11. doi:10.1136/bmjoq-2022-001850
Carding N. ‘Contentious’ US tech firm to harvest patient data in NHSE waiting list push. HSJ 2022.
Crawford K. Atlas of AI: power, politics, and the planetary costs of artificial intelligence. Yale University Press; 2021. doi:10.12987/9780300252392
NIHR Applied Research Collaboration Greater Manchester. Citizens’ juries on health data sharing in a pandemic. 2021.
Department of Health and Social Care. Secure data environment for NHS health and social care data: policy guidelines. 2022.