
Information technology for patient safety
Christopher Huckvale,1 Josip Car,1 Masanori Akiyama,2 Safurah Jaafar,3 Tawfik Khoja,4 Ammar Bin Khalid,5 Aziz Sheikh,6 Azeem Majeed1

  1. Department of Primary Care & Public Health, Imperial College, London, UK
  2. Todai Policy Alternatives Research Institute, University of Tokyo, Tokyo, Japan
  3. Family Health Development Division, Ministry of Health, Kuala Lumpur, Malaysia
  4. Executive Board, Council of Health Ministers for Gulf Cooperation Council, Riyadh, Saudi Arabia
  5. Kabot International, Lahore, Pakistan
  6. Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, UK

Correspondence to Professor Azeem Majeed, Department of Primary Care & Public Health, Reynolds Building, Imperial College, London W6 8RP, UK; a.majeed{at}imperial.ac.uk

Abstract

Background Research on patient care has identified substantial variations in the quality and safety of healthcare and the considerable risks of iatrogenic harm as significant issues. These failings contribute to the high rates of potentially avoidable morbidity and mortality and to the rising levels of healthcare expenditure seen in many health systems. There have been substantial developments in information technology in recent decades and there is now real potential to apply these technological developments to improve the provision of healthcare universally. Of particular international interest is the use of eHealth applications. There is, however, a large gap between the theoretical and empirically demonstrated benefits of eHealth applications. While these applications typically have the technical capability to help professionals in the delivery of healthcare, inadequate attention to the socio-technical dimensions of their use can result in new avoidable risks to patients.

Results and discussion Given the current lack of evidence on quality and safety improvements and on the cost–benefits associated with the introduction of eHealth applications, there should be a focus on implementing more mature technologies; it is also important that eHealth applications should be evaluated against a comprehensive and rigorous set of measures, ideally at all stages of their application life cycle.

  • health policy
  • information technology
  • patient safety

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.


Introduction

World Health Organization (WHO) Patient Safety established the Information Technology for Patient Safety Expert Working Group to examine the role of Information Technology (IT) in improving patient safety in healthcare. The Working Group included representatives from high-, middle- and low-income countries with expertise from clinical medicine, academia, government, health services management and industry. This report by the Working Group provides an overview of the interplay between IT and issues of patient safety in healthcare, maps out the boundaries of knowledge in this area and makes recommendations for future research and development. It builds on a recent systematic literature review commissioned by the English National Health Service (NHS) Connecting for Health Evaluation Programme, which included a review of research papers from across the world.

We identified priority areas through a consultation process involving all members of the Working Group. There are concerns about the variable methodological quality and completeness of the evidence base in the field, particularly around evaluation of the impact of new technology on patient safety, and there is a specific lack of information on the experience of developing countries. The majority of published research has been carried out in high-income countries such as the UK and USA. This paper is therefore most applicable to economically developed countries; however, where possible, we have also drawn lessons for economically developing countries and illustrated the key points of the paper with a number of case studies.

Information technology in healthcare

The US government has defined IT as ‘…any equipment or interconnected system or subsystem of equipment that is used in the creation, conversion or duplication of data or information.’1 This paper focuses on the information-transacting role, considering those applications where there is a transformative or integrative element involving information. The focus is on the role of software and of platforms that integrate information from these and other sources (eg, electronic patient records). We consider the requirements that might be made at the software, hardware and systems level to address issues of patient safety, but in-depth exploration of issues around technical implementations of software and hardware lies outside the scope of this paper. Reflecting this higher-level view, we consider issues of patient safety relating to the key applications for information tools in healthcare delivery rather than considering each component separately.

The rationale for this selective approach reflects the priorities identified by the Working Group and the focus of IT implementation in healthcare internationally. There is significant interest in the potential for IT to address some of the current challenges facing healthcare systems, specifically:

  • the resource and cost implications of serving populations with increasing life expectancy and improved survival in chronic illness;

  • continuing deficiencies in the provision of healthcare that result in iatrogenic harm;

  • opportunities to use IT to improve access to information by healthcare workers in developing countries to enable them to deliver safe, effective care.

New technologies have the capacity both to extend and to replace existing clinical and administrative processes in health. The term eHealth is increasingly used to acknowledge that technological innovation is simply a component of a larger process of change, one that ideally represents ‘a new way of working, an attitude and a commitment for networked, global thinking to improve healthcare locally, regionally and worldwide…’.2 The conceptual map of eHealth (figure 1) developed recently by Pagliari et al identifies three top-level domains: storing and managing data; informing and supporting decisions; and delivering expertise and care at a distance.

Figure 1

Conceptual map of eHealth showing how the different domains integrate to support professionals, patients and the public.

Information technology may reinforce existing barriers and introduce new barriers to error; for example, by preventing specific unsafe actions (active failures). Similarly, in ensuring that certain information is uniformly available or in reducing the time required to complete certain tasks, tools can actively address latent failures. Conversely, the introduction of a tool can disrupt an existing process in a way that introduces a new source of risk, perform incorrectly under certain conditions or facilitate unsafe behaviours by workers.

IT and patient safety at the point of care

Supporting care decisions

Every patient journey involves multiple decisions made by the team of healthcare professionals responsible for the patient's care. Each team member has the potential for active error, as well as for contributing to an environment in which the scope for future errors is enhanced. IT should therefore be used to ensure that optimal choices are made at every step of a patient's care pathway. IT must also be used to limit decisions that are clearly wrong or that carry a significant risk of iatrogenic harm. Although the scope for harm varies by decision type, deciding not to do something can be as harmful as an incorrect active treatment intervention.

Computerised Decision Support Systems (CDSS) are ‘…active knowledge systems which use… patient data to generate case-specific advice.’3 These systems aim—with varying degrees of sophistication—to support the care decision process. There are good theoretical reasons to believe that CDSS can contribute to patient safety. First, they can guarantee consistency of decision-making; thus, the risk of violation or omission is mitigated. Second, by incorporating specific contingencies for unusual presentations conferring specific risks, errors associated with cognitive lapses or bias can be controlled.

These aims can only be achieved if the outputs of such tools are themselves correct and are applied in clinical practice. Concerns about the former stem partly from the lack of regulation in this area and partly because the evidence on which such tools are based has significant gaps in some areas of practice while expanding rapidly in others. There is a risk that increasing sophistication (eg, in tools that use dynamic inference) abstracts the decision-making process in such a way that these gaps are not visible. New risks also accrue from the information requirements that such tools introduce: parameters must be supplied accurately and for the right patient, and relevant cofactors must be accurately specified. In one series, for example, omitted data resulted in 77% (95% CI 71% to 83%) of recommendations made by a CDSS being rated as potentially inappropriate and unsafe.4
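By way of illustration, the fail-safe behaviour this finding argues for can be sketched in code. The following minimal example (in Python; the field names, renal threshold and advice wording are invented for illustration and are not drawn from any real CDSS) declines to issue a recommendation when required inputs are missing, treating absent data as unknown rather than normal:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientContext:
    """Minimal set of parameters a dosing rule might require."""
    age_years: Optional[int] = None
    weight_kg: Optional[float] = None
    creatinine_umol_l: Optional[float] = None
    penicillin_allergy: Optional[bool] = None  # None = not recorded

REQUIRED = ("age_years", "weight_kg", "creatinine_umol_l", "penicillin_allergy")

def recommend_dose(patient: PatientContext) -> str:
    # Missing values are surfaced, never defaulted: silently assuming
    # 'no allergy' or 'normal renal function' is the failure mode behind
    # the unsafe recommendations described above.
    missing = [f for f in REQUIRED if getattr(patient, f) is None]
    if missing:
        return "NO RECOMMENDATION: missing data for " + ", ".join(missing)
    if patient.penicillin_allergy:
        return "CONTRAINDICATED: documented penicillin allergy"
    if patient.creatinine_umol_l > 150:  # hypothetical threshold
        return "Reduce dose: impaired renal function (review with pharmacist)"
    return "Standard dose appropriate"

print(recommend_dose(PatientContext(age_years=70, weight_kg=62.0)))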

The impact of incorrect clinical recommendations by CDSS extends beyond the risk of iatrogenic harm. Negative perceptions about IT tools are an important determinant of their continued use: if CDSS are perceived to produce unreliable outputs, they may not find use in routine clinical practice, and all the potential benefits are therefore negated. To combat this, an active approach to quality assurance is advocated that explicitly aims to mitigate the risks of covert error associated with problems in the underlying knowledge base—for example, by including algorithmic validation of consistency and completeness.5 Effective user interface design and active validation of input can serve to limit the scope for user-related error; for example, user interfaces that are confusing or illogical can induce errors even by the most skilled users. Good user interface design requires a detailed understanding of how a technology will be used and of the work environment to gauge the types of errors that could arise in use and thereby eliminate or mitigate their impact.

Data completeness is an important safety issue in its own right. A given care decision might be inappropriate only in specific contexts. Although redundancy is desirable, opportunities to capture the information that defines these contexts are typically discrete; for example, a relevant medical history might only be captured during the initial assessment of a patient. Accurate medication history, details of any allergies and significant comorbidities are obvious examples that have the potential to have a recurring effect on future care decisions. Such information, once captured, should be accessible for all future healthcare encounters and, where care involves multiple providers, should be shared efficiently and securely between providers.

Electronic Patient Records (EPR) underpin CDSS and many other eHealth applications. One of the aims of the EPR is to tackle issues of data completeness. Structured inputs can mandate information recording, while the electronic format minimises the risk that information is subsequently mislaid. For example, in the UK, the EPRs used in primary care include recording templates for many chronic diseases (eg, diabetes, hypertension, coronary heart disease, etc) to ensure that key demographic, clinical, physiological, biochemical and pharmaceutical variables are collected systematically and in a standardised manner for all patients. However, a current issue in many health systems, particularly in developing countries, is the low level of uptake of EPR systems.
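To illustrate how a structured template can enforce systematic capture, the sketch below (Python; the template items are hypothetical examples rather than an actual UK primary care template) audits a record against a list of mandated fields and reports the gaps instead of accepting a silently incomplete entry:

# Items that a (hypothetical) diabetes review template mandates.
DIABETES_REVIEW_TEMPLATE = [
    "hba1c_mmol_mol",
    "blood_pressure",
    "smoking_status",
    "foot_examination",
    "current_medication",
]

def audit_record(record: dict) -> list:
    # Return the template items absent from a patient's record.
    return [item for item in DIABETES_REVIEW_TEMPLATE if item not in record]

entry = {"hba1c_mmol_mol": 58, "blood_pressure": "132/78"}
gaps = audit_record(entry)
if gaps:
    print("Incomplete review; missing:", ", ".join(gaps))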

An electronic system can monitor its own accuracy and completeness and can prompt specific remedial actions. Where interfaces are poorly designed, however, or where system reliability and performance affect clinical practice and workload, such systems can introduce new clinical risks. Empirical evidence for benefit is currently limited and compromised by poor methodological design: for example, there is currently no strong evidence for a reduction in adverse drug events with EPR implementation.6

Combating medication error

Mistakes in prescribing are among the most common types of medical error and can be serious, sometimes leading to death or disability.7 8 The initial prescription order (decision and calculation) typically carries the greatest risk of serious harm, but mistakes can occur at each stage of the prescribing process:

  • decision errors: failure to account for relevant comorbidities, polypharmacy, previous reactions, incorrect decision;

  • calculation errors: failure to calculate the appropriate dosage;

  • communication errors: dosage written incorrectly, illegible handwriting, wrong patient, ambiguous information on prescription, medication not given in a timely fashion;

  • monitoring error or incorrect length of treatment: failure to track drugs with risk of accumulation of toxicity or where time-limited treatment is desirable;

  • slips: incorrect drugs or dose packaged at dispensation, drugs given to wrong patient.

Electronic systems to support prescribing (ePrescribing solutions) typically combine structured capture of prescription requests with a varying degree of CDSS support. ePrescribing is one facet of Computerised Provider Order Entry (CPOE), which uses computer-based tools to record and communicate specific clinical actions (eg, prescriptions, tests, interventions). The potential impact on safety in terms of decision and calculation error is similar to CDSS: aggregation and presentation of relevant details at the point of decision-making reduces the scope for such mistakes. With increasing sophistication, tools can integrate relevant history (including recent laboratory results) and specific medication-related risks. Interaction with the ordering clinician can range from flagging up potential errors to placing more active constraints on what may be requested. Well-designed software can tackle issues of interpretation by ensuring that prescription information is presented unambiguously and is rapidly transmissible. The integration of patient and pharmaceutical identification schemes using barcode and radio frequency identification (RFID) technology with electronic prescriptions holds the potential to reduce slips around patient identification and physical dispensation of drugs (box 1). Automated flagging-up of missed prescriptions and monitoring tests is also possible with ePrescribing systems.
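To make the decision- and calculation-error checks concrete, the sketch below (Python) shows the kind of dose-range and interaction flags an order-entry system can raise. The reference data are invented, simplified stand-ins for the curated, regularly updated drug knowledge base a real system would require:

# Hypothetical reference data for illustration only.
DOSE_RANGE_MG = {"warfarin": (0.5, 15.0), "metformin": (250.0, 3000.0)}
INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}

def check_order(drug, dose_mg, current_drugs):
    # Return human-readable flags for a proposed prescription.
    flags = []
    low, high = DOSE_RANGE_MG.get(drug, (None, None))
    if low is not None and not low <= dose_mg <= high:
        flags.append("%s: dose %.1f mg outside usual range %.1f-%.1f mg"
                     % (drug, dose_mg, low, high))
    for pair, risk in INTERACTIONS.items():
        if drug in pair and (pair - {drug}) & set(current_drugs):
            flags.append("%s: interaction (%s)" % (drug, risk))
    return flags

print(check_order("warfarin", 20.0, {"aspirin"}))  # raises both flags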

Box 1 Application of retail point-of-sale and back-office technologies to improve patient safety in Japan

Integrated point-of-sale and back-office operations that hinge on common identification and tracking technologies (typically barcodes) are pervasive in commercial settings, but their possible benefits in healthcare have only recently begun to be realised. Tackling patient identification error through the use of bar-coded or RFID-chipped tags worn by the patient is increasingly advocated as an easy way to address the first of the ‘five rights’ of medication safety (right patient, drug, dose, administration route and time). Pilot programmes at the International Medical Centre in Tokyo and the Red Cross Hospital in Morioka combine these technologies with a ‘Point of Act’ system that tracks both patient activity and consumable use. Every clinical contact represents a discrete event triggered, like the checking out of goods at a cashier till, by the scanning or entry of an identifier tag and captured by the system with relevant contextual information. This information details what was done, at what time, where, to whom, why and by what means. This event-driven approach provides a robust method of exploring process flows, as well as inherently providing stock management capabilities. Safer care is anticipated from the constraints placed on patient and medication selection (the universal use of barcodes should guarantee the identity of both) and from the traceable provenance of medications (managing the risk of counterfeiting and batch-quality issues). In addition, the automatic capture of care data, and the unambiguous nature of the associated contextual information, makes this a useful resource that can be mined prospectively for unreported adverse events and used as a forensic tool to reconstruct the care journey before any incident.9
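The event-driven pattern the box describes might be sketched as follows (Python; the order structure, tag formats and messages are invented for illustration). Administration proceeds only when both scans match an open order, and every scan, including a mismatch, is captured as a timestamped event:

from datetime import datetime, timezone

# Hypothetical open orders keyed by patient wristband tag.
ORDERS = {"PT-0042": {"drug_barcode": "MED-7731", "drug": "amoxicillin 500 mg"}}

def scan_administration(patient_tag, drug_tag):
    # Verify both scans against the order; log the event either way so
    # that near-misses become visible in the record.
    order = ORDERS.get(patient_tag)
    ok = order is not None and order["drug_barcode"] == drug_tag
    event = {
        "when": datetime.now(timezone.utc).isoformat(),
        "patient": patient_tag,
        "drug_scanned": drug_tag,
        "verified": ok,
    }
    print("ADMINISTER" if ok else "STOP: scan mismatch", event)
    return event

scan_administration("PT-0042", "MED-9999")  # wrong pack: blocked and logged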

Interpretation of evidence for the efficacy of such tools is limited by the variety of outcome measures used and by differing constructions of what constitutes a medication error (objective errors versus events that result in actual harm).10 Where empirical benefits, measured in terms of reductions in preventable adverse events, have been shown in inpatient care, the studies have generally been carried out in centres of excellence using home-grown applications.11 12 A 2000 Cochrane review suggested that dosage advice can be effective in preventing adverse drug reactions, as well as in improving performance in situations where drug levels must be monitored to prevent toxicity.13 The efficacy of interaction and allergy flagging is also unclear.14 CDSS that include drug-management systems appear to improve clinical performance, but without concomitant benefits in terms of patient outcomes.15 Where flagging systems are optional, they appear to be used infrequently, while routine flagging may come to be viewed as an unwelcome distraction. In a 2002 survey of UK general practitioners, 28% admitted to frequently or very frequently dismissing medication alerts without reading them.16 Dismissing flags without consideration clearly defeats the purpose of such tools. Consequently, a synergistic role has been suggested for the various components of medication-error-reducing solutions (information to support decision-making, CPOE, integrated pharmacy management and automated dispensation).17

Electronic prescribing tools can introduce new sources of risk that fall into two broad categories. First, there is information-related risk, where failure to integrate information sources means that the expected benefits from ensuring relevant information is presented are not realised. Second, there are failures of human–machine interaction. Such failures relate to both the way information is presented and requested and, more generally, the way the tool fits into clinical work patterns.18 Structuring input can have unintended consequences: for example, the use of lists for medication dosages could facilitate slips that would not have happened had the user entered information manually.19

Because CPOE is integrated at critical points of care, and because it is typically introduced with the intention of supplanting existing mechanisms for decision capture, a safety impact at the organisational level is inevitable. The impact may be covert; for example, users may make unreasonable assumptions about the capacity of the tool to control certain types of error. Explicit understanding of the limitations of tools is necessary at the level of process design to avoid this problem. There is also an onus on software designers to ensure consistency: for example, in ensuring that prompts for a given type of medication error are displayed against all relevant drugs, rather than against a subset. Other organisational impacts with safety implications include occupying clinical time that would previously have been spent on other activities and duplication of work.20

Specific types of error can increase where the negative effect of process change is not fully appreciated. Spencer et al demonstrated an increase in dispensation duplication and inappropriate dose-related error associated with the use of a new CPOE system: the system failed to accommodate the need to transmit updated prescription information to the dispensary whenever a clinical decision was made to amend the dose, instead requiring that a new prescription be issued each time.21 Concern at this level is further supported by the observation that mortality can increase after the implementation of an ePrescribing system and that organisational factors seem to play a significant role.22 As with CDSS, there is a current lack of regulatory oversight; ePrescribing systems, for example, are exempt from such oversight in both the USA and the UK. There is also a need to develop systems to identify potential adverse drug reactions prospectively (box 2), rather than relying on manual reporting systems that have very low reporting rates.23

Box 2 Prospective identification of adverse drug reactions using electronic health records, data mining and signal detection

Current systems for the detection of Adverse Drug Reactions (ADRs) have serious limitations. For example, the associations between Cyclo-Oxygenase Type 2 (COX-2) inhibitors used in the treatment of arthritis and increased risk of myocardial infarction and stroke were only discovered after these drugs had been used for 7 years by hundreds of thousands of users. Even where an association has been described, underestimation of the magnitude of risk can delay withdrawal of a drug; thioridazine, for instance, was eventually withdrawn in June 2005, many years after its association with long QT syndrome had been described. Using information from electronic health records, it is now possible to consider identifying ADRs prospectively using data mining and signal detection. This raises a number of technical challenges, including the large size of the data sets (sometimes including records from millions of patients) and the difficulty of sifting out false-positive signals from true positives. However, successful developments in this area could radically transform the speed with which ADRs are detected, creating the opportunity to withdraw a drug or limit its prescribing and hence improve patient safety.
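The box does not specify a detection method; one widely used disproportionality statistic in pharmacovigilance is the proportional reporting ratio (PRR), which compares how often an event is reported with the drug of interest against its reporting rate with all other drugs. A minimal sketch (Python), using invented report counts:

import math

def prr(a, b, c, d):
    # a: reports of the event with the drug of interest
    # b: reports of other events with the drug
    # c: reports of the event with all other drugs
    # d: reports of other events with all other drugs
    ratio = (a / (a + b)) / (c / (c + d))
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))  # SE of log(PRR)
    lo = math.exp(math.log(ratio) - 1.96 * se)
    hi = math.exp(math.log(ratio) + 1.96 * se)
    return ratio, lo, hi

# Illustrative counts only: 40 reports of the event with the drug,
# against a background rate of 4 per 1000 reports for all other drugs.
ratio, lo, hi = prr(a=40, b=960, c=200, d=49800)
print("PRR = %.1f (95%% CI %.1f to %.1f)" % (ratio, lo, hi))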

Delivering patient-centred care

An emerging theme in healthcare in countries like the UK and USA is that patients should have greater involvement in the care that they receive and be more informed about their own health and the treatments available to them. Technological developments are facilitating the sharing of information between patients and clinicians through online services, and such online access to medical records (figure 2) results in new opportunities for self-monitoring and for convenient care delivery (eg, email consultations).24 Each of these developments can be designed with safety issues explicitly in mind (eg, patient validation of information in an electronic record could contribute to reduced error) while also conferring new risks.

Figure 2

Online patient access to electronic patient records in the UK.

Health systems across the world are now focussing on health promotion, disease prevention and optimising the management of chronic diseases. To help achieve this, there is considerable scope for collecting and utilising information from patients about lifestyle (eg, exercise, diet, smoking, alcohol consumption, etc) without the involvement of clinicians. These systems can be used to collect information directly from the patient, for example, at a preconsultation interview (remotely via the internet or in the clinic waiting room). Computerised history-taking systems can be used in many clinical settings and, when eliciting data directly from patients, could prove particularly useful in identifying potentially sensitive information such as alcohol consumption, sexual health and mental health, which patients might be otherwise reluctant to divulge.25 26 Computer-based questionnaires are also useful for gathering important background data before the consultation, which can then allow more time for focussing on key aspects of the health problems within the consultation. As well as improving patient safety, these systems can also reduce administrative costs, thus releasing funds for other areas of healthcare.

Mobile telephones have also gained recent attention as a way of delivering care in developing countries. Mobile phone use is widespread in both developing and developed nations, and offers potential benefits over other forms of communication that rely on infrastructure (eg, postal systems, land-line telephones); people carry their phone wherever they go and, importantly, consider it an acceptable route through which to receive private information. Tailored alerts and prompts facilitate medication and condition monitoring, thus offering an avenue by which potential problems can be detected and acted upon early.27 For patients in remote areas, mobile telephony offers a route to access care advice when no local clinical staff are available (box 3).28 Future developments include the ability to perform simple laboratory tests using chip technology, the results of which are transmitted to clinicians using mobile connectivity. However, many of the programmable features on which more advanced systems depend are not available in the first- or second-generation handsets that are commonly in use in developing countries. There is also a current lack of guidance around how to ensure that healthcare interaction conducted using mobile telephony is safe.29
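As a sketch of the tailored-prompt pattern (Python; the wording and the twice-daily schedule are illustrative assumptions, and the hand-off to an actual SMS gateway is omitted), a scheduler might generate the message queue as follows:

from datetime import date, timedelta

def reminder_texts(drug, start, days, times):
    # Build the plain-text prompts a scheduler would pass to an SMS
    # gateway, one message per scheduled dose.
    msgs = []
    for offset in range(days):
        day = start + timedelta(days=offset)
        for t in times:
            msgs.append("%s %s: Reminder - take your %s." % (day.isoformat(), t, drug))
    return msgs

for msg in reminder_texts("TB medication", date(2011, 1, 10), days=2,
                          times=["08:00", "20:00"]):
    print(msg)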

Box 3 A consolidated care architecture can help to deliver safer care in inaccessible locations: the Malaysian example

Healthcare agencies in developing and newly industrialised nations face common challenges in providing high-quality, safe healthcare. These challenges include: variable coverage and quality of transport, utility and healthcare specific infrastructures that affect their ability to provide care, particularly outside urban settings; infrastructural and income-related constraints that limit the ability of patients to access services; and resource-related constraints affecting both staffing and equipment. A particular challenge for the Malaysian Ministry of Health is the limited access to remote areas and the reliance on boat travel that can incur significant delays in transferring patients who need urgent care to secondary centres. One solution, developed from 2003 onwards, is the TelePrimary Care (TPC) project that combines elements of electronic patient records, Computerised Provider Order Entry (CPOE), teleconsultation and data-quality-improvement programmes in a single system. Patients in remote locations benefit from their clinicians having access to expert advice to guide diagnosis and initiate early treatment as well as gaining case-specific feedback as part of Continuing Medical Education conducted through the system. When patients are transferred between centres, their entire record is available through a common system. Medication errors relating to illegible handwriting and drug interactions and contraindication have been reduced. A formal programme of evaluation of TPC was initiated in 2008 and is planned to include evaluation of the impact on patient safety.

IT and patient safety at the organisational and system level

Capturing adverse events

Adverse events are important in healthcare because of the scope for significant harm to patients. Worryingly, they appear to be extremely common.30 In the UK, around 850 000 errors occur annually in hospitals, contributing towards 40 000 deaths.31 Most events have a mixture of latent and active contributory causes. This complicates their analysis and can make it hard for responsible organisations to identify the most effective strategy for their prevention. Each ‘fix’ has a cost–benefit profile and also represents a potential process change that has its own safety implications.

Currently, the capture of adverse events in healthcare relies largely on voluntary reporting. Although high rates of reporting have helped to reduce serious events in some industries (eg, aviation), there is significant under-reporting in healthcare. For example, fewer than 10% of adverse drug reactions are reported to regulatory authorities by clinicians. The reasons for this are complex and include fear of blame, organisational culture, lack of reminders and time pressures.

Automated post-hoc identification of adverse events holds significant scope to address under-reporting; it could, for example, detect deaths due to substandard care (box 4). A key requirement for event identification is the synthesis of information from disparate sources in searching for the ‘fingerprint’ of an incident. Interoperability is therefore an important requirement for progress in this area. Specific prospective monitoring strategies for new-to-market products that would be amenable to IT-based tools have also been suggested. Signal detection and data-mining techniques can also be used to identify other threats to patient safety, such as clusters of adverse events or deaths following healthcare interventions.

Box 4 Auditing safety at a national level: applying signal detection to routine outcome statistics to identify failing care

In March 2009, the UK's Care Quality Commission (CQC), which is responsible for monitoring the quality of care in England's National Health Service (NHS), completed its report into standards of care at the accident and emergency department of a small acute urban hospital. The CQC believed that poor-quality care directly resulted in over 400 excess deaths over the period 2005–2008. Service availability, ward configuration and staffing levels were ultimately identified as key contributory factors to this critical failure, but it was notable that routine monitoring of outcome statistics played a role in highlighting the potential problem and triggering the subsequent investigation. Between 2007 and 2008, six ‘outlier’ alerts were generated for this hospital by a monitoring system that compares condition-specific mortalities against national figures. Although the hospital had already been alerted to a possible problem by an elevated all-cause standardised mortality generated by the same system in 2007, the outlier alerts acted as the trigger for involvement by the CQC. The data required to calculate these metrics are collated automatically as anonymised care-episode statistics and processed for the NHS by the Dr Foster Unit at Imperial College London.

The monitoring solution adopted in the UK combines automated routine reporting and national coverage with a statistical methodology that is robust to sources of false alerts, such as the increased uncertainty associated with measurements involving very small numbers of patients. A prerequisite for the implementation of this kind of solution is reliable capture of event data in electronic form and an infrastructure for aggregating these data nationally. In the UK, this is achieved in secondary care by electronically coding the main reason for each admission through a standard form and by the use of a common national IT infrastructure to aggregate the data for analysis. Coded data extracted from primary care systems could be used for similar monitoring work. Beyond the IT components of the solution, responsible authorities must have processes in place that guarantee appropriate action when outlying data are generated. The judicious use of specific alerts to highlight salient issues can be advantageous, particularly where the perceived reliability of such alerts is high.
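The Dr Foster methodology involves casemix adjustment and statistical safeguards that are not reproduced here; as a minimal sketch of the underlying idea (Python; the counts and the alert threshold are invented), a unit can be flagged when its observed deaths are improbably high relative to a risk-adjusted expectation under a Poisson model:

import math

def poisson_tail(observed, expected):
    # P(X >= observed) for X ~ Poisson(expected).
    p_below = sum(math.exp(-expected) * expected**k / math.factorial(k)
                  for k in range(observed))
    return 1.0 - p_below

def mortality_alert(observed, expected, alpha=0.001):
    # A stringent alpha limits false alerts when many units are
    # monitored simultaneously.
    smr = observed / expected
    p = poisson_tail(observed, expected)
    print("SMR = %.2f, P(>= %d | E = %.0f) = %.5f" % (smr, observed, expected, p))
    return p < alpha

# Illustrative figures only: 90 observed deaths against 60 expected.
if mortality_alert(observed=90, expected=60.0):
    print("Outlier alert: trigger review of care processes")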

Aggregation of incidents at organisational and national levels is also desirable, because the rarity of many events makes it hard to identify underlying systematic causes. Many countries now operate central collections (eg, the National Patient Safety Agency in England) to which events are submitted.32 Automated analysis of these submissions poses significant technical challenges concerning semantic interpretation of event reports. Techniques that are likely to yield greater benefits in the near future are those that facilitate human operators in matching events and aggregating evidence in ways that can then be shared. Specific software tools already exist for certain types of safety exploration (eg, root cause analysis).

Standards

The current scope for standards in health informatics focuses on two main areas: data capture and data exchange. From a safety perspective, both hold the potential to address a number of issues. Data completeness is essential for many of the tools with potential safety gains, such as CDSS and ePrescribing.33 Facilitating information exchange has direct safety implications: systematic transfer reduces the scope for transcription errors and physical loss of data, and can help to ensure that information is available when needed. Furthermore, when patients are receiving care from many different providers, ensuring that relevant parts of their clinical record are available, particularly in emergencies, has clear safety benefits.

Disease and intervention taxonomies are now common, driven partly by their role in remuneration in many health systems but also by the need to collect information required for public health surveillance. Similar work on information exchange has resulted in a number of standards. Syntactic interoperability relates to the ability of systems to exchange information about care and requires both a common message format and a model of the care process involved. The principal international standard is HL7 (Health Level 7).

Semantic interoperability requires the use of common (or appropriately mapped) terminologies. Terminologies can also be related to classification systems based on an underlying ontology, as an ontology is required to map concepts in different terminologies. However, clinical coding (using a taxonomy to classify relevant parts of a patient's medical history) introduces new risks. The coding must be accurate, especially if the coded data will have clinical uses. Minimising errors of miscoding and—importantly—omission, requires well-designed taxonomies with adequate coverage that are applied systematically. Semantic ambiguities in some coding systems (eg, where a particular diagnosis can be coded in several ways, as in the Read code system used in UK primary care) limit the scope for automated interpretation and introduce a source of risk where systems cannot handle these variations. The shift away from paper-based management also introduces new requirements for system reliability. Systems must be robust to random failure and have contingencies in place to ensure that clinical work is not disrupted.
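To illustrate why unmapped or ambiguous codes are a safety issue rather than mere noise, the sketch below (Python; the codes are invented and are not real Read or SNOMED codes) collapses alternative local codes onto a single canonical concept and surfaces anything it cannot map instead of silently dropping it:

# Invented local codes mapped to canonical concepts.
LOCAL_TO_CANONICAL = {
    "X101": "type_2_diabetes",
    "X102": "type_2_diabetes",  # alternative code for the same diagnosis
    "X200": "hypertension",
}

def canonical_problems(coded_history):
    # Translate a coded history; unmapped codes demand human review.
    unmapped = [c for c in coded_history if c not in LOCAL_TO_CANONICAL]
    if unmapped:
        raise ValueError("Unmapped codes need review: %s" % unmapped)
    return {LOCAL_TO_CANONICAL[c] for c in coded_history}

print(canonical_problems(["X101", "X200"]))  # {'type_2_diabetes', 'hypertension'}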

A recurring theme in eHealth is the lack of regulation of medical software. There is significant scope for regulatory authorities to impose the same demands for reliability that apply in other industries where software tools are mission-critical (eg, aviation). The complexity of medical systems is often cited as a barrier to such regulation; however, simple parameters such as system uptime are easily measurable. Organisations such as the European Committee for Standardisation (CEN) could also help to develop international standards.

Implementation issues

The impact of IT tools on clinical processes can be significant but appears to be frequently underestimated both by system designers and by implementing organisations. Risks can arise from the explicit changes to existing processes that a tool introduces. The behaviour of end users can also change, shaped by cultural factors, attitudinal elements including resistance, assumptions (eg, that the tool offers certain functions) and shifts in the proportion of time allocated to different tasks. Failure to understand these possible effects carries the risk of patient harm. Introduction of a tool (and reversion where a tool subsequently fails) is disruptive and itself carries a safety burden. Multidisciplinary working is the hallmark of modern healthcare; this imposes an additional burden on IT platforms in meeting the requirements of a team of users in which each member might have a different set of priorities.

Understanding safety in all contexts, including IT, requires a holistic approach considering elements that partition into those that are specific to IT (reliability, ergonomics, standards compliance, etc) and those relating to any process of organisational change (process redesign, culture, training and competence). A recent review of UK adverse incident reporting specifically identifies training and process integration as specific causes in relation to IT.34 Adequate monitoring of implementation requires systematic planning and oversight throughout the lifespan of each tool as risks shift from implementation to ongoing training, sustainability and service level issues. Every implementation will have a combination of beneficial and detrimental effects. These may be intended, unintended but predictable or unintended and unpredicted. Discussion in the context of IT tools too often focuses on the beneficial, intended effects. A robust strategy can help identify possible risks (and devise mitigating strategies) and, postimplementation, identify those unpredicted consequences that can then be addressed locally. Captured at an organisational level and beyond, these analyses can be used to feed into the future design of both systems and implementation strategies. Examination of patient safety issues should be a recurring and explicit programme of work throughout the life cycle of every relevant IT tool.

Issues specific to developing countries

In developing countries, issues surrounding the use of IT to improve patient safety are often very different to those in developed countries. Healthcare workers in developing countries often lack access to information that could help them provide safe, effective care to their patients.35 This often results in substandard medical practice. Improving access to relevant, reliable and up-to-date information has great potential to improve the safety of healthcare in such settings.

The Health InterNetwork Access to Research Initiative (HINARI) Programme, established by WHO in collaboration with major publishers, enables healthcare workers in developing countries to gain access to a large collection of biomedical journals and health literature.36 With improved access to the internet in many parts of the world, information access initiatives such as HINARI could have a major impact on the safety of healthcare in resource-poor countries. For example, the first undersea cable to bring high-speed internet access to East Africa went live in 2009, substantially increasing the number of people in the area with access to the internet, while at the same time reducing the cost of access.

A research agenda for IT and patient safety

Efforts to provide a robust commentary about patient safety in the context of IT are impeded by ongoing issues of methodological quality of research and evaluation in the field.37 These issues can be summarised as:

  • a fragmented theoretical framework that limits the scope for consistency in approach and stepwise evolution of the field—efforts to establish taxonomies for patient safety are an important first step in tackling this;

  • parallel fragmentation in primary and secondary methodologies for the evaluation of IT tools, spanning all stages of design and implementation.

The diversity of outcome measures available and the quality with which investigations are reported are often cited as issues in the field. Both impact critically on the ability to perform effective synthesis of the literature. The critical gap between the benefits anticipated from theoretical work and those realised in clinical practice can only be addressed through well-designed evaluation programmes around technology implementation. Lessons can also be drawn from the health informatics literature on human factors research and human factors engineering.

Most evaluations of IT tools are currently described in the context of a single product being implemented in a live environment. This reflects a lack of confidence in developing more complex study designs that combine robust implementation evaluation with process, cost-effectiveness and impact analyses, rather than ignorance of the possible organisational and financial consequences of technology implementation. The complexity of organisational impact can only be explored well with ethnographic approaches, but dedicated between-technology comparisons in controlled situations are clearly desirable. Efforts to standardise the approach to evaluation are continuing. Recent progress includes the introduction of guidelines for the evaluation and reporting of IT interventions (GEP-HI, STARE-HI).38 Consideration of harm as a specific outcome measure has also recently been advocated.39

Specific current topic areas where further research work is indicated include:

  • methods of improving data quality in electronic patient records;

  • identification of threats to patient safety using prospective methods and signal detection—for example, pharmacovigilance and postoperative mortality to identify adverse drug reactions;

  • the use of mobile-phone technology to provide prompts and reminders to patients and to store key medical information on people with chronic illnesses;

  • the use of IT for home-based care delivery and the role of pervasive sensing and remote monitoring;

  • evaluation of information access initiatives such as HINARI and their effect on patient safety in developing countries.

Conclusions

This review outlines the potential of IT solutions to improve patient safety. Although it is developed countries that will benefit from such technological interventions in the short term, the rapidly falling cost of IT means that middle-income and, in time, lower-income countries will also benefit. A key lesson from health systems that have been successful in implementing IT in healthcare is that a commitment from the funders of healthcare (whether governments, national insurance schemes or third parties) to meet the costs of IT solutions is essential to their rapid and effective take-up. Countries with health systems where this is not the case, such as the USA, have had a much lower uptake of essential technologies, such as electronic patient records, than countries like the UK and the Netherlands, where funders have shown greater commitment.40 41

Although IT solutions do have considerable potential to improve patient safety, there is currently a gap between the theoretical and empirically demonstrated benefits. Given the lack of evidence on quality and safety improvements and on cost–benefits, future eHealth applications should be evaluated against a comprehensive and rigorous set of measures, ideally at all stages of the application life cycle. Attention must also be paid to socio-technical factors to maximise the likelihood of successful implementation and adoption.42 Finally, most of the research in this area has been carried out in affluent, developed countries. Detailed case studies and rigorous research are also needed from middle- and lower-income countries if eHealth solutions are to be developed that can benefit public health and improve patient safety across the world.

How, then, can funders and providers of healthcare take forward the use of IT to improve patient safety? A key step is introducing the use of electronic patient record systems; these systems lie at the heart of many eHealth technologies, such as electronic prescribing and computerised test ordering, as well as providing data for the identification of potential threats to patient safety. However, the introduction of electronic patient records can bring its own threats to patient safety, particularly in the early stages, when healthcare providers could be using electronic and paper-based records in parallel. One consequence of this dual usage is that the data held in electronic patient record systems can be inaccurate or incomplete, with the potential to compromise patient safety because key data items (eg, drug allergies or important comorbidities) might not be recorded. Other key steps are to ensure the full engagement of clinicians and other professionals, and to provide adequate training to allow them to use eHealth solutions appropriately. It is also important that methods for effective data interchange between IT systems are in place if the full benefits are to be realised, and to limit the workload and errors that can arise from duplicate and unnecessary data entry. Finally, the implementation of IT solutions in healthcare should be linked to an effective research, development and evaluation agenda to allow appropriate lessons to be learnt and to ensure that only systems that have a real impact on patient safety, quality and healthcare efficiency are disseminated more widely.

Acknowledgments

We are very grateful to G-Z Yang and A Darzi of Imperial College London for their guidance and advice and to V Allan for her comments on the paper. We thank S Rawaf at Imperial College London for his help in expanding the membership of the Working Group; O Mytton (Department of Health, England), R Aggarwal (Department of Surgery, Imperial College London) and E Kelley (WHO Patient Safety) for their role in coordinating the working groups and in recommending avenues for exploration by the working groups. Thanks also to R Alshamsan from Imperial College for suggestions about the inclusion of content around mobile phone technology. The Working Group was chaired by A Majeed.

Footnotes

  • Funding The project was funded by WHO Patient Safety. The Department of Primary Care & Public Health at Imperial College London receives funding from the NIHR Biomedical Research Centre scheme and the NIHR Collaboration for Leadership in Applied Health Research & Care scheme.

  • Competing interests JC, AS and AM have received funding for a systematic review on the use of IT to improve patient safety from the NHS Connecting for Health Evaluation Programme (NHS CFHEP 001). AM's department has received funding from Dr Foster Intelligence to develop software tools to help improve patient safety. The other authors report no conflict of interest.

  • Provenance and peer review Not commissioned; externally peer reviewed.