Introduction
Systematic reviews show that clinical decision support systems (CDSSs) can improve the quality of clinical decisions and healthcare processes1 and patient outcomes2, although caution has been urged about balancing the risks of using CDSSs (eg, alert fatigue) against the small or moderate improvements to patient care demonstrated so far.3 Yet, despite these potential benefits, studies indicate that uptake of these tools in clinical practice is generally low, for a range of reasons.4–7 The well-funded National Health Service (NHS) PRODIGY programme is an example of a carefully developed CDSS, commissioned by the Department of Health to support general practitioners (GPs), which nevertheless failed to influence clinical practice or patient outcomes, with low uptake by clinicians in a large-scale trial.8 A subsequent qualitative study revealed that, among other issues—such as the timing of the advice—trust was a concern: ‘I don't trust … practising medicine like that … I do not want to find myself in front of a defence meeting, in front of a service tribunal, a court, defending myself on the basis of a trial of computer guidelines’ [quote from GP].9
Another qualitative study exploring factors hindering CDSSs' uptake in hospital settings found that clinicians perceive that CDSSs ‘may reduce their professional autonomy or may be used against them in the event of medical-legal controversies’.10 Thus, CDSSs may be ‘perceived as limiting, rather than supplementing, physicians’ competencies, expertise and critical thinking’, instead of as working tools that augment professional competence and encourage interdisciplinary working in healthcare settings.10 Similarly, a recent survey carried out by the Royal College of Physicians revealed that senior physicians had serious concerns about using CDSSs in clinical practice, with trust and trustworthiness being key issues (see examples below).11
Trust is an important foundation for relationships between the developers of information systems and users, and is a contemporary concern for policymakers. It has, for example, been highlighted in the House of Lords Select Committee on Artificial Intelligence (AI) report12; the Topol Review13; a number of European Commission communications,14–16 reports17–20 and most recently a White Paper on AI21; and investigated in the context of knowledge systems, for example, for Wikipedia.22 Although it is an important concept, it is not always defined; rather, its meaning may be inferred. For example, the House of Lords Select Committee used the phrase ‘public trust’ eight times,12 but the core concern appeared to be about confidence over the use of patient data, rather than patient perceptions regarding the efficacy (or otherwise) of the AI in question. Such documents appear to take an implicit or one-directional approach to what is meant by ‘trust’.
Notably, the Guidelines of the High-Level Expert Group on AI outline seven key requirements that might make AI systems more trustworthy17, whereas the White Paper focuses on fostering an ‘ecosystem of trust’ through the development of a clear European regulatory framework with a risk-based approach.21 Therefore, in keeping with the drive to promote clinical adoption of AI and CDSSs while minimising the potential risks,13 here we apply Onora O’Neill’s23 24 multidirectional trust and trustworthiness framework25 to explore key issues underlying clinicians’ (doctors’, nurses’ or therapists’) trust in, and use (or non-use) of, AI and CDSS tools that advise them about patient management, and the implications for CDSS developers. In doing so, we do not seek to examine particular existing CDSSs’ merits and flaws in depth, nor do we address the merits of the deployment process itself. Rather, we focus on generic issues of trust that clinicians report having about CDSSs’ properties, and on improving clinician trust in the use and outputs of CDSSs that have already been deployed.
Two points merit attention at this stage. First, O’Neill’s25 framework is favoured because, in the words of Karen Jones, O’Neill ‘has done more than anyone to bring into theoretical focus the practical problem that would-be trusters face: how to align their trust with trustworthiness’.26 Second, some nuance is required when determining who or what is being trusted. For example, Annette Baier makes clear that her own account of trust supposes:
that the trusted is always something capable of good or ill will, and it is unclear that computers or their programs, as distinct from those who designed them, have any sort of will. But my account is easily extended to firms and professional bodies, whose human office-holders are capable of minimal goodwill, as well as of disregard and lack of concern for the human persons who trust them. It could also be extended to artificial minds, and to any human products, though there I would prefer to say that talk of trusting products like chairs is either metaphorical, or is shorthand for talk of trusting those who produced them.27
Similarly, Joshua James Hatherley ‘reserve[s] the label of ‘trust’ for reciprocal relations between beings with agency’.28 Accordingly, our focus is on the application of O’Neill’s25 framework to CDSS developers as ‘trusted’ agents, and the measures they could adopt to become more trustworthy.
O’Neill’s trust and trustworthiness framework: A summary
O’Neill notes that ‘trust is valuable when placed in trustworthy agents and activities, but damaging or costly when (mis)placed in untrustworthy agents and activities’.25 She usefully disaggregates trust into three core but related elements:
1. Trust in the truth claims made by others, such as claims about a CDSS’s accuracy made by its developer. These claims are empirical, since their correctness can be tested by evaluating the CDSS.29
2. Trust in others’ commitments or reliability to do what they say they will, such as clinicians trusting a developer to maintain and update their CDSS products. This is normative: we use our understanding of the world and the actors in it to judge the plausibility of a specific commitment, such as our bank honouring its commitment to send us statements.
3. Trust in others’ competence or practical expertise to meet those commitments. This is again normative: we use our knowledge of the agent in whom we place our trust and our past experience of their actions to judge their competence, such as trust in our dentist’s ability to extract our tooth and the ‘skill and good judgement she brings to the extraction’.25
This approach utilises two ‘directions of fit’: the empirical element (1) in one direction (does the claim ‘fit’ the world as it is?), and the two normative elements (2–3) in another (does the action ‘fit’ the claim?).25 Relatedly, O’Neill has written on the concept of ‘judgement’, distinguishing normative judgement (looking at the world and assessing how it measures up to, or ‘fits’, certain standards) from empirical judgement (an initial factual judgement of what a situation is, which ‘has to fit the world rather than to make the world fit or live up to’ a principle).30
In deciding whether to trust and use a CDSS, a user is likewise making judgements about it. O’Neill’s threefold framework may therefore provide a helpful way to examine the issues in this context. In the following sections we discuss how CDSS developers can use each component of this framework to increase their trustworthiness, and conclude with suggestions on how informaticians might fruitfully apply this framework more widely to understand and improve user–developer relationships. Inevitably, this theoretical approach cannot address every potential issue, but it serves here as a means of organising diverse trust-related concerns into a coherent framework.