Standfirst
New artificial intelligence (AI)-enabled technologies for augmenting clinical decision-making are proliferating, but clinicians will only use them if convinced of their worth. Dr Ian Scott and colleagues outline 10 principles and 5 enabling system strategies that could promote wider adoption by clinicians.
AI-enabled computerised decision support (CDS) tools seek to augment the accuracy and efficiency of clinician decision-making at the point of care. Conventional, task-specific models developed using supervised machine learning (ML) currently underpin most clinician-facing AI-enabled CDS tools, which are dominated by diagnostic imaging and risk prediction applications.1 Large language models (LLMs) and generative AI, such as ChatGPT, are poised to revolutionise care given their ability to converse with clinicians and perform multiple tasks, ranging from clinical documentation to multidomain decision support. However, despite hundreds of regulator-approved ML tools internationally,2 large-scale uptake into routine clinical practice has proved elusive.3 While many non-clinical factors may partly account for this adoption gap,4 ambivalence of frontline clinicians towards using AI tools may also contribute, principally due to a lack of understanding of, and trust in, AI applications.5 6 We propose a set of principles and strategic enablers for achieving broad clinician acceptance of AI tools embedded within electronic medical records (EMRs). As no LLM has yet received regulator approval in clinical care, our focus is on approved conventional ML tools, although we would contend that all the principles discussed pertain equally to LLMs. This work builds on previous experience with digitally enabled rule-based CDS systems7 and is informed by recent research into AI implementation barriers and enablers.3 8 9 There was no patient or public involvement in writing this article as our focus was clinician-facing tools.