
Exploring stakeholder attitudes towards AI in clinical practice
Ian A Scott1,2, Stacy M Carter3 and Enrico Coiera4

1 Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Woolloongabba, Queensland, Australia
2 School of Clinical Medicine, University of Queensland, Brisbane, Queensland, Australia
3 Australian Centre for Health Engagement, Evidence and Values, School of Health and Society, University of Wollongong, Wollongong, New South Wales, Australia
4 Centre for Clinical Informatics, Macquarie University, Sydney, New South Wales, Australia

Correspondence to Professor Ian A Scott; ian.scott@health.qld.gov.au

Abstract

Objectives Different stakeholders may hold varying attitudes towards artificial intelligence (AI) applications in healthcare, and failure by AI developers to take these attitudes into account may constrain acceptance of such applications. We set out to ascertain evidence of the attitudes of clinicians, consumers, managers, researchers, regulators and industry towards AI applications in healthcare.

Methods We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’, and ‘medical’ or ‘healthcare’, and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’ or ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published from 1 January 2010 through 31 May 2021. We selected articles relating to non-robotic, clinician-facing AI applications used to support healthcare-related tasks or decision-making.

Results Across 27 studies, attitudes towards AI applications in healthcare were generally positive, more so among those with direct experience of AI, but provided certain safeguards were met. AI applications that automated data interpretation and synthesis were regarded more favourably by clinicians and consumers than those that directly influenced clinical decisions or potentially impacted clinician–patient relationships. Privacy breaches and personal liability for AI-related error worried clinicians, while loss of clinician oversight and inability to fully share in decision-making worried consumers. Both clinicians and consumers wanted AI-generated advice to be trustworthy, while industry groups emphasised AI benefits and wanted more data, funding and regulatory certainty.

Discussion Certain expectations of AI applications were common to many stakeholder groups, from which a set of dependencies can be defined.

Conclusion Stakeholders differ in some but not all of their attitudes towards AI. Those developing and implementing applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.

  • artificial intelligence
  • decision making, computer-assisted
  • machine learning
  • patient-centered care


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Summary

What is already known?

  • Very little is known about the attitudes of different stakeholders towards artificial intelligence (AI) applications in healthcare.

  • While the AI industry sees its applications as promising for improving healthcare, the views of clinicians, patients and other groups directly involved in delivering or receiving care may not be so favourable.

What does this paper add?

  • This paper provides an exploratory analysis of published reports of the attitudes and perceptions of different stakeholder groups towards AI applications in healthcare.

  • Stakeholder groups hold similar attitudes towards AI on some attributes but differ in their attitudes towards others.

  • In general, attitudes towards AI in healthcare were positive, more so for those with direct experience of AI in care delivery, but with the proviso that certain safeguards were met.

  • Those developing and implementing AI applications should consider policies and processes that bridge attitudinal disconnects between different stakeholders.

Introduction

Artificial intelligence (AI) refers to advanced computer programs that mimic intelligent human behaviours and assist humans with different tasks. Medical AI applications span a spectrum, from diagnosis and disease screening to treatment selection and prognostication,1 and aim to optimise care, improve efficiency and enhance clinician and consumer experience. Despite scores of AI applications having received regulatory approval for use in clinical settings in recent years, and many more having passed the proof-of-concept stage, relatively few that purport to directly assist decision-making have been adopted at scale into clinical practice.2 This limited uptake may be due, at least in part, to misperceptions of what the term AI actually means and to negative attitudes towards AI held by key players in the healthcare ecosystem. Multiple stakeholders, including clinicians, consumers, managers, researchers, regulators and industry, share an interest in the performance and outcomes of AI applications. Their perceptions and expectations of AI may differ, and these need to be understood and considered by AI developers and implementers if AI applications are to be designed and operationalised in ways acceptable to all parties.

Methods

We undertook an exploratory analysis of articles whose titles or abstracts contained the terms ‘artificial intelligence’ or ‘AI’, and ‘medical’ or ‘healthcare’, and ‘attitudes’, ‘perceptions’, ‘opinions’, ‘views’ or ‘expectations’. Using a snowballing strategy, we searched PubMed and Google Scholar for articles published from 1 January 2010 through 31 May 2021. Reference lists of retrieved articles were screened for additional studies. We excluded articles that did not employ a formal survey or interview tool and/or did not report quantified response measures for individual questions among respondents. We selected only articles dealing with non-robotic AI applications used to support clinician-mediated, care-related tasks or decision-making, and excluded mobile or wearable applications that were exclusively consumer facing. Key findings were extracted and summarised in narrative form according to four categories of participants. We used these results to derive a thematic synthesis of stakeholder expectations and corresponding requirements (or dependencies) for developers of AI applications to consider.
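For transparency, the PubMed arm of this search can be expressed as a single boolean query. The sketch below shows one way such a query might be scripted using Biopython’s Entrez interface; this is an illustrative reconstruction only (we do not claim the original retrieval was automated), and the contact email, retmax cap and exact field tags are assumptions.

```python
# Illustrative sketch of the PubMed arm of the search strategy described
# above. This is a reconstruction, not the authors' actual procedure.
# Requires Biopython: pip install biopython
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI asks for a contact address (placeholder)

# Boolean query mirroring the stated title/abstract terms
query = (
    '("artificial intelligence"[Title/Abstract] OR "AI"[Title/Abstract]) '
    'AND ("medical"[Title/Abstract] OR "healthcare"[Title/Abstract]) '
    'AND ("attitudes"[Title/Abstract] OR "perceptions"[Title/Abstract] '
    'OR "opinions"[Title/Abstract] OR "views"[Title/Abstract] '
    'OR "expectations"[Title/Abstract])'
)

# Restrict to the stated publication window: 1 January 2010 to 31 May 2021
handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",       # filter on publication date
    mindate="2010/01/01",
    maxdate="2021/05/31",
    retmax=500,            # arbitrary cap; raise it if the count exceeds this
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PubMed IDs for manual screening
```

The Google Scholar arm and the snowballing of reference lists remain manual steps, as Scholar offers no comparable public API.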

Results

A total of 27 articles were included,3–29 of which most (16; 59%) targeted clinicians,3–18 8 (30%) focused on consumers (including patients),19–26 1 (4%) on health executives27 and 2 (7%) on industry stakeholders comprising AI vendors, researchers and regulators.28 29 Detailed study descriptions are provided in the online supplemental appendix and summary results are listed in table 1. Most studies (23; 85%) used online surveys,3–20 22–24 27 28 of which only three (11%)15 17 24 were designed using the Checklist for Reporting Results of Internet E-Surveys.30 Three (11%) studies undertook face-to-face interviews25 26 29 and one used a paper-based questionnaire.21 A specific definition or example of AI was provided to participants in only 10 (37%) studies,3 8 17 19 22–27 with generic descriptors (eg, ‘computers’ or ‘machines’) used in 6 (22%)5 13 14 16 28 29 and none in 11 (41%).4 6 7 9–12 15 18 20 21 Survey response rates were reported in 11 (41%) studies,5 6 9 12 13 15 17 18 21 23 28 ranging from <0.1% to 66%; 6 (22%) studies7 8 10 11 14 16 reported no response rates and the remainder used convenience samples,19 20 22–27 29 of which one calculated a required sample size.19


Table 1: Stakeholder perceptions of clinical AI applications

Clinicians

Clinicians practising in imaging-based disciplines, where deep machine learning is most advanced, featured in several surveys. In an Australian survey of 632 specialists (ophthalmology (n=305), radiology/radiation oncology (n=230), dermatology (n=97)),3 most (81%) had never actually used an AI application in practice, but predicted AI would improve their field (71%) and impact future workforce needs (86%). Most considered AI had to perform better than specialists in disease screening (64%) and diagnosis (80%). The top three perceived AI benefits were improved patient access to screening, greater diagnostic confidence and reduced specialist time spent on mundane tasks. The top three concerns were outsourcing of application development to large commercial AI companies, clinician liability for AI errors and decreased reliance on specialists (‘do-it-yourself’ medicine). Most respondents (86%) felt their professional colleges were ill prepared for the introduction of AI into practice, citing the need for training curricula, guidelines and working groups with AI expertise.

Radiologist attitudes towards AI were mostly positive. Most surveyed Italian radiologists (n=1032) favoured adopting AI (77%), did not fear job loss due to AI (89%) and anticipated fewer diagnostic errors (73%) and optimised workflows (68%), although at the expense of some reputational loss and decreased demand for their services (60%).4 Among 270 French radiologists, most anticipated fewer errors (81%), reduced time spent on image interpretation (74%) and more time spent with patients (52%), with most wanting ongoing education in AI (69%).5

Trainees and medical students with an interest in radiology expressed more mixed views, with a third of 69 US radiology residents stating that, in hindsight, they might have chosen a different career because of AI.6 Among 484 UK medical students, half (49%) were disinclined towards a radiology career, despite most seeing expertise in AI as benefitting them (89%) and wanting AI education included in medical degrees (78%).7 In Germany, 263 medical students thought AI would improve radiology (86%) and not replace radiologists (83%), and desired further training in AI (71%).8 Canadian students (n=322) expressed similar views, but also voiced concerns about reduced radiologist demand (67%).9

Clinicians in pathology and dermatology also tended to view AI positively. Among 487 survey respondents in pathology from 59 countries, 73% expressed interest or excitement in AI as a diagnostic tool for improving workflow efficiency and quality assurance.10 Fewer than 20% feared displacement or negative career impacts, and most (73%) stated diagnostic decision-making should remain a predominantly human task or one shared equally with AI. While only 25% were concerned about AI errors, opinions about medico-legal responsibility were split, with 44% believing the AI vendor and the pathologist should be held equally liable and 50% believing the pathologist should bear prime responsibility. Most pathologists (93%) supported AI if it allowed more time to be spent on academic or research efforts answering previously intractable questions. Similarly, among 1271 dermatologists from 92 countries, 77% saw AI as improving diagnostic accuracy, particularly with regard to dermatoscopic images, and 80% thought AI should be part of medical training.11 Fewer than 6% saw dermatologists being replaced by AI, although 18% held non-specified fears of negative impacts. In contrast, being replaced by AI was of great concern to 27% of laboratory workers and non-clinical technicians in a survey of 1721 subjects, although most (64%) expressed support for AI projects within their organisation and 40% believed AI could reduce errors and save time in their routine work.12

Clinicians from non-imaging-based disciplines considered the potential of AI to be more limited. Among 720 UK general practitioners, most (>70%) thought human empathy and communication could not be emulated by AI, that value-based care required clinician judgement, and that the benefits of AI would centre on reducing workflow inefficiencies, particularly administrative burdens.13 Similarly, most psychiatrist respondents (n=791) from 22 countries felt AI was best suited to documenting and updating medical records (75%) and synthesising information to reach a diagnosis (54%).14 Among 669 Korean doctors, most (83%) considered AI useful in analysing vast amounts of clinical data in real time, while more than a quarter (29%) thought AI would fail in dealing with uncommon scenarios owing to inadequate data.15 Respondents felt responsibility for AI-induced errors lay with doctors (49%), patients who consented to the use of AI (31%) or the AI companies that created the tools (19%). Most Chinese clinicians (82% of 191) were disinclined to use an AI diagnostic tool they did not trust or whose contribution to improving care they could not understand.16 Among 98 UK clinicians (including 34 doctors, 23 nurses and 30 allied health professionals), 80% expressed privacy concerns and 40% considered AI potentially dangerous (indeed, as bad as nuclear weapons, although this response was primed by reference to a film in which Elon Musk expressed similar sentiments).17 However, 79% also believed AI could assist their field of work and 90% had no fear of job loss. In a survey of 250 hospital employees from four hospitals in Riyadh, Saudi Arabia (121 nurses, 70 doctors, 59 technicians), the majority stated AI could reduce errors (67%), speed up care processes (70%) and deliver large amounts of high-quality, clinically relevant data in real time (65%).18 However, most (78%) thought AI could replace them in their jobs, despite its limitations in being unable to provide an opinion on every patient (66%) or in unexpected situations (64%), being unable to sympathise with patients (67%) and being developed by computer specialists with little clinical experience (68%).

Consumers

Consumer surveys of AI in healthcare are few and yield mixed views depending on who was surveyed and what AI functions were considered. Most clinical trials of AI tools also omit assessment of patient attitudes.31 In general, patients view AI more favourably than non-patients, but only if AI is highly trustworthy and associated with clinician oversight.

An online US survey of 50 individuals identified dehumanisation of clinician–patient relations, low trustworthiness of AI advice and lack of regulatory oversight as significant risks that predominated over potential benefits, although privacy breaches and algorithm bias were not expressed as major concerns.19 In an online survey of 6000 adults from various countries, only 27% of respondents expressed comfort with doctors using AI to influence clinical decisions.20

In a survey of 229 German patients, most (≥60%) favoured physicians over AI for history taking, diagnosis and treatment planning, but simultaneously acknowledged AI could help integrate the most recent scientific evidence into clinician decision-making.21 Most (>60%) preferred physician opinion over AI where the two disagreed, and were less accepting (≤45%) of AI use in cases of severe versus less severe disease. In a UK case-based questionnaire study involving 107 neurosurgery patients, most accepted the use of AI for image interpretation (66%), operative planning (76%) and real-time alerting of potential complications (73%), provided the neurosurgeon remained in control at all times.22 Among 1183 mostly female patients with various chronic conditions who were considering biometric monitoring devices and AI, only 20% considered that the benefits (such as improved access to care, better follow-up and reduced treatment burden) greatly outweighed the risks, and 35% would decline the use of AI-based tools in their care.23 The majority (>70%) of parents of paediatric patients (n=804) reported openness to AI-driven tools if accuracy was proven, privacy and shared decision-making were protected, and care using AI was convenient, of low cost and not in any way dehumanised.24 Among 48 US dermatology patients, most (60%) anticipated earlier diagnosis and better access to care, while 94% saw the main function of AI as offering second opinions to physicians, and perceived AI as having both strengths (69% believed AI to be very accurate most of the time) and weaknesses (85% expected rare but serious misdiagnoses).25 A small study found 18 patients with meningioma wanted assurance that the use of AI to allocate treatment was fair and equitable, that AI-mediated mistakes would be disclosed and reparations to patients would be forthcoming, and that patient consent was obtained for any sharing of health data.26

Healthcare executives

In a global survey of 180 healthcare executives, 40% of respondents overall favoured increased use of AI applications, although this figure varied by jurisdiction, with Australian executives (23%) being least in favour.27 Perceived AI benefits comprised improved cybersecurity (56%), operational efficiency (56%), analytics capacity (50%) and cost savings (43%). However, fewer respondents thought there would necessarily be improvements in patient satisfaction (13%), access to care (10%) or clinical outcomes (6%). Respondents cited success factors for AI implementation as comprising adequate staff training and expertise (73%), explicit regulatory legislation (64%) and mature digital infrastructure (62%).

Industry professionals

Information technology (IT) specialists, technology and software vendors, researchers and regulators—the ‘insiders’ of AI—may harbour attitudes different to those of AI users such as clinicians, consumers and healthcare executives.

In one German survey (n=123; 42 radiologists, 55 IT specialists, 26 vendors), all three groups mostly agreed (>75%) that AI could improve efficiency of care, provided AI applications had been validated in clinical studies, were capable of being understood by clinicians and were referenced in medical education.28 However, only 25% of participants would advocate sole reliance on AI results, only 14% felt AI would render care more human and 93% required confirmation of high levels of accuracy. In interviews involving 40 French subjects (13 physicians, 7 industry representatives, 5 researchers, 7 regulators and 8 independent observers), all agreed that reliable AI required access to large quantities of patient data, but that such access had to be coupled with confidentiality safeguards and greater transparency in how data were gathered and processed, so as to protect the integrity of physician–patient relationships.29 On other matters there were notable differences. Physicians highlighted that many tools lacked proof of efficacy in clinical settings and said they would not assume criminal liability if a tool they could not understand produced errors. Industry representatives wanted greater access to high-quality data while avoiding liability for patient injury, which they believed would hinder tool development. Regulators were urgently searching for robust procedures for assessing the safety of constantly evolving AI tools and for resolving liability for AI error, which would otherwise discourage clinicians and patients from using AI. Researchers with no commercial sponsors wanted more funding and more rapid translation of their findings into practice.

Expectations and dependencies

Our analysis identified certain stakeholder expectations of AI (table 2), the most frequently cited being a need for accurate and trustworthy applications that improve clinical decision-making, workflow efficiencies and patient outcomes, but which do not diminish professional roles. These expectations, which varied in strength of expression across studies, reflect the dominance of clinician surveys among existing studies. The corresponding self-explanatory dependencies were extrapolated by the authors and are aligned with those expressed in authoritative reports from the National Academy of Medicine32 and the WHO.33 These bodies maintain that understanding stakeholder views is essential in formulating clinical AI policy and that AI designers should focus on education, communication and collaboration to bridge attitudinal disconnects between different stakeholders.

Table 2: Expectations and dependencies

Discussion

Overview of findings

The diversity in attitudes towards AI of different stakeholders, and the cautionary sentiments expressed by many, suggest AI applications should be seen as complex sociotechnical systems with many interacting components.34 However, stated positive or negative perceptions of AI may not consistently translate into adoption or resistance, or necessarily track what is possible or even probable in a still-developing technology. The failure of many survey studies to cite concrete examples of AI applications in the prelude to questionnaires (some justified this as a way of avoiding conjuring up negative ‘Terminator’ or ‘cyborg’ images) may have left respondents confused as to what they were being asked to conceptualise and respond to. Response rates were either low (<50%) or incalculable, with respondents more likely than non-respondents to hold strong attitudes. Priming effects in how AI was introduced and questions were worded may have biased some responses. Finally, responses in some studies appeared internally inconsistent: for example, radiology residents and students acknowledged AI would improve their discipline and wanted more AI training, but at the same time feared loss of professional status and held concerns about career choice.

Individuals without direct experience of AI, who perceived it in the abstract, tended to be more guarded in their views than direct users or recipients of AI, who were more optimistic. However, this optimism was more often grounded in expectations of workflow improvements and error minimisation than in perceptions of improved clinical outcomes, greater fairness of access or less risk to patient autonomy compared with current clinical practice. All stakeholders voiced concern that AI lacking human oversight in its design, development and deployment could harm patients, that the expected benefits of AI were by no means guaranteed, and that explicit regulatory standards must be formulated.

Applications which automate image interpretation and data synthesis were regarded more favourably by clinicians than those directly influencing clinical decisions or having the potential to negatively impact clinician–patient relationships or clinician autonomy. Repetitive tasks using digitised data, such as radiological or dermatological diagnosis, are seen as more amenable to being performed by AI applications than interactive or procedural tasks such as consultations or surgical operations.35 Privacy breaches and inability to understand or control AI applications worried clinicians, while loss of clinician oversight and inability to properly share in decision-making worried consumers. There was a common desire to ensure humans remained at the centre of decision-making and to preserve empathetic, contextualised communication in clinical encounters.36 Case studies have confirmed consumers prefer human advisers who can appreciate their unique circumstances, and see AI as assisting, rather than replacing, clinician advice.37

All stakeholders wanted reassurance that AI-generated advice was trustworthy, and this trust was context-dependent, with clinician opinion trumping AI advice where the two were discordant or where decisions related to serious illness. As others have also shown,38 stakeholders tend to be less forgiving of error made by AI than of error made by humans. Who should bear liability for error was much more contentious, both between and within stakeholder groups, and is subject to considerable ongoing debate.39 In a recent US survey of 750 physicians and 1007 members of the public, the majority of both groups believed the physician should be held responsible for AI error, although more of the public held this view than did physicians (66% vs 57%; p=0.02).40 In contrast, more physicians believed the AI vendor should share liability (44% vs 33%; p=0.004), while similar proportions of both groups conferred liability on regulatory authorities (23% vs 23%) or healthcare organisations purchasing the application (29% vs 23%).

Despite their reservations, clinicians overall were keen to receive further education in AI, given its potential to increase diagnostic accuracy and workflow efficiency, and this educational need is increasingly acknowledged.41 While some clinicians in imaging specialties worried about potential negative impacts on job prospects and professional status, most clinicians felt AI could enhance professional satisfaction.

Perceptions and expectations

Understanding what drives stakeholder perceptions of AI is important, as these perceptions critically influence predisposition towards accepting AI.42 Further in-depth research into why differing views of AI are held should assist in formulating operational solutions that accommodate this diversity of views. We note that few studies considered the extent to which age, sex, clinical setting, level of expertise in computing or mathematics, personal beliefs and values, or other individual attributes impacted perceptions of AI in healthcare, although some investigators suggest these attributes are important.43

Notwithstanding these considerations, certain expectations were common to many studies, from which dependencies can be defined. While these dependencies are not necessarily unique to AI applications, being relevant to other computer-based technologies, the rapid evolution and potentially huge scope of AI magnify the imperative for them to be enshrined in the governance and ethics policies of government and industry.

Conclusion

A wide range of stakeholders have an interest in how AI applications can be used to deliver better healthcare. In general, attitudes towards AI are positive, provided certain safeguards are met. While some concerns about AI are common to most groups, others are unique to a select few. The challenge for AI developers and implementers is to understand these various concerns and respond appropriately if their applications are to be adopted at scale.

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information. Not applicable.

Ethics statements

Patient consent for publication


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors IAS conceived the study design, undertook initial literature search and data analysis and wrote first draft of the manuscript. SC critically appraised the manuscript and contributed to additional text and table 2. EC critically appraised the manuscript and provided further text under Methods.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; internally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.