
Artificial intelligence in healthcare: Opportunities come with landmines
  1. Usman Iqbal1,2,3,
  2. Yi-Hsin Elsa Hsu4,5,6,7,
  3. Leo Anthony Celi8,9,10 and
  4. Yu-Chuan (Jack) Li3,11,12,13
  1. School of Population Health, Faculty of Medicine and Health, University of New South Wales (UNSW), Sydney, NSW, Australia
  2. Global Health and Health Security Department, College of Public Health, Taipei Medical University, Taipei, Taiwan
  3. International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
  4. Biotechnology Executive Master's Degree in Business Administration (BioTech EMBA), Taipei Medical University, Taipei, Taiwan
  5. School of Healthcare Administration, College of Management, Taipei Medical University, Taipei, Taiwan
  6. International Ph.D. Program in BioTech and Healthcare Management, College of Management, Taipei Medical University, Taipei, Taiwan
  7. Department of Humanities in Medicine, College of Medicine, School of Medicine, Taipei Medical University, Taipei, Taiwan
  8. Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  9. Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
  10. Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
  11. Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
  12. Department of Dermatology, Taipei Municipal Wanfang Hospital, Taipei, Taiwan
  13. The International Medical Informatics Association (IMIA), Zürich, Switzerland

  Correspondence to Prof Yu-Chuan (Jack) Li; jack{at}


In the ever-evolving landscape of healthcare, the convergence of artificial intelligence (AI) in breast cancer screening and the transformative potential of natural language processing (NLP) in ensuring patient safety stand as a testament to ground-breaking progress.1 2 The integration of AI technologies in radiology is reshaping diagnostic precision, while NLP’s capacity to decipher and enhance safety protocols heralds a new era in healthcare innovation.3 4

Contrary to common assumptions, the presence of AI does not necessarily guarantee improved efficiency or accuracy in interpreting medical images. It is concerning that AI’s identification of potential errors can paradoxically lead some radiologists to make more mistakes and spend more time analysing images, highlighting the dangers of developing AI systems in isolation.5 This underscores the crucial need for designing collaborative human–AI systems rather than standalone AI solutions, as the full extent of AI’s influence on human behaviour remains unpredictable. Moreover, there is also a critical concern regarding patient safety as a matter of health equity, shedding light on the disparities in medical errors and treatment injuries exacerbated by social determinants of health. This calls for a holistic approach to healthcare delivery that prioritises equity and inclusivity, ensuring that all patients receive the highest standard of care irrespective of their social circumstances.

The two ‘editor’s choice’ articles highlight how crucial it is to embrace AI in breast cancer screening and NLP in enhancing patient safety in healthcare’s dynamic landscape. Högberg et al6 offered an insightful exploration of the potential and challenges associated with AI in breast radiology. Their survey of Swedish breast radiologists’ perspectives on AI in mammography screening revealed an overwhelmingly positive attitude towards its incorporation, highlighting the potential to enhance efficiency in diagnostic processes. However, alongside this optimism, the study uncovered a labyrinth of uncertainties and diverse viewpoints. Concerns loomed over potential risks ranging from medical outcomes to the reshaping of working conditions, as well as crucial uncertainties regarding the assignment of responsibility in AI-mediated medical decision-making.7 The complexity of delineating accountability between AI systems, radiologists and healthcare providers emerged as a pivotal issue demanding resolution.

Addressing these intricacies is paramount for harnessing AI’s potential while upholding the integrity of patient care and professional practice in the evolving landscape of breast radiology.8–10 Most professionals favoured AI as a supportive tool, but divergent opinions arose regarding its optimal integration into the screening workflow. The authors delineated varied views on AI’s impact within the profession, stressing the absence of consensus on the extent of change and the consequent transformation of breast radiologists’ roles.6 Collaboration between human radiologists and AI assistants, expected to reshape the field, remains under investigation. While AI tools show promise, biases in human use of AI limit potential gains; indeed, one study found that radiologists performed best when they either relied solely on AI or worked independently, rather than collaboratively.5 Additionally, optimal delegation policies have been proposed, accounting for time costs and suboptimal use of AI information. Future research should explore AI-specific training for radiologists and the organisational factors influencing human–AI collaboration. A pressing need exists to address multifaceted challenges, particularly in establishing clear ethical, legal and social frameworks governing AI integration in radiology.

The second study, by Tabaie et al,11 uncovered crucial contributing factors from patient safety event reports, showcasing the transformative potential of NLP algorithms in healthcare insights. The study identified and categorised contributing factors within a decade’s worth of self-reported patient safety events from a multihospital healthcare system. These contributing factors, pivotal in precipitating or permitting patient safety incidents, often remain concealed within the intricate narratives of these reports. The authors introduced an NLP method that leverages the unstructured narratives of patient safety event reports to extract ‘information-rich sentences’, unveiling hidden contributing factors and refining their categorisation.11 Automating the identification and categorisation of contributing factors empowers healthcare systems to proactively address safety concerns, fostering quicker responses and continuous improvement. However, the study’s reliance on data from a single health system prompts questions about its generalisability. As healthcare increasingly embraces data-driven decision-making, harnessing NLP emerges as a pivotal strategy in safeguarding patient well-being.12–14 The findings call for further exploration and adoption of NLP-driven approaches to enhance patient safety initiatives globally.
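The pipeline described above — splitting report narratives into sentences, flagging the ‘information-rich’ ones and mapping them to contributing-factor categories — can be sketched in a simplified form. The lexicon, category names and keywords below are illustrative assumptions for exposition only; Tabaie et al used trained NLP models, not the keyword matching shown here.

```python
import re

# Hypothetical contributing-factor lexicon (illustrative only; the real
# study's categories and vocabulary are not reproduced here).
FACTOR_LEXICON = {
    "communication": {"handoff", "miscommunication", "notified"},
    "medication": {"dose", "dosage", "pharmacy", "prescribed"},
    "staffing": {"understaffed", "workload", "shift", "fatigue"},
}

def extract_information_rich_sentences(report: str):
    """Return (sentence, categories) pairs for sentences in a free-text
    safety report that mention any contributing-factor keyword."""
    sentences = re.split(r"(?<=[.!?])\s+", report.strip())
    results = []
    for sentence in sentences:
        tokens = set(re.findall(r"[a-z]+", sentence.lower()))
        categories = {cat for cat, kws in FACTOR_LEXICON.items() if tokens & kws}
        if categories:  # keep only "information-rich" sentences
            results.append((sentence, sorted(categories)))
    return results

report = ("Patient received the wrong dose. "
          "The unit was understaffed during the night shift. "
          "Family was updated the next morning.")
for sentence, cats in extract_information_rich_sentences(report):
    print(cats, "->", sentence)
```

A production system would replace the keyword lookup with a learned sentence classifier, but the structure — sentence segmentation, relevance filtering, then category assignment — mirrors the automated triage the study envisages.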

While both studies mark significant strides in healthcare, certain considerations arise.6 11 The study on AI integration in breast radiology highlights uncertainties and the need for collaborative efforts in establishing clear governance frameworks. The retrospective nature of the NLP study calls for real-time validation and raises concerns about generalisability beyond a singular healthcare system.

Nonetheless, these studies underscore the transformative potential of technology in reshaping healthcare paradigms. Embracing AI in breast cancer screening and leveraging NLP for patient safety initiatives open avenues for proactive, data-driven decision-making. Further evaluation, exploration and widespread adoption of these technologies throughout their life cycle are pivotal in promoting patient safety and elevating healthcare quality, with fairness and equity as central considerations in healthcare globally.15 16

Ethics statements

Patient consent for publication



  • X @MITCriticalData

  • Contributors UI drafted the initial manuscript. Supervision was provided by Y-HEH, LAC and Y-CL. All authors approved the final manuscript as submitted and agreed to be accountable for all aspects of the work.

  • Funding LAC is funded by the National Institute of Health through R01 EB017205, DS-I Africa U54 TW012043-01 and Bridge2AI OT2OD032701, and the National Science Foundation through ITEST #2148451.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.