
Healthcare artificial intelligence: the road to hell is paved with good intentions
  1. Usman Iqbal1,2,
  2. Leo Anthony Celi3,4,5,
  3. Yi-Hsin (Elsa) Hsu6,7 and
  4. Yu-Chuan (Jack) Li8,9,10
  1. 1Global Health and Health Security Department, College of Public Health, Taipei Medical University, Taipei, Taiwan
  2. 2HealthICT, Department of Health, Tasmania, Australia
  3. 3Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  4. 4Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
  5. 5Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, USA
  6. 6BioTech EMBA Program, International PhD Program in Biotech and Healthcare Management, School of Healthcare Administration, College of Management, Taipei Medical University, Taipei, Taiwan
  7. 7School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
  8. 8Graduate Institute of Biomedical Informatics, College of Medical Science & Technology, Taipei Medical University, Taipei, Taiwan
  9. 9Dermatology Department, Wan-Fang Hospital, Taipei, Taiwan
  10. 10International Association of Medical Informatics (IMIA), Geneva, Switzerland
  1. Correspondence to Prof. Dr Yu-Chuan (Jack) Li; jack{at}tmu.edu.tw


BMJ Health & Care Informatics presents two editors’ choice papers highlighting artificial intelligence (AI) and the challenges of properly evaluating AI-driven implementation tools for system-level healthcare improvement.

The study from Kueper et al1 focused on AI challenges in the primary care setting in Ontario, Canada. The authors provided lessons learnt and guidance for future opportunities to improve primary care using AI for resource management, engaging multiple stakeholders in collaborative consultations. Nine priorities were identified, centred on system-level considerations such as practice context, organisation, and a performance domain devoted to health service delivery and quality of care. The paper highlighted concerns around equity and the digital divide, system capacity and culture, data accessibility and quality, legal and ethical considerations, user-centred design, patient-centredness, and appropriate assessment of AI applications.

The role of AI within the learning health system framework is also reviewed. AI models should be developed and applied to healthcare processes safely and meaningfully to optimise system performance and society’s well-being.2 Moreover, AI offers opportunities for preventive and pre-emptive medicine that are most valuable when its outputs are prompt, accurate, personalised and acted upon expeditiously.3

Sikstrom et al4 analysed a broad range of literature and investigated the biases and disparities that emerge from the application of AI in medicine. The authors proposed three pillars (transparency, impartiality and inclusion) for health equity in clinical algorithms, together with a multidimensional conceptual framework for evaluating AI fairness in healthcare. This framework is designed to ensure that predictive decision support tools promote health equity.

A crucial problem facing AI research is that algorithms are trained and validated on data drawn from specific regions and diseases, resulting in a lack of generalisability across the global AI research landscape.5 6 There is growing evidence that AI tools often perpetuate or even magnify inequities and disparities because of misspecifications in their design and development. Standards and classification systems for AI-based healthcare technologies are required to facilitate research and evaluation, mitigate unintended harm, and maximise benefits to patients and health systems.7 8 All stakeholders need to be involved in validating the feasibility and effectiveness of AI.

The application of AI in medicine faces several challenges. It requires a development lifecycle framework that prioritises health equity and social justice.9 10 Ultimately, AI systems must be continuously monitored to ensure that they do not contribute to outcome disparities across patient demographics.


Footnotes

  • Twitter @UsmanIqbal85, @MITCriticalData, @jaak88

  • Contributors Initial conception and design: UI, LAC, Y-HH, Y-CL. Drafting the manuscript: UI, LAC, Y-HH, Y-CL. Critical revision of the paper: UI, LAC, Y-HH, Y-CL.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; internally peer reviewed.