BMJ Health & Care Informatics presents two editors’ choice papers on artificial intelligence (AI) and the challenges of properly evaluating AI-driven implementation tools intended to improve healthcare at the system level.
The study from Kueper et al1 focused on the challenges of applying AI in the primary care setting in Ontario, Canada. The authors engaged multiple stakeholders in collaborative consultations, offering lessons learnt and guidance on future opportunities to improve primary care through AI-supported resource management. Nine priorities were identified, centred on system-level considerations such as practice context, organisation and a performance domain devoted to health service delivery and quality of care. The paper highlighted concerns around equity and the digital divide, system capacity and culture, data accessibility and quality, legal and ethical considerations, user-centred design, patient-centredness and appropriate assessment of AI applications.
The role of AI within the learning health system framework is also reviewed. AI models should be developed and applied to healthcare processes safely and meaningfully to optimise system performance and society’s well-being.2 Moreover, AI offers opportunities for preventive and pre-emptive medicine that are most valuable when predictions are prompt, accurate, personalised and acted on expeditiously.3
Sikstrom et al4 analysed a broad range of literature and investigated the biases and disparities that emerge from the application of AI in medicine. The authors proposed three pillars (transparency, impartiality and inclusion) for embedding health equity in clinical algorithms, together with a multidimensional conceptual framework for evaluating AI fairness in healthcare. The framework is designed to ensure that predictive decision support tools promote health equity.
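By way of illustration only, and not as a rendering of Sikstrom et al’s framework itself, fairness evaluation of this kind often begins by comparing a model’s error rates across demographic groups. The following minimal sketch computes per-group true and false positive rates and an equalised-odds-style gap; the subgroups, predictions and labels are entirely hypothetical.

```python
# Illustrative sketch: quantify whether a binary clinical classifier
# errs at different rates for different demographic groups.
import numpy as np

def group_tpr_fpr(y_true, y_pred, groups):
    """Return per-group true/false positive rates for a binary classifier."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, yp = y_true[mask], y_pred[mask]
        tpr = (yp[yt == 1] == 1).mean() if (yt == 1).any() else float("nan")
        fpr = (yp[yt == 0] == 1).mean() if (yt == 0).any() else float("nan")
        rates[g] = (tpr, fpr)
    return rates

# Hypothetical predictions for two patient subgroups
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = group_tpr_fpr(y_true, y_pred, groups)
tprs = [r[0] for r in rates.values()]
print(rates, "TPR gap:", max(tprs) - min(tprs))
```

A gap of zero would indicate the model detects true cases at the same rate for every group; which fairness criterion is appropriate, however, depends on the clinical context.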
A crucial problem facing AI research is that algorithms are often trained and validated on data drawn from specific regions and diseases, resulting in a lack of generalisability across the global AI research landscape.5 6 There is growing evidence that AI tools which perpetuate or even magnify inequities and disparities often do so because of misspecifications in their design and development. Standards and classification systems for AI-based healthcare technologies are required to facilitate research and evaluation, mitigate unintended harm and maximise benefits to patients and health systems.7 8 All stakeholders need to be involved in validating the feasibility and effectiveness of AI.
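One concrete expression of this generalisability concern is external validation: checking a model not only on held-out data from its development site but on a population it never saw. The sketch below assumes Python with scikit-learn, and the two synthetic “sites” are hypothetical stand-ins for distinct patient populations.

```python
# Illustrative only: internal vs external validation of a clinical classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# "Site A" is the development cohort; "Site B" uses a different random seed,
# giving a different feature-outcome relationship -- a crude stand-in for the
# population differences between regions described above.
X_a, y_a = make_classification(n_samples=2000, n_features=10, random_state=0)
X_b, y_b = make_classification(n_samples=2000, n_features=10, random_state=1)

X_tr, X_te, y_tr, y_te = train_test_split(X_a, y_a, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A model that looks strong on its own site can degrade sharply elsewhere.
print("internal AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
print("external AUROC:", roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
```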
The application of AI in medicine faces several challenges. It requires a development lifecycle framework that prioritises health equity and social justice.9 10 Ultimately, AI systems must be continuously monitored to ensure that they do not contribute to outcome disparities across patient demographics.
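As a hedged sketch of what such continuous monitoring could look like in practice, the fragment below recomputes a subgroup performance gap on each new batch of scored patients and flags it when the gap exceeds a chosen tolerance. The field names, batch structure and 0.05 tolerance are all hypothetical.

```python
# Illustrative sketch: flag batches where subgroup performance diverges.
from statistics import mean

def subgroup_gap(records, metric_key="correct", group_key="group"):
    """Largest pairwise difference in a per-group accuracy-style metric."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r[metric_key])
    means = [mean(v) for v in by_group.values()]
    return max(means) - min(means)

def monitor(batches, tolerance=0.05):
    for i, batch in enumerate(batches):
        gap = subgroup_gap(batch)
        status = "ALERT" if gap > tolerance else "ok"
        print(f"batch {i}: subgroup gap={gap:.2f} [{status}]")

# Hypothetical batches of (prediction correctness, patient subgroup) records
monitor([
    [{"correct": 1, "group": "A"}, {"correct": 1, "group": "B"}],
    [{"correct": 1, "group": "A"}, {"correct": 0, "group": "B"}],
])
```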