Introduction
Advancements in computing power have heightened the prominence of artificial intelligence (AI) in healthcare, thanks to its vast array of applications.1 From handling patient questions to assisting in surgeries and pushing forward pharmaceutical innovations, AI offers notable advantages to both patients and the overall healthcare infrastructure.1 According to Statista, the AI healthcare market, which was valued at US$11 billion in 2021, is forecast to reach a staggering US$187 billion by 2030.2 However, neural networks and the deep learning-based AI algorithms derived from them lack clarity and transparency, leading clinicians to hesitate or feel uncertain when making prognosis and diagnosis decisions. The key question is how a technologist can convince a clinician with evidence supporting a system’s responses. This gap between AI algorithms and human understanding is known as the ‘black box’ problem. It is challenging to determine how users can trust that the outcomes of algorithms are correct and appropriate with respect to the analysis of a particular medical situation. There is broad agreement that the explainability of AI must be taken seriously to ensure users’ trust and confidence.2 Although research and development of AI in healthcare has been ongoing for several decades, the current AI hype differs markedly from that of previous eras.3 After 2018, there was a sudden increase in publications in the domain of explainable AI (EXAI), reaching 600 articles per year.
However, only 92 AI-enabled devices were approved by the Food and Drug Administration (FDA) in 2022.4 The development of deep learning technologies has changed the way we look at AI tools and is one of the reasons behind the excitement surrounding AI applications.1 Healthcare costs are skyrocketing, and the emergence of costly new therapies is driving the development of new AI technologies.5 AI promises to alleviate this trend by improving healthcare and making it more cost-effective.6
AI brings a novel element to healthcare and its relationships.7 But revolutions rarely come without side effects, and there are various concerns about the use of AI in healthcare. The massive worldwide use and advancement of AI technologies has raised questions about its societal and individual impact.8 Over the last 5 years, private companies, research institutions and public sector organisations have issued ethical AI principles and guidelines. It needs to be stressed that AI should be used appropriately to ensure ethics, transparency and accountability. Despite an apparent consensus that AI should be ‘ethical’, there is disagreement about what constitutes ‘ethical AI’, as well as about which ethical requirements, technical standards and best practices are needed for its realisation. Calls for regulation and policy are growing louder, which has led to the introduction of the concept of responsible AI.9
In clinical practice, AI already plays a major role in clinical decision support systems, assisting clinicians in making better and faster decisions about the diagnosis and treatment of patients.10 These applications improve the quality of life of patients and healthcare providers, including clinicians. The healthcare industry has aligned its operations with the vision of Healthcare 4.0, but it is approaching another paradigm shift, termed Healthcare 5.0. This shift will make healthcare more analytical and will involve smart controls, virtual reality and three-dimensional modelling.11 Healthcare will thus become smarter, more personalised and more dynamic, incorporating reason-based analytics with innovative business solutions. Advanced 5G networks and IoT-based sensors integrated with mobile communications will make healthcare technologies easier to deliver to remote communities.11 These developments promise to produce vast amounts of medical data, including electronic patient records, images, and wearable and other sensor data. AI algorithms such as neural networks will perform complex analytics on these healthcare data to enable accurate disease prediction, detection and remote treatment.12
The incorporation of AI support in general practice is increasingly essential, particularly under time pressures that can lead to diagnostic oversights. In the UK, instances of missed measles cases and misdiagnosed appendicitis during winter months have highlighted the need for improved diagnostic precision. AI systems, as evidenced by studies such as Miotto et al13 and Rajkomar et al,14 offer enhanced diagnostic accuracy by identifying subtle patterns and recognising specific symptoms that may be overlooked by human practitioners.
In this article, we adopt a multidisciplinary view of the major healthcare AI challenges: explainability, trustworthiness, transparency and usability. We refer to these challenges throughout the manuscript and provide the context necessary to understand them.