Introduction
The British standard ‘BS30440: Validation Framework for the Use of AI in Healthcare’ will be published in the second quarter of 2023.1 It details the evidence that technology developers must provide for the assessment and validation of products using artificial intelligence (AI) in healthcare settings. Healthcare providers can mandate that their suppliers’ products be certified against BS30440 to assure themselves and their service users that the AI product is effective, fair and safe.
For a decade now, there has been growing interest in healthcare AI, especially applications using machine learning approaches such as deep neural networks.2 This interest has accelerated markedly over the past 5 years, with government bodies and regulatory authorities, non-governmental think tanks, professional associations and academic institutions developing a multitude of relevant guidance for their local contexts.3 In the United Kingdom (UK), this includes, for example, the National Institute for Health and Care Excellence Evidence Standards Framework for digital health technologies, NHSX guidance on ‘Artificial Intelligence: how to get it right’, and guidance on algorithmic impact assessment published by the Ada Lovelace Institute. In addition, there are several international reporting guidelines, including SPIRIT-AI4 (Standard Protocol Items: Recommendations for Interventional Trials – Artificial Intelligence) and CONSORT-AI5 (Consolidated Standards of Reporting Trials – Artificial Intelligence), for clinical trials of healthcare AI technologies.
As a result, guidance on how to develop safe and effective AI systems for healthcare is fragmented across hundreds of documents, largely focused on products that would be regulated as medical devices. This has left a lack of formalised guidance for healthcare AI technologies that fall outside the remit of medical device regulations, such as those supporting healthcare resource planning, logistics or general health and well-being. While regional AI regulations (such as the European Union AI Act) are in development, and national regulators (eg, the UK Medicines and Healthcare products Regulatory Agency) develop their own regulatory strategies, there is a clear space for well-designed, auditable standards to ensure safety, effectiveness and equity. Such standards do not replace legislation but can form the basis for novel regulatory approaches.
Against this backdrop of a multitude of guidance and frameworks, BS30440 is unique in two ways. First, it has been developed from an extensive review that synthesises the fragmented healthcare AI guidance landscape into a single, comprehensive framework. It has received additional input from a multidisciplinary panel of experts, two rounds of public consultation and a community and patient engagement panel.
Second, BS30440 represents a fully auditable standard for the assessment of healthcare AI products. Auditing is critical to ensure that healthcare AI products offer demonstrable clinical benefit, reach sufficient levels of performance, integrate successfully and safely into the health and care environment, and deliver inclusive outcomes for all patients, service users and practitioners. Any healthcare AI product that is successfully certified against BS30440 has passed a broad and substantial evaluation across these properties.
This thorough process of synthesis and stakeholder consultation, coupled with clear assessment criteria for auditing, offers significant value to suppliers, who can navigate the complex AI guidance landscape by complying with a single framework.