Special Issue

Operationalising Fairness in Medical Algorithms

The world is abuzz with applications of data science in almost every field: commerce, transportation, banking, and more recently, healthcare.

Data is proliferating not only because of widespread digital health record adoption, but also because of the growing use of wireless technologies for ambulatory monitoring.

These breakthroughs are due to rediscovered and newly created algorithms, improved computing power and, most importantly, the availability of larger and more reliable datasets with which to train these algorithms. From machine learning to artificial intelligence, data science is expected to transform healthcare. Such technological progress offers paths towards discoveries and more precise diagnoses and treatments not previously possible. However, numerous critical ethical issues have been identified, spanning privacy, data protection, transparency and explainability, responsibility, and bias.

It is widely recognised that many machine learning models and tools may have a discriminatory impact, inadvertently encoding and perpetuating societal biases and thereby contributing to health inequities.

We propose that machine learning algorithms should not be evaluated solely on accuracy, but also on how they might affect disparities in patient outcomes.
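As an illustration of what such a joint evaluation might look like, the sketch below scores a classifier on overall accuracy alongside a simple fairness gap (the difference in true-positive rate between two patient groups). The data, the group labels, and the choice of metric are all illustrative assumptions, not requirements of this call.

```python
# Minimal sketch: evaluating a classifier on accuracy *and* a fairness gap.
# All inputs here are toy data; the true-positive-rate gap is one of many
# possible fairness criteria.

def tpr(y_true, y_pred):
    """True-positive rate: fraction of actual positives predicted positive."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives) if positives else 0.0

def evaluate(y_true, y_pred, group):
    """Return (overall accuracy, max between-group TPR gap)."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    tprs = {}
    for g in sorted(set(group)):
        idx = [i for i, gi in enumerate(group) if gi == g]
        tprs[g] = tpr([y_true[i] for i in idx], [y_pred[i] for i in idx])
    gap = max(tprs.values()) - min(tprs.values())
    return accuracy, gap

# Toy example: the model misses positives in group "b" more often.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
acc, gap = evaluate(y_true, y_pred, group)
# acc is high (0.875), yet the TPR gap of 0.5 reveals that errors are
# concentrated in one group -- exactly the disparity accuracy alone hides.
```

Reporting both numbers makes the trade-off explicit: a model can look excellent on aggregate accuracy while performing markedly worse for one subgroup.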

Aims and scope

This special issue aims to bring together the growing community of healthcare practitioners, social scientists, policymakers, engineers and computer scientists to design and discuss practical solutions to address algorithmic fairness and accountability.
We are inviting papers that explore ways to reduce machine bias in healthcare or create algorithms that specifically alleviate inequalities.

We particularly welcome articles of the following type:
(1) Computational methods that measure and mitigate bias in machine learning models for use in practical healthcare settings. Results should be validated in simulated settings and extended to real healthcare scenarios.
(2) Biomedical application papers highlighting the importance of fairness, exposing ethical challenges, and proposing ways in which these biases can be mitigated.
(3) Solutions highlighting ways in which knowledge from causal models can be used to create recommendation systems that are actionable in healthcare settings.

Topics of investigation include but are not limited to:

● How should we define, measure and deal with bias in health-related datasets? Can we design practices that limit these bias effects?
● What are formal fairness criteria for medical algorithms? How should they be evaluated?
● How can we use causal models for fairness and actionable change in healthcare systems? How can we design and evaluate the effect of interventions?
● What are the dangers of incorporating fairness into computational problems?
● What does human review of models entail if models are available for direct use?

Guest Editors

To discuss possible contributions please contact one of the guest editors:
Miguel Angel Armengol de la Hoz, Regional Ministry of Health of Southern Spain
Leo Anthony Celi, Harvard Medical School
Sonali Parbhoo, Harvard University
Judy Wawira Gichoya, Emory University

Submission information

Submit directly via the BMJ Health & Care Informatics submission site and select Operationalising Fairness in Medical Algorithms from the dropdown menu.
For information on article types, please see our information for authors.
Article processing charges (APCs) for this special issue will be 50% of the usual APC.


Deadline for submission: 30th September, 2021
Target date of publication: 15th January, 2022