Global disparity bias in ophthalmology artificial intelligence applications
  1. Luis Filipe Nakayama1,
  2. Ashley Kras2,
  3. Lucas Zago Ribeiro1,
  4. Fernando Korn Malerbi1,
  5. Luisa Salles Mendonça1,3,
  6. Leo Anthony Celi4,5,
  7. Caio Vinicius Saito Regatieri1 and
  8. Nadia K Waheed3
  1. São Paulo Federal University, São Paulo, SP, Brazil
  2. Retinal Imaging Lab, Harvard University, Cambridge, Massachusetts, USA
  3. Tufts Medical Center, New England Eye Center, Boston, Massachusetts, USA
  4. Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
  5. Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA

  Correspondence to Luis Filipe Nakayama; nakayama.luis@gmail.com


Machine learning (ML) is a branch of artificial intelligence (AI) that performs classification, prediction and/or optimisation tasks. Loosely analogous to networks of brain neurons, neural networks pass information through multiple connected layers before outputting a label, resembling aspects of human reasoning.1
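To make this idea concrete, the sketch below is a purely illustrative toy example (not a description of any system cited in this article): a two-layer network in Python passes a feature vector through connected layers and outputs a label. The feature size, weights and class names are all hypothetical, and real systems learn their weights from data rather than drawing them at random.

```python
import numpy as np

# Illustrative only: a tiny two-layer network mapping an input feature
# vector to a class label ("no DR" vs "referable DR"). Weights are random
# here; in practice they are learned from labelled training data.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = rng.normal(size=16)                          # hypothetical image-derived features
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)   # first (hidden) layer
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)    # output layer, two classes

h = relu(W1 @ x + b1)        # information flows through a hidden layer
probs = softmax(W2 @ h + b2) # class probabilities
label = ["no DR", "referable DR"][int(np.argmax(probs))]
print(probs, label)
```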

AI is already influencing care in many areas, such as radiology, pathology, dermatology and ophthalmology. In ophthalmology, a variety of multimodal imaging examinations are fundamental in the screening, diagnosis and monitoring of diseases, and they provide the data input for AI development.2 Some applications, such as the IDx Technologies system (Coralville, USA), which was approved by the Food and Drug Administration 3 years ago, are already used in clinical practice as screening tools.2 3 Surprisingly, algorithms can even predict gender, age and cardiovascular risk from retinal images.2 4 5 AI may also reduce subjectivity and interobserver disagreement in clinical practice.1

Especially in low-income countries (LICs) and low-to-middle-income countries (LMICs), causes of preventable blindness such as diabetic retinopathy (DR) and age-related macular degeneration could be addressed with screening programmes, home monitoring systems or telemedicine. AI-based tools could systematise screening and improve eye care in remote areas.6

ML requires large, high-quality, well-labelled and representative datasets, but at present, ophthalmological ML-ready datasets are available from only a few countries. One hundred and seventy-two countries have no representation in training and validation cohorts.7

Although data from every country remain a distant goal, equitable representation of all continents, all ethnicities and as many countries as possible is desirable to reduce ML bias. Demographic information and other social determinants of health are typically not contained in these datasets, making it challenging to interrogate algorithms for bias.7 8 High-quality data are also fundamental for setting-specific algorithm validation, which is essential before AI implementation.

The performance of available automated DR algorithms varies considerably in the real world owing to limited training data, heterogeneity in disease presentation and suboptimal image quality.9 In addition, diverse sociodemographic and ethnic representation is necessary if generalisability is a goal.8
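One practical consequence is that an aggregate accuracy figure can hide subgroup differences. The hedged sketch below assumes a hypothetical table of model predictions with a demographic label attached (the column names and data are invented for illustration) and computes sensitivity and specificity per group, which is one simple way to probe generalisability when such metadata are available.

```python
import pandas as pd

# Hypothetical example: per-subgroup sensitivity/specificity for a binary
# referable-DR classifier. Columns and values are invented for illustration.
df = pd.DataFrame({
    "country": ["BR", "BR", "US", "US", "IN", "IN", "IN", "BR"],
    "y_true":  [1, 0, 1, 0, 1, 1, 0, 1],   # ground-truth referable DR
    "y_pred":  [1, 0, 0, 0, 1, 0, 0, 1],   # model output
})

def sens_spec(g):
    # Confusion-matrix counts within one subgroup.
    tp = ((g.y_true == 1) & (g.y_pred == 1)).sum()
    fn = ((g.y_true == 1) & (g.y_pred == 0)).sum()
    tn = ((g.y_true == 0) & (g.y_pred == 0)).sum()
    fp = ((g.y_true == 0) & (g.y_pred == 1)).sum()
    return pd.Series({
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "n": len(g),
    })

print(df.groupby("country")[["y_true", "y_pred"]].apply(sens_spec))
```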

In LICs and LMICs, there is a growing gap between the ophthalmologist workforce and the population size. Two-thirds of ophthalmologists live in only 17 countries, and within those countries most practise in urban centres.10 AI applications can expand access to eye care and may help reduce preventable blindness, which currently accounts for an estimated 80% of blindness cases.

In addition to diversifying the datasets used to build AI technology in healthcare, we must invest in building capacity for health informatics and data science across countries. International collaboration between research groups should be incentivised to narrow disparities in AI research and, ultimately, to reduce global blindness.

Ethics statements

Patient consent for publication

Ethics approval

UNIFESP Ethics Institutional Review Board number: CAAE 33842220.7.0000.5505 / n:0698/2020.

References

Footnotes

  • Twitter @MITCriticalData

  • Contributors All authors designed and cowrote the draft of the paper and critically reviewed and edited the final draft. LFN and AK took responsibility for the article concept.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.