COVID-19 pandemic and artificial intelligence: challenges of ethical bias and trustworthy reliable reproducibility?
Casimir Kulikowski1 and Victor Manuel Maojo2

1Department of Computer Science, Rutgers University, New Brunswick, New Jersey, USA
2Artificial Intelligence, Universidad Politecnica de Madrid, Madrid, Spain

Correspondence to Dr Casimir Kulikowski; kulikows@cs.rutgers.edu


Rapid vaccine breakthroughs for the SARS-CoV-2 viral pandemic have been enabled by genomics-based designs and by biomedical informatics-driven experimentation relying on many algorithmic and artificial intelligence (AI) methods. Great international hopes for informatics to humanely ameliorate pandemics rest on data analytics and AI for predicting COVID-19 spread and for guiding public health prevention measures, diagnoses and treatments. Yet bioinformatics-enabled vaccine development has so far proved to be the only truly indispensable technological work-around, compensating for tragic worldwide shortcomings in pandemic responses and for insufficient epidemiological genomics data infrastructures.1

Any AI in a healthcare informatics system must target recommendations and actions to individual patients, and this requires high-quality, relevant data to be extracted and prioritised from heterogeneous mixes of statistics, for which much more sophisticated and reproducible methods of semantic annotation, knowledge-based design and cross-validation are needed than are commonly used today. These need to build on experience with multiple methods of expert-knowledge representation and inference beyond purely data-driven machine learning. It is especially important to identify high-risk or vulnerable subpopulations, to avoid biased misapplication of machine learning and other AI techniques that could exacerbate healthcare inequalities during the COVID-19 pandemic and beyond2 (a minimal illustration of such a subgroup audit is sketched below).

Natural language analysis has become a major enabling breakthrough for extracting information from the literature and from big data sources, such as electronic health records, laboratory tests, public databases and others. Combined with image analysis, initial prototypes and great expectations have been reported for tracking the COVID-19 pandemic.3 Yet, unfortunately, machine learning methodologies for producing personalised diagnostics and therapeutics remain largely fragile, unexplainable and often insufficiently reproducible.4 Serious medical actions cannot be taken algorithmically and automatically without review and integration into the final decision-making judgments of human experts, who not only draw on their experience in interpreting statistical data subjectively but are also required to take clinical and legal responsibility for the ethical treatment of patients.5 Expert professionals cannot be wholly replaced by algorithmic or AI ‘chatbots’, however admired for efficiency in business or entertainment IT; even in these less ethically challenged fields, automated software rarely satisfies the full needs of customers. An extensive review of AI machine learning methods for predictive modelling of COVID-19 infections from lung CT images concluded that a majority of models were at risk of being biased, leading to unreliable results, noting that: ‘In their current reported form, none of the machine learning models included in this review are likely candidates for clinical translation for the diagnosis/prognosis of Covid-19’.6
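To make the subgroup concern above concrete, the following is a minimal, purely illustrative sketch of auditing a trained classifier's performance across patient subgroups. It is not drawn from the cited studies: the column names, the 0.5 decision threshold and the use of scikit-learn metrics are all assumptions for illustration. Large gaps in sensitivity or AUC between subgroups are one warning sign of the biased misapplication discussed above.

```python
# Purely illustrative sketch (not from the cited studies): audit a
# classifier's per-subgroup performance. Column names, the 0.5 decision
# threshold and the scikit-learn metrics are assumptions for illustration.
import pandas as pd
from sklearn.metrics import recall_score, roc_auc_score

def audit_by_subgroup(df: pd.DataFrame, group_col: str,
                      label_col: str = "label",
                      score_col: str = "score") -> pd.DataFrame:
    """Report per-subgroup sensitivity and AUC; large gaps between
    subgroups are one warning sign of biased misapplication."""
    rows = []
    for group, sub in df.groupby(group_col):
        y_true = sub[label_col]
        y_pred = (sub[score_col] >= 0.5).astype(int)
        rows.append({
            "group": group,
            "n": len(sub),
            "sensitivity": recall_score(y_true, y_pred, zero_division=0),
            # AUC is undefined when a subgroup contains only one class
            "auc": (roc_auc_score(y_true, sub[score_col])
                    if y_true.nunique() > 1 else float("nan")),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: audit_by_subgroup(predictions_df, group_col="age_band")
```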

The above conclusions coincide with the authors’ experience in biomedical AI over many decades.7 Better, thoroughly tested and evaluated models are needed to explain human–machine reasoning under risk and uncertainty. Because the rapid onset of the COVID-19 pandemic required correspondingly urgent responses, most COVID-related AI tools did not undergo comprehensive evaluation, including of their ethical use, although history has shown such evaluation to be essential for clinical systems. An urgent undertaking for making the current, predominantly data-driven AI methods (eg, deep learning) clinically usable is to develop innovative, advanced cognitive models that are humanely explainable, ethically driven, and knowledge-and-experience based. The COVID-19 pandemic reinforces the lesson that novel AI approaches are urgently needed if AI is to be effective, unbiased and reliably trustworthy for patient care in clinical epidemiological settings. These will have to be highly problem focused,8 so that the best expert judgments can exploit specific clinical phenotypes from precision medicine developments and take advantage, interactively, securely and in clearly explained ways, of the latest computational techniques for structured, indexed data and knowledge base design.
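As one hedged illustration of the comprehensive evaluation these tools skipped, the sketch below runs a simple external validation: a model is fitted at a development site and then scored for both discrimination and calibration at an independent external site. Everything here is assumed for illustration (the logistic model, the scikit-learn metrics and the two-site split), not the authors' method; a sharp drop from the development to the external site is a classic signature of the fragility and irreproducibility described above.

```python
# Illustrative sketch only: the kind of external, multi-site validation
# most COVID-era models lacked. The model choice, metrics and two-site
# split are assumptions for illustration, not the authors' method.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

def external_validation(X_dev, y_dev, X_ext, y_ext):
    """Fit on the development site, then report discrimination (AUC)
    and calibration (Brier score) on an independent external site."""
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    report = {}
    for site, X, y in (("development", X_dev, y_dev),
                       ("external", X_ext, y_ext)):
        p = model.predict_proba(X)[:, 1]  # probability of the positive class
        report[site] = {"auc": roc_auc_score(y, p),
                        "brier": brier_score_loss(y, p)}
    return report
```

Reporting both discrimination and calibration matters: a model can rank patients well (high AUC) while still producing systematically miscalibrated risk estimates that mislead clinical decisions.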

In summary, AI has been key to the computational genomic analyses and techniques essential for the exceptionally rapid development of COVID-19 vaccines, but expectations that it will play a substantial clinical role in handling the current pandemic remain premature, resting largely on inadequately tested early prototypes. Lessons learnt during the present COVID-19 pandemic will have to be critically reviewed, and completely new, human-interactive and humanely tested AI developed beyond current data-analytical insights, so that the world can respond to future pandemics more effectively and with unbiased ethical responsibility.


Footnotes

  • Contributors Each author contributed equally to the writing of this article.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Provenance and peer review Commissioned; externally peer reviewed.