The clinical artificial intelligence department: a prerequisite for success
  1. Christopher V. Cosgriff1,
  2. David J. Stone2,3,
  3. Gary Weissman4,5,
  4. Romain Pirracchio6 and
  5. Leo Anthony Celi7,8
  1. Department of Medicine, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
  2. Departments of Anesthesiology and Neurosurgery, University of Virginia, Charlottesville, Virginia, USA
  3. Center for Advanced Medical Analytics, University of Virginia School of Medicine, Charlottesville, Virginia, USA
  4. Division of Pulmonary and Critical Care Medicine, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
  5. Palliative and Advanced Illness Research Center, University of Pennsylvania, Philadelphia, Pennsylvania, USA
  6. Department of Anesthesia and Perioperative Care, University of California, San Francisco, San Francisco, California, USA
  7. Laboratory for Computational Physiology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, Massachusetts, USA
  8. Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
  Correspondence to Dr Leo Anthony Celi; LCeli@mit.edu


Is artificial intelligence (AI) on track to usurp the electronic health record (EHR) as the most disappointing application of technology within medicine? The medical literature is increasingly populated with perspective pieces lauding the transformative nature of AI and forecasting an unprecedented disruption in the way we practise.1 2 The available evidence, however, leaves little room for such optimism: there is a stark contrast between AI's scant penetration into medical practice and the expectations set by its presence in our daily lives.3 But medical AI need not follow the path of the EHR, a clinical tool that, for many, created more workflow woes than it fixed.4–6

As Atul Gawande so eloquently put it, "… we've reached a point where people in the medical profession actively, viscerally, volubly hate their computers".7 If AI is to add some unavoidable disruption to clinical workflows, it should do so as painlessly as possible, so as to avoid compounding, and perhaps even to reduce, clinician burnout. We believe this will require the combined, cross-disciplinary expertise of an organised and dedicated clinical AI department.

Historical precedents in radiology and laboratory medicine offer lessons for how to steward a new tool into the realm of safe and effective clinical use. Such accomplishments were due, in large part, to the gathering of relevant stakeholders under a single department. This approach ensured that the necessary clinical participants took the reins rather than ceding them to third-party developers. Thus, to secure AI’s place in the annals of successful medical technologies, we propose the establishment of the first departments of clinical AI.

This proposal is deeply rooted in the history of American medicine. In 1890, the first X-ray image was generated at the University of Pennsylvania, though unbeknownst to its creators, Goodspeed and Jennings.8 When the significance of this emerging technology was finally appreciated after the discovery of Roentgen rays, Goodspeed began collaborating informally with surgeons to deploy the technology clinically. This quickly led to the first division, and subsequently department, of radiology. Under the auspices of this department, clinicians, researchers, engineers, managers and ethicists worked together on a shared mission to pioneer technologies and methods that are intrinsic to the way medicine is practised today.

Within academic medicine, algorithms are currently developed in silos by researchers interested in the intersection of healthcare and machine learning. This has produced a panoply of published models trained on health data, yet only a handful have been prospectively evaluated in patients, and when models have been evaluated prospectively against clinical outcomes, the results have frequently been unimpressive.9–12 Meanwhile, the same multibillion-dollar technology companies that exploit patterns in our digital behaviour to sell advertising have founded entire research programmes around health AI. We would argue that the lack of clinical results is the byproduct of a lack of coherence, leadership and vision. Unless we change course, we should expect AI deployment in healthcare to progress much as the EHR revolution did before it: driven mainly by corporate and administrative benefits, without demonstrable improvements in processes or outcomes for our patients or ourselves. As in the development of other areas that required full departmental support, the decision to establish a department of clinical AI has several logistical and policy implications.

First, leveraging AI to improve healthcare poses challenges on several fronts, from technical implementation to institutional policy. A chief mandate of a department of clinical AI would therefore be to make health centres AI Ready, a concept we illustrate in figure 1. These initiatives should lead to the development of models that directly benefit the health of our patients, pioneer research that advances the field of clinical AI, focus on its integration into clinical workflows and foster educational programmes and fellowships to ensure we are training current practitioners as well as the next generation of leaders in this field. In addition to these traditional tripartite roles, AI departments should also play an essential role in the implementation, utilisation and enhancement of the infrastructure that underlies AI solutions. Central to this mission will be removing barriers to data access, and the proposed department would therefore assume partnered stewardship of the institution's data as part of its mandate. While the role of information technology specialists in maintaining a health system's computational infrastructure should not be subsumed, the department would be responsible for the integration, research and production databases that support its broader mission. By centralising this role, we would finally bridge the chasms among ideas, development and effective deployment.

Figure 1

Medical artificial intelligence (AI) departments will provide the structure by which institutions can become AI Ready.

Second, these new departments will be instrumental as our country's financial and regulatory environments shift to acknowledge and incorporate AI's potential to improve care. The tasks and benefits involved may require a modified reimbursement model, such as that in place for laboratory tests. But as has been the case for corporate AI (eg, at Amazon), demonstrated improvements in clinical and financial outcomes could provide financial incentives to support the clinical use of AI and drive the wider deployment of predictive models. Market incentives will no doubt promote the proliferation of companies seeking to sell models to health systems. However, the need for model recalibration precludes simply buying and deploying third-party models.13 Clinical AI departments will work to ensure that health systems are poised for safe implementations tailored to their specific patient populations, and that the necessary data analytics are readily available for negotiating with payers.

Third, the clinical utilisation of AI will require standardisation, including the establishment of best practice guidelines for workflow integration design, performance assessment and model fairness. Candidate models should first be tested on held-out contemporary data to assess performance and safety, and only then evaluated prospectively, first without and then with deployment, in terms of accuracy and impact on clinical end points. From there, regular reassessments of model calibration must occur to ensure that the relationship between the inputs and the outputs has not changed, and to refit the model where it has. This requirement for reassessment and recalibration in a specific clinical context has become evident when researchers have attempted to apply one site's data sets across institutional, system or geographic boundaries: AI applications can be sensitive to small input changes, and this potential fragility must be carefully and expertly monitored.14 While AI intrinsically manifests some degree of 'black box' characteristics, the functionality and reasons for its results should be as transparent and explicable as possible so that clinicians can incorporate these modalities into their workflows.15 As the introduction of information technology in medicine has demonstrated, successful technical solutions, in both software and hardware, are far harder to accomplish when decisions are not black and white and lives are at stake.
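
To make the monitoring task concrete, the calibration reassessment described above can be sketched in code. The following is a minimal illustration, not a prescribed method: the function names, the binning scheme and the drift tolerance are all hypothetical choices that a clinical AI department would set locally, and metrics such as the Brier score and a binned calibration gap are only two of several options.

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted risk and observed outcome (lower is better)."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))

def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Weighted average gap between mean predicted risk and observed event rate per risk bin."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Include the right edge only in the final bin so probabilities of 1.0 are counted.
        mask = (y_prob >= lo) & ((y_prob <= hi) if hi == 1.0 else (y_prob < hi))
        if mask.any():
            ece += mask.mean() * abs(y_prob[mask].mean() - y_true[mask].mean())
    return float(ece)

def needs_recalibration(y_true, y_prob, baseline_brier, tolerance=0.02):
    """Hypothetical drift check: flag the model when performance on recent
    held-out data degrades beyond a locally chosen tolerance."""
    return brier_score(y_true, y_prob) > baseline_brier + tolerance
```

In practice, such a check would run on a scheduled basis against recent outcomes, with the flagged model refitted, as the paragraph above describes, whenever the input-output relationship has drifted.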

Twenty years into the 21st century, there is little question that AI will be a defining technology for the foreseeable future. We need visionary clinicians working with expert technical collaborators to establish the organisational structures requisite to translate technological progress into meaningful clinical outcomes. Given the innumerable ways in which medicine could be improved, the promise of AI in healthcare will only be realised when the scattered champions of this movement emerge from their silos and begin formally working as a team under the same roof. Our patients are waiting for us to make use of these advances to improve their care, and every day wasted is a missed opportunity. Therefore, we ask: who will establish the first department of clinical AI?

References

Footnotes

  • Twitter @cosgriffc, @MITCriticalData

  • Contributors CVC produced the original draft under guidance from LAC. DS, GW and RP then gave input and provided edits. LAC oversaw the incorporation of these edits, led the discussions around the principal concepts and approved the final draft.

  • Funding LAC is funded by the National Institutes of Health through NIBIB R01 EB017205.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.
