
Linking prediction models to government ordinances to support hospital operations during the COVID-19 pandemic
Prem Rajendra Warde1, Samira S Patel1, Tanira D Ferreira2, Hayley B Gershengorn2, Monisha C Bhatia3,4, Dipen J Parekh5,6, Kymberlee J Manni7 and Bhavarth S Shukla8

  1. Department of Clinical Care Transformation, University of Miami Hospital and Clinics, Miami, Florida, USA
  2. Division of Pulmonary, Critical Care, and Sleep Medicine, Department of Medicine, University of Miami Miller School of Medicine, Miami, Florida, USA
  3. Department of Medicine, Jackson Memorial Hospital, Miami, Florida, USA
  4. Department of Medicine, University of Miami School of Medicine, Miami, Florida, USA
  5. Department of Urology, University of Miami Miller School of Medicine, Miami, Florida, USA
  6. University of Miami Health System, Miami, Florida, USA
  7. University of Miami Hospital and Clinics, Miami, Florida, USA
  8. Division of Infectious Diseases, Department of Medicine, University of Miami Miller School of Medicine, Miami, Florida, USA

Correspondence to Prem Rajendra Warde; prw37{at}med.miami.edu

Abstract

Objectives We describe a hospital’s implementation of predictive models to optimise emergency response to the COVID-19 pandemic.

Methods We were tasked to construct and evaluate COVID-19-driven predictive models to identify possible planning and resource utilisation scenarios. We used system dynamics to derive a series of chained susceptible, infected and recovered (SIR) models. We then built a discrete event simulation using the system dynamics output and bootstrapped electronic medical record data to approximate the weekly effect of tuning surgical volume on hospital census. We evaluated performance via a model fit assessment and a cross-model comparison.

Results We outlined the design and implementation of predictive models to support management decision making in areas impacted by COVID-19. The fit assessments indicated the models were most useful from 30 days after the onset of local cases. We found our subreports were most accurate up to 7 days after each model run.

Discussion Our model allowed us to shape our health system’s executive policy response to implement a ‘hospital within a hospital’: one for patients with COVID-19 within a facility still able to care for the regular non-COVID-19 population. The surgical schedule was modified according to models that predict the number of new patients with COVID-19 who require admission. This enabled our hospital to coordinate resources to continue to support the community at large. Challenges included the need to frequently adjust or create new models to meet rapidly evolving requirements, to communicate the models and drive their adoption, and to coordinate the needs of multiple stakeholders. The models we created can be adapted to other health systems and provide a mechanism to predict local peaks in cases and inform hospital leadership regarding bed allocation, surgical volumes, staffing and supplies.

Conclusion Predictive models are essential tools in supporting decision making when coordinating clinical operations during a pandemic.

  • BMJ Health Informatics
  • medical informatics
  • information management
  • information science
  • information systems



This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Summary

What is already known?

  • Susceptible, infected and recovered (SIR) models have been deployed to help hospital systems adjust to the COVID-19 pandemic.

What does this paper add?

  • Refitting an SIR model at expected pandemic inflection points based on government ordinances is an effective way to produce a realistic, reliable estimate of projected volume in the form of subepidemics.

  • Predictive modelling assists with maintaining safe levels of routine hospital activity including elective surgery.

Introduction

COVID-19 was first identified in December 2019 among cases of pneumonia of unknown cause in Wuhan, China.1–3 The WHO declared COVID-19 a pandemic on 11 March 2020 and called for coordinated mechanisms to support preparedness and response efforts across health sectors.4 Predictive models can be effective support tools for a health system’s pandemic response.5 6 Susceptible, infected and recovered (SIR) modelling is a technique commonly used in epidemics due to its relative simplicity and versatility.7 Many published accounts of such models in the COVID-19 literature involve similar inputs, including doubling time and census characteristics of a local region or hospital,8 but most assume a static condition within the population and do not account for dynamics created by local government interventions and corresponding public behaviours.

The spread of COVID-19 is likely best described not as a single epidemic but as a series of subepidemics, commonly referred to as ‘waves’, whose onset, trajectory and offset are influenced by local policy action.9 Local policy action in our community consisted of periodic implementation, enforcement and relaxation of emergency orders regarding universal masking and closure of businesses, schools, public spaces, etc.10 Here we present University of Miami simulation (UM-SIM), a reporting tool composed of a series of simulations created using discrete event simulation (DES) and system dynamics (SD) theories applied to clinical operations in real time within a single urban academic health system.11 Using changes in emergency orders during the pandemic as key drivers of model inflection points, we also explain how this reporting tool assisted our organisation with decisions relevant to deferrals and recommencement of elective surgical procedures. Finally, we provide a comparison of our UM-SIM models to other commonly used simulation models.

Methods

Setting, data sources and computation

University of Miami Hospitals and Clinics (UMHC) is an academic health system encompassing three acute care hospitals and over 140 outpatient care clinics, offering primary and specialty medical and surgical care. Prior to the onset of the pandemic, our hospitals together comprised 466 beds, 53 of which were designated as intensive care unit (ICU) beds. In response to the pandemic, we converted 188 medical/surgical rooms to negative pressure, of which 146 could be converted to ICU level of care. The surgical schedule was modified according to the number of COVID-19 cases projected to require hospital and ICU admission. Models summarised here were generated from data from 11 March 2020 to 26 August 2020. Patient-level data were retrieved from UChart, UMHC’s Epic electronic health record (Epic Systems, Verona, Wisconsin; www.epic.com). In addition, US Census data provided community demographic information,12 and Florida Department of Health (FDOH) COVID-19 case data supplied community prevalence.13 All computational techniques were completed using Python (Python Software Foundation).14–16

UM-SIM report

Our baseline (pre-emergency order) SIR model,7 an SD-based simulation model, incorporated the initial susceptible, infected and recovered populations, the infected population growth rate, the median time to recovery and the time period to model over. The model assumes that the reinfection rate is 0%.17 The initial infected population was the count of cases on 11 March (the start of the model) and the initial recovered population was zero. The median time to recovery was assumed to be 10 days.17
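A minimal discrete-time sketch of this baseline SIR model, in Python (the language used for our computations), is shown below. The mapping from the observed growth rate to a transmission rate (beta = growth rate + gamma) and the parameter names are illustrative assumptions, not the exact UM-SIM implementation.

# Minimal discrete-time SIR sketch using the inputs described above.
# Assumption: the infected-population growth rate approximates beta - gamma,
# so beta is recovered as growth_rate + gamma; this is illustrative only.
import numpy as np

def simulate_sir(population, initial_infected, initial_recovered,
                 growth_rate, recovery_days=10, horizon_days=120):
    """Return daily susceptible, infected and recovered counts for one (sub)epidemic."""
    gamma = 1.0 / recovery_days            # recovery rate (median time to recovery = 10 days)
    beta = growth_rate + gamma             # assumed transmission rate
    S = np.empty(horizon_days)
    I = np.empty(horizon_days)
    R = np.empty(horizon_days)
    S[0] = population - initial_infected - initial_recovered
    I[0] = initial_infected                # case count on 11 March at model start
    R[0] = initial_recovered               # zero at model start
    for t in range(1, horizon_days):
        new_infections = beta * S[t - 1] * I[t - 1] / population
        new_recoveries = gamma * I[t - 1]
        S[t] = S[t - 1] - new_infections
        I[t] = I[t - 1] + new_infections - new_recoveries
        R[t] = R[t - 1] + new_recoveries   # 0% reinfection: recovered never return to susceptible
    return S, I, R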

To model sequential subepidemics, we created a ‘chained’ series of SIR models triggered by the changes to local emergency orders by Miami-Dade County; we modelled these as impacting the epidemic on 6 May and 10 June 2020, 10 days after Miami-Dade Emergency Orders 21–40 and 24–20, respectively.18 Each subepidemic model defined the initial infected and recovered population based on the end output of the previous subepidemic model in sequence.14–16 The growth rate for each subepidemic was defined by running a logistic transformed linear regression on the cumulative cases up until the to-date maximum daily cases for each subepidemic. This approach resulted in growth rates that were adjusted at each model run until the true peak was met. Once the true peak was met, the growth rate became fixed for that subepidemic.
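The sketch below illustrates one plausible reading of this chaining procedure, reusing simulate_sir from the sketch above: the growth rate for each subepidemic is estimated by an ordinary least squares fit to logit-transformed cumulative cases, and each subepidemic is seeded with the end state of the previous one. The choice of logit ceiling and the variable names are illustrative assumptions rather than the exact UM-SIM procedure.

# Hedged sketch of the chained SIR construction; reuses simulate_sir from the sketch above.
# Assumption: "logistic transformed linear regression" is read here as fitting
# logit(C_t / ceiling) = a + r * t by ordinary least squares, with the ceiling taken
# as the modelled susceptible pool; this choice is illustrative.
import numpy as np

def fit_growth_rate(cumulative_cases, ceiling):
    """Estimate the growth rate r from cumulative cases up to the to-date maximum daily count."""
    t = np.arange(len(cumulative_cases))
    p = np.clip(np.asarray(cumulative_cases, dtype=float) / ceiling, 1e-6, 1 - 1e-6)
    y = np.log(p / (1 - p))                      # logit transform
    r, _intercept = np.polyfit(t, y, 1)          # slope of the fitted line = growth rate
    return r

def chain_subepidemics(segments, population, recovery_days=10, horizon_days=90):
    """Run one SIR model per emergency-order segment, seeding each from the previous end state."""
    infected = segments[0]["initial_infected"]
    recovered = 0.0                              # recovered population starts at zero
    curves = []
    for seg in segments:                         # e.g. segments broken on 6 May and 10 June 2020
        r = fit_growth_rate(seg["cumulative_cases"], ceiling=population)
        S, I, R = simulate_sir(population, infected, recovered, r,
                               recovery_days, horizon_days)
        curves.append((S, I, R))
        infected, recovered = I[-1], R[-1]       # chain: end state seeds the next subepidemic
    return curves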

This Chained SIR Model was then used to create the county incidence, health system incidence, and hospital census subreports, which compose the UM-SIM report:

A county incidence subreport was created using FDOH daily positive case data and the Chained SIR Model to approximate the shape and volume of current and future positive cases. The county population estimates were defined by the Miami-Dade population that was obtained from US Census 2018 estimates17 multiplied by the Miami-Dade test positivity rate.

A health system incidence subreport was created using UMHC daily positive case data and the Chained SIR Model to approximate the shape and volume of current and future positive cases. The UMHC population estimates were defined by county population estimates multiplied by UMHC’s baseline county market share (2.6%) to approximate our case load.19

A hospital census subreport was created using UMHC daily inpatient admission positive case data and the Chained SIR Model to approximate the shape and volume of current and future positive inpatient admissions. The UMHC inpatient census population estimates were defined by the UMHC population estimates and our average COVID-19 admission rate of 19.5%. Weighted hospital length of stay (LOS) was calculated every time the report was run as the weighted average of UMHC’s inpatient non-ICU and ICU LOS among admitted patients year to date. Historical inpatient admissions and predicted inpatient admissions (provided by the subreport) are paired with weighted LOS to estimate daily census along the curve.
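A minimal sketch of the arithmetic behind these incidence and census subreports follows: the county curve is scaled by UMHC’s baseline market share (2.6%) and COVID-19 admission rate (19.5%), and daily admissions are converted to an occupied-bed census using the weighted LOS. The simple rule that each admission occupies a bed for exactly the weighted LOS, and all variable names, are illustrative assumptions.

# Illustrative sketch of the hospital census subreport arithmetic; the assumption that
# each admission occupies a bed for exactly the weighted LOS is a simplification.
import numpy as np

def predicted_admissions(county_daily_cases, market_share=0.026, admission_rate=0.195):
    """Daily UMHC COVID-19 admissions implied by the county incidence curve."""
    return np.asarray(county_daily_cases, dtype=float) * market_share * admission_rate

def census_from_admissions(daily_admissions, weighted_los_days):
    """Daily occupied beds, assuming each admission stays for the weighted average LOS."""
    los = int(round(weighted_los_days))
    census = np.zeros(len(daily_admissions))
    for day, admits in enumerate(daily_admissions):
        census[day:day + los] += admits          # patients admitted on `day` occupy beds for `los` days
    return census

# Example with hypothetical county counts and a weighted LOS of 7.5 days
admissions = predicted_admissions([120, 150, 180, 160, 140, 130, 120])
print(census_from_admissions(admissions, weighted_los_days=7.5))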

A surgical optimisation subreport was generated using DES techniques to run 50 simulations that lasted 30 days each. Scenarios were created by bootstrapping 6 months of historical patient data to approximate random patient inputs such as departmental flow, hospital LOS, interarrival time between admissions, interarrival time between surgeries, turnaround time between surgical cases, stay type (emergent, outpatient and inpatient), surgical specialty, any cancer diagnosis and rate of readmission. These relationships are illustrated in figure 1.

Figure 1

Patient discrete event simulation (DES) process flow. ER, emergency room; ICU, intensive care unit; IP, inpatient (non-ICU); OR, operating room.
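As a simplified, hedged illustration of the surgical optimisation idea (not the full DES process flow of figure 1, which includes ER, ICU and OR routing), the sketch below bootstraps historical LOS values, runs 50 replicates of a 30-day horizon for a candidate weekly surgical volume, and returns the largest volume whose simulated peak census stays within available beds. The Poisson arrival assumption, bed threshold and search grid are illustrative.

# Simplified bootstrap/DES-style sketch of the surgical optimisation subreport.
# Assumptions (illustrative): Poisson daily arrivals, elective cases spread over weekdays,
# and a single aggregate bed pool rather than the ER/IP/ICU/OR routing of figure 1.
import numpy as np

rng = np.random.default_rng(2020)

def simulate_once(weekly_surgical_volume, covid_admits_per_day, historical_los_days, days=30):
    """One bootstrap replicate: peak daily census over a 30-day horizon."""
    census = np.zeros(days)
    for day in range(days):
        # COVID-19 / non-elective admissions projected by the census subreport
        for _ in range(rng.poisson(covid_admits_per_day)):
            los = int(rng.choice(historical_los_days))   # bootstrap a historical LOS
            census[day:day + los] += 1
        # Elective surgical admissions, spread across the five weekdays
        if day % 7 < 5:
            for _ in range(rng.poisson(weekly_surgical_volume / 5)):
                los = int(rng.choice(historical_los_days))
                census[day:day + los] += 1
    return census.max()

def recommend_weekly_volume(historical_los_days, covid_admits_per_day, beds=466, n_sims=50):
    """Largest weekly surgical volume whose peak census stays within `beds` in all replicates."""
    for volume in range(400, 0, -10):
        peaks = [simulate_once(volume, covid_admits_per_day, historical_los_days)
                 for _ in range(n_sims)]
        if max(peaks) <= beds:
            return volume
    return 0

# Example with hypothetical historical LOS data and 15 projected COVID-19 admissions per day
print(recommend_weekly_volume(historical_los_days=[2, 3, 3, 4, 5, 7, 10, 14],
                              covid_admits_per_day=15))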

Model fit assessment

We performed an assessment of model fit of the county, health system and hospital census subreports using data generated throughout the pandemic, comparing actual and estimated values. We used the coefficient of determination regression score function (R2), where a value of 1 indicates that the model retrospectively fits the observed number of cases exactly.16
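A minimal sketch of this fit assessment, assuming scikit-learn’s r2_score as the coefficient of determination regression score function (the values below are placeholders, not study data):

# Fit assessment sketch: compare actual daily values with subreport estimates.
# Assumes scikit-learn's r2_score; the arrays below are placeholders, not study data.
from sklearn.metrics import r2_score

actual    = [12, 18, 25, 31, 40, 47, 49]   # observed daily census on matched dates
estimated = [10, 17, 27, 33, 38, 45, 52]   # subreport estimates for the same dates

print(round(r2_score(actual, estimated), 2))  # 1.0 would indicate a perfect retrospective fit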

Cross-model comparison

We compared performance of the hospital census subreport with the Cleveland Clinic Florida’s Beyond Limits Model20 (an SIR model) and the University of Pennsylvania’s CHIME Model V.1.1.5 (a modified SIR model).20 21 Both external models had subreports for health system hospital census. Each model was executed, using the version published at the time, on the run dates shown in table 2 (17 April, 30 June, 7 July and 14 July 2020). Assumptions used in applying these models are summarised in the online supplemental appendix. We constructed a radar chart using Microsoft Excel (Microsoft Corporation, 2020) to depict differences between actual and predicted peak volumes and dates at each run date. Radar charts were chosen because continuous data were not available to perform an R2 analysis. We did not have any publicly available external models to compare against the county incidence or health system incidence subreports.
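The radar-chart metric can be stated compactly as the absolute per cent difference between a model’s predicted peak and the actual census on that model’s predicted peak date; a short sketch with hypothetical numbers:

# Radar-chart metric sketch: absolute per cent difference between the predicted peak
# and the actual census on the predicted peak date. Inputs are hypothetical.
def abs_pct_diff(predicted_peak, actual_on_predicted_peak_date):
    return abs(predicted_peak - actual_on_predicted_peak_date) / actual_on_predicted_peak_date * 100

print(round(abs_pct_diff(110, 128), 1))   # e.g. a predicted peak of 110 vs a hypothetical actual of 128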

Supplemental material

Table 2

Model comparisons

Surgical optimisation assessment

We performed a qualitative assessment of actual surgical cases vs recommended surgical volume to determine our ability to meet recommendations.

Results

Model fit assessment

Year-to-date estimates across all three subreports, as shown in table 1, had R2 values between 0.67 and 0.89 after day 30 from the start of the model (11 March), once sufficient data were available to calibrate the models. The health system incidence subreport’s coefficients of determination for predictions up to day 20 were negative because of an unanticipated spike in cases at the start of the epidemic.

Table 1

Coefficients of determination (R2)

UM-SIM report and models

The UM-SIM report was shared daily with leadership as a slide deck of colour-coded graphs (figure 2). The reports were discussed in weekly COVID-19 command and general staff meetings, biweekly projection emails, biweekly hospital leadership huddles and department meetings, and through informal dissemination to frontline staff. Feedback from operations and medical leadership was collected and used to improve models over time. Improvements and changes to models were tracked through a methodology change history at each iteration as more data became available. Figure 2 and table 1 summarise the three models, plotted according to the fit of final model estimates to historical actuals.

Figure 2

Composite of three University of Miami simulation (UM-SIM) models. (A) County incidence. (B) Health system incidence. (C) Hospital census projections.

Cross-model comparison

We compared the UM-SIM hospital census subreport predictions with estimates projected by the Beyond Limits and CHIME models, summarised in table 2. Each of the subreports was run multiple times (17 April, 30 June, 7 July and 14 July 2020). The actual hospitalised census peaked at 49 on 27 April 2020 in subepidemic 1 and at 150 on 20 July 2020 in subepidemic 2. Figure 3 illustrates, for each of the three models and each run date, the absolute per cent difference between the predicted peak and the actual number of cases on the predicted peak date. UM-SIM’s hospital census subreport run on 7 July predicted a peak census of 110 patients on 14 July 2020 and produced the closest estimate of the true peak of 150 cases on 20 July 2020. Looking across run dates, the UM-SIM hospital census subreport’s volume projections were most accurate up to 7 days after each model run. We also observed that the Beyond Limits Model had the lowest absolute difference between predicted and actual peak dates.

Figure 3

Hospital census model comparison.* *This radar chart illustrates the results of hospital census across the two external models along with UM-SIM model based on the absolute per cent difference in predicted and actual hospital census on each model’s predicted peak date. The dates in the figure represent when the models were run. Predictions plotted closest to the centre represent higher accuracy of the model. UM-SIM, University of Miami simulation.

Surgical optimisation assessment

As shown in table 3, we were able to schedule enough cases to meet the surgical optimisation recommendations only in June and July. Patient demand in May, August and September (as of 15 September) was insufficient to schedule enough cases to meet the recommendations.

Table 3

Weekly surgical volume

Discussion

The COVID-19 pandemic has created unique challenges, not only clinically, but from the perspective of hospital operations. The models described here, which can be adapted to other health systems, provide a mechanism to predict local peaks in cases and inform hospital leadership regarding bed allocation, surgical volumes, staffing and supplies. Our UM-SIM models fit actual data reasonably well and performed better than publicly available models. In our hospital system, the model projections were used to inform three important hospital resource management issues: (1) the health system incidence subreport allowed the supply chain team to evaluate surge volume and adjust personal protective equipment allocation/procurement in real time; (2) the health system incidence subreport allowed anticipation of how many beds to allocate to patients with COVID-19 and how to flex our capacity of negative pressure rooms between ICU and medical/surgical; and (3) the hospital census subreport was used to maximise surgical case volume and minimise the risk of having scheduled more surgeries than available hospital beds.

The most important outcome of the collection of UM-SIM subreports was providing hospital leadership with a roadmap to implement a ‘hospital within a hospital’: one for patients with COVID-19 within a hospital able to care for the regular non-COVID-19 population. We test all patients with a reverse transcriptase PCR assay for COVID-19 on admission and separate patients into COVID-19 and non-COVID-19 floors. All wards can provide negative room pressure, as recommended by the US Centers for Disease Control and Prevention (CDC) in their guidelines for healthcare personnel on COVID-19 infection prevention and control. For elective procedures that have already been scheduled, priority should be given to cases for which a short LOS is anticipated, cases suitable for same-day discharge, or time-sensitive surgeries in which patients are likely to have adverse outcomes from further delays. Scheduling surgeries at atypical times (eg, on weekends) and expediting throughput and efficiency (eg, using a dedicated discharge team) are critical to maintaining adequate operating room and ICU capacity. This ensures flexibility between medical-surgical wards and ICU use. The surgical schedule is modified according to models that predict the number of new patients with COVID-19 who require admission.22 The qualitative accuracy of the subreports made it possible to predict the number of beds needed for COVID-19, while the remaining units within the hospital were used to treat acute care non-COVID-19 patients, each with a separate staffing model and ancillary support teams. Standard operating procedures were created to establish rapid discharge teams that worked 7 days a week to maximise throughput in both the COVID-19 and acute care sections of the hospital. The hospital incidence subreport also helped predict the volume and timing of patients with COVID-19 coming to the emergency department (ED). In response, we created a separate ED holding unit where admitted patients waited pending COVID-19 results. This prevented the hospital from having to board patients in the ED and provided a safe space for patients to wait (thereby reducing in-hospital COVID-19 transmission) while we readied beds in the appropriate sections of the hospital.

On 6 May, in anticipation of governmental ordinances allowing elective surgeries to recommence, the surgical optimisation subreport gave the leadership team confidence to schedule appropriate surgical cases in line with predicted capacity. This ensured delivery of needed surgical care in a timely manner and avoided lost revenue. We also noted that we were not always able to meet the surgical optimisation recommendations because of a lack of patient demand. Reduced demand may reflect patients’ fears of seeking care, job loss making surgery unaffordable, or patients electing to delay surgery themselves. This is an area of risk from a hospital planning perspective because these delays could result in patients presenting later and requiring a higher level of care.23–30

Limitations

As with any model of COVID-19 disease burden, ours are limited by an incomplete understanding of the clinical course of the disease. For example, the UM-SIM model, which is a derivative of the SIR model, assumes no reinfection. Another limitation related to model assumptions is that, due to the pandemic, UMHC’s market share of patients in the Miami-Dade area may have changed when compared with baseline estimates. Additionally, prospective data collection was not done in a systematic manner that would enable robust performance evaluation, since the primary goal of model development and testing was hospital operational planning rather than academic research. Thus, additional data required for optimal comparison with other models (eg, data required for the Beyond Limits model were not collected on 30 June 2020) were not always available. Similarly, we could not quantitatively evaluate our surgical optimisation model’s performance due to reductions in patient demand. Finally, owing to their simplicity, our models do not account for social distancing and differential social mixing patterns.31 32 The CHIME model does account for social distancing; however, it only accounts for a single social distancing policy.21 Despite these limitations, the models proved invaluable in their ability to create rational plans for the health system to cope with the pandemic burden.33 They also helped leadership develop and adjust standard operating procedures and executive strategy.

Conclusion

The continuous utilisation and communication of our UM-SIM models enabled hospital operations personnel to provide appropriate threat-response remediation and support patient care; if we face subsequent waves of COVID-19 in the future, we expect their utility to remain. UM-SIM represents an example of how mechanistic modelling can be used at the health institution level to inform operational needs. This is in contrast with municipal, county and state models, which are able to inform more general public health interventions but have limited utility at the institution level, as pandemics are an aggregate of epidemics, each of which is distinctly local. Predictive modelling in this way can leverage data to support evidence-based decision making in a local context, when uncertainty is high and information is limited.

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

Ethics statements

Ethics approval

Institutional Review Board approval was obtained from the University of Miami (#20200739).

References

Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Twitter @MonishaBhatia3

  • Correction notice This article has been corrected since it was published online. We have added missing middle initials to the author names.

  • Contributors All authors conceptualised and designed the model. PRW performed the data collection, analysis and interpretation. PRW, SP, MB and BS drafted the manuscript. All authors contributed to revisions in preparation for submission and publication.

  • Funding This project was not supported by outside funding. Funding to all authors was provided by the University of Miami Hospital and Clinics to support the UHealth-DART Research Group.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.