Take-home message

Retrospective studies demonstrate that machine learning models can accurately predict sepsis and septic shock onset. Prospective clinical studies at the bedside are needed to assess their effect on patient-relevant outcomes.

Introduction

Sepsis is one of the leading causes of death worldwide [1], with incidence and mortality rates failing to decrease substantially over the last few decades [2, 3]. While the Surviving Sepsis international consensus guidelines recommend starting antimicrobial treatment within 1 h of sepsis onset, given the association between treatment delay and mortality [4,5,6,7,8], early recognition can be difficult due to the clinical complexity of the disease [9, 10] and the heterogeneity of the septic population [11].

In recent years, medicine has witnessed the emergence of machine learning as a novel tool to analyze large amounts of data [12, 13]. Machine learning models to diagnose sepsis ahead of time are typically left or right aligned (Fig. 1) [14]. Left-aligned models predict the onset of sepsis following a fixed point in time, such as admission [15] or the preoperative period [16, 17]. Right-aligned models continuously predict whether sepsis will occur after a defined period of time and are also known as real-time or continuous prediction models. From a clinical perspective, they are particularly useful as they could trigger direct clinical action such as administration of antibiotics. Given their potential for prospective implementation and the large variety of left-aligned models, we focus on right-aligned models in this paper.
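To make the right-aligned setup concrete, the sketch below shows one common way a single training example could be constructed from hourly data; it is purely illustrative and not taken from any of the reviewed papers, and the data structure and helper names are hypothetical.

```python
# Illustrative sketch of right-aligned (real-time) labelling. Assumption:
# `hourly_vitals` is a numeric DataFrame indexed by timestamp.
from typing import Optional

import pandas as pd

def make_example(hourly_vitals: pd.DataFrame, t: pd.Timestamp,
                 feature_window_h: int = 6, prediction_window_h: int = 4,
                 sepsis_onset: Optional[pd.Timestamp] = None) -> dict:
    """Features come from the feature window ending at time t; the label is
    whether sepsis onset falls within the prediction window after t."""
    window = hourly_vitals.loc[t - pd.Timedelta(hours=feature_window_h): t]
    label = (sepsis_onset is not None
             and t < sepsis_onset <= t + pd.Timedelta(hours=prediction_window_h))
    return {"features": window.mean().to_dict(), "label": int(label)}
```

Sliding t forward hour by hour yields the continuous, right-aligned predictions described above.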

Fig. 1
figure 1

Left versus right alignment. Left alignment (top) versus right alignment (bottom). Cases are aligned at the alignment point; data are collected in the feature window; the prediction window is the lead time between the prediction and sepsis onset. Red: sepsis cases; green: non-septic cases

Interpretation of machine learning studies predicting sepsis can be confusing, as some predict sepsis at its onset, which may seem counterintuitive and of little practical use. Their goal, however, is to identify whether a patient fulfills a predefined definition of sepsis, including proxies for infection such as antibiotic use or culture sampling. During development, these proxies are available to the model, while in a test set or a new clinical patient they are unknown. A model is therefore trained to predict whether sepsis is present in a new patient based on all other variables. In clinical practice, recognition of sepsis may be delayed, and timely detection could expedite diagnosis and treatment. While we prefer the terms identification or detection in this context, we will use the term prediction throughout this work for brevity.

Considering the potential of machine learning in sepsis prediction, we set out to perform a systematic review of published, real-time (i.e., right-aligned) machine learning models that predict sepsis, including aggravated forms such as septic shock, in any hospital setting. We hypothesized that these models show excellent performance retrospectively, but that few prospective studies have been carried out. In addition, we aimed to identify the most important factors that determine predictive performance in a meta-analysis.

Methods

This systematic review was conducted in accordance with the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA) statement [18]. The study protocol was registered and approved on the international prospective register of systematic reviews PROSPERO before the start of the study (reference number CRD42019118716).

Search strategy

A comprehensive search was performed in the bibliographic databases PubMed, Embase.com, and Scopus up to September 13, 2019, in collaboration with a medical librarian (LS). Search terms included controlled terms (MeSH in PubMed and Emtree in Embase) as well as free-text terms. The following terms were used (including synonyms and closely related words) as index terms or free-text words: ‘sepsis’ and ‘machine learning’ and ‘prediction’. A search filter was used to limit the results to humans and adults. Only peer-reviewed articles were included. Conference abstracts were used to identify models that were published in full text elsewhere, but were themselves excluded from the review. The full search strategies for all databases can be found in Online Resource 1.

Two review authors (LF and CZ) independently performed the title-abstract and full-text screening. Disagreement was resolved by an independent intensivist (PE) and data scientist (MH). For the full-text screening, reasons for exclusion were recorded per article. References of the identified articles were checked for additional papers. Data were extracted by LF and confirmed by CZ. Discrepancies were revisited by both authors to guarantee database accuracy.

Eligibility criteria and study selection

Studies were eligible if they aimed to predict the onset of sepsis in real time (i.e., right alignment) in adult patients in any hospital setting. Both prospective and retrospective studies were eligible for inclusion. The target condition was the onset of sepsis, severe sepsis, or septic shock. Although the 2016 consensus statement abandoned the term severe sepsis [19], papers published before the consensus statement that targeted severe sepsis were included. The target condition (gold standard) was defined per paper and served to establish model performance (i.e., how well the model predicts sepsis versus non-sepsis cases). We collected these definitions per paper, as well as their components: use of International Classification of Diseases (ICD) codes, SIRS/SOFA criteria, initiation of antibiotics, or sampling of blood cultures.

Supervised machine learning models were the index test of interest, defined as any machine learning classification technique that predicts the onset of the target condition by learning from data presented in a training dataset. Scikit-learn is one of the most widely used packages for building machine learning models in the popular programming language Python; pragmatically, all supervised learning models found in this package were considered machine learning models [20]. A statement that the paper belongs to the machine learning domain, or any of its synonyms, was required for inclusion. An extensive list of commonly used machine learning model names was added to the search to cover papers that failed to mention machine learning in their title or abstract.
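As an illustration of this pragmatic criterion (not part of the review protocol itself), scikit-learn can enumerate the supervised classifiers it ships:

```python
# List the supervised classifiers available in scikit-learn; the review
# pragmatically treated these as the universe of machine learning classifiers.
from sklearn.utils import all_estimators

classifiers = all_estimators(type_filter="classifier")  # list of (name, class) pairs
print(f"{len(classifiers)} classifiers, e.g.", [name for name, _ in classifiers[:5]])
```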

Other items collected from the papers included the year of publication, study design, privacy statements, the origin of the model development and test datasets, use of an online database, description of the study population, the country of origin, the dataset split, the inclusion and exclusion criteria, data granularity, methods for dealing with missing values, size of the database, number of patients with the outcome, the number of hours the model predicted ahead of time, the features used in the model, whether cross-validation was performed and its number of folds, the length of the sliding window (i.e., the hours of data continuously fed to the model), and the type of machine learning model.

Quality of evidence and risk of bias

As yet, there is no widely accepted checklist for assessing the quality of diagnostic machine learning papers in a medical setting. This paper used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology to assess the quality of evidence per hospital setting for all studies reporting the area under the curve of the receiver operating characteristic (AUROC) as their performance metric [21]. In line with the GRADE guidelines for diagnostic test accuracy, we included the domains risk of bias (limitations), comparability of patients, setting, and outcome across studies (indirectness of comparisons), and imprecision of the results. As we did not compute point estimates for multiple studies combined, judgment of inconsistency was omitted. One level of evidence was deducted for each domain with serious concerns or high risk of bias; no factors increased the level of evidence (see Online Resource 2). The overall level of evidence is expressed in four categories (high, moderate, low, very low).

To evaluate risk of bias, the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria [22] were combined with an adapted version of the Joanna Briggs Institute Critical Appraisal checklist for analytical cross-sectional studies [23]. The latter has been used in previous work to assess machine learning papers [24]. Domains included patient selection, index test, reference standard, flow and timing, and data management. In line with the recommendations of the QUADAS-2 guidelines, the questions per domain were tailored to this paper and can be found in Online Resource 3. Two review authors (LF and CZ) independently piloted the questions to ascertain between-reviewer agreement. If any question within a domain was scored at risk of bias, that domain was scored as high risk of bias. A paper with at least one domain at high risk of bias was scored as high risk of bias overall; a paper with no high-risk domains but at least one domain scored as unclear was scored as unclear risk of bias overall.
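A minimal sketch of this aggregation rule, assuming each paper is represented as a mapping from domain name to its per-question scores (the data structure is hypothetical):

```python
# Hypothetical input: {"patient selection": ["low", "high"], "index test": ["low"], ...}
def overall_risk_of_bias(domains: dict) -> str:
    """Aggregate per-question scores ("low"/"unclear"/"high") to a per-paper judgment."""
    domain_scores = [
        "high" if "high" in answers else ("unclear" if "unclear" in answers else "low")
        for answers in domains.values()
    ]
    if "high" in domain_scores:      # any high-risk domain dominates
        return "high"
    if "unclear" in domain_scores:   # a single unclear domain suffices
        return "unclear"
    return "low"
```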

Performance metric and meta-analysis

Substantial heterogeneity was observed between studies regarding the setting, index test, and outcome. We therefore refrained from computing a point estimate of overall model performance. However, the large number of studies and models did allow for an analysis of the contribution of study characteristics and model parameters to model performance. Because multiple models were often reported per paper, their performance estimates are correlated; a linear random effects model with a paper-specific random effect was therefore built to account for correlations between models published in the same paper. For clarity, we refer to all study characteristics that served as input to this analysis as covariates, while the variables used to develop the presented models are referred to as features.

The machine learning field distinguishes numerous metrics to gauge model performance, none of which gives a complete picture. The AUROC, a summary measure of sensitivity and specificity, is customary in the field of diagnostic test accuracy. Since 24 out of 28 papers (86%) reported the AUROC, it was pragmatically selected as the main performance metric. Other metrics were collected, but were unsystematically reported. As AUROCs are constrained to the interval 0.5 to 1.0, they were transformed to a continuous, unbounded scale by taking the logit of \( \left( \frac{\text{AUROC}}{0.5} - 1 \right) \). Because only 43 models (33%) reported confidence intervals, within-study variability was omitted from the analysis. For the studies that did report confidence intervals, one-sided AUROC confidence intervals did not exceed 0.02.
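A minimal sketch of this transformation and of the anti-logit back-transformation used later in the Results, written in Python for illustration although the original analyses were carried out in R:

```python
import numpy as np

def auroc_to_logit(auroc):
    """Map AUROC in (0.5, 1) onto the real line: logit(AUROC / 0.5 - 1)."""
    p = auroc / 0.5 - 1.0          # rescale (0.5, 1) -> (0, 1)
    return np.log(p / (1.0 - p))   # logit

def logit_to_auroc(x):
    """Inverse transformation: anti-logit, then rescale back to (0.5, 1)."""
    p = 1.0 / (1.0 + np.exp(-x))
    return 0.5 * (p + 1.0)

# Example: an AUROC of 0.75 maps to 0 on the transformed scale and back.
assert abs(logit_to_auroc(auroc_to_logit(0.75)) - 0.75) < 1e-12
```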

All items collected from the presented studies were added as covariates to the random effects model, including the components of the target condition. Missing values in continuous covariates were imputed with the column median. To account for the high ratio of covariates to number of models, some of the features identified in the models were grouped (lab values, blood gas values, co-morbidities, department information), only covariates with at least 10% variance in their values were included, and models that aimed to predict combined outcomes were removed as they were too scarce in the database. One outlier reference model was excluded [25].
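A minimal sketch of these preprocessing steps, assuming a covariate table with one row per model (column names are hypothetical; the low-variance filter shown is one possible reading of the 10% criterion):

```python
import pandas as pd

def prepare_covariates(X: pd.DataFrame, min_variation: float = 0.10) -> pd.DataFrame:
    """Median-impute continuous covariates and drop near-constant covariates."""
    X = X.copy()
    num_cols = X.select_dtypes("number").columns
    X[num_cols] = X[num_cols].fillna(X[num_cols].median())   # column-median imputation
    # Keep covariates whose most frequent value covers at most 90% of models
    keep = [c for c in X.columns
            if X[c].value_counts(normalize=True).max() <= 1 - min_variation]
    return X[keep]
```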

All covariates were first tested in a univariate model for a significant contribution to the transformed AUROC using a likelihood ratio test against an empty model containing only the intercept and the variance components. All significant covariates (p < 0.05) were then considered for a multivariate model. Through backward Akaike information criterion (AIC) selection, a parsimonious model was selected. Covariate coefficients, standard errors, and p values are reported. All analyses were carried out in R [26].
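A minimal sketch of the univariate screening step, shown in Python with statsmodels for illustration (the original analyses were carried out in R); the data frame with one row per model, its `logit_auroc` outcome column, and the `paper_id` grouping column are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def univariate_pvalue(models: pd.DataFrame, covariate: str) -> float:
    """Likelihood ratio test of one covariate against the intercept-only
    model, both with a paper-specific random intercept."""
    null = smf.mixedlm("logit_auroc ~ 1", models,
                       groups=models["paper_id"]).fit(reml=False)
    full = smf.mixedlm(f"logit_auroc ~ {covariate}", models,
                       groups=models["paper_id"]).fit(reml=False)
    lr = 2 * (full.llf - null.llf)   # likelihood ratio statistic
    return float(chi2.sf(lr, df=1))  # one additional fixed-effect parameter

# Covariates with p < 0.05 would then enter a multivariate mixed model,
# which is subsequently reduced by backward AIC selection.
```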

Results

Study selection

After removing duplicates and checking references for additional papers, a total of 2,684 papers were screened. Of these, 130 papers were read in full text, resulting in 28 papers that met the inclusion criteria for synthesis. Reasons for exclusion at this stage were recorded and can be found in the flow diagram in Fig. 2. From these papers, 130 models were retrieved (range 1–16 models per paper). All studies reported retrospective diagnostic test accuracy; in addition, models were prospectively validated in two papers (7%) and clinically implemented in three papers (11%), as depicted in Fig. 3. Out of all papers, 24 reported the AUROC as their performance metric.

Fig. 2
figure 2

Flow diagram. Papers identified in the databases, screened on title/abstract, read in full text, and included in the synthesis. Reasons for exclusion are listed

Fig. 3
figure 3

Prospective versus retrospective models. Percentages specified per paper and for all models

Study characteristics

Most studies were carried out in the ICU (n = 15; 54%), followed by hospital wards (n = 7; 25%) and the emergency department (ED; n = 4; 14%). Two studies, by Barton et al. and Mao et al., examined all of these settings [25, 27]. In the ICU, most studies modeled sepsis as their target condition (n = 10; 67%), compared to severe sepsis (n = 3; 20%) or septic shock (n = 2; 13%). This contrasts with the in-hospital studies, where almost half of the papers aimed to predict septic shock (n = 3; 43%). Figure 4 gives an overview of key characteristics per study.

Fig. 4
figure 4

Overview of retrospective diagnostic test accuracy studies. Papers are binned per hospital setting, data are sorted in ascending order of AUROC values. AUROC ranges are displayed per paper. AUROC area under the curve of the receiver operating characteristic, SVM support vector machines, GLM generalized linear model, NB Naive Bayes, EM ensemble methods, NNM neural network model, DT decision trees, PHM proportional hazards model, LSTM long short term memory, Hrs bef. onset hours before onset * DT, EM, GLM, LSTM, NB, NNM, SVM

Retrospective diagnostic test accuracy varied per setting and target condition. For studies that reported AUROCs, the best sepsis predictions ranged from 0.87 to 0.97 in the emergency department, 0.96 to 0.98 in-hospital, and 0.68 to 0.99 in the intensive care unit. The best septic shock predictions ranged from 0.86 to 0.94 in the in-hospital setting and from 0.83 to 0.96 in the ICU. Other outcome measures such as positive predictive value (n = 11; 39%), accuracy (n = 10; 36%), and negative predictive value (n = 6; 21%) were unsystematically reported. The minimum, mean, and maximum AUROC values with relevant study characteristics are visualized per paper in Fig. 4.

Prospective studies included two clinical validation studies (ED and in-hospital) and three interventional studies (in-hospital and ICU). One clinical validation study in the ED showed that the machine learning model outperformed manual scoring by nurses and the SIRS criteria in identifying severe sepsis and septic shock [28]; the other study made no comparison [29]. The interventional studies comprised two pre-post implementation studies (in-hospital) [30, 31] and one ICU randomized controlled trial [32]. All examined mortality and hospital length of stay, but results were mixed, as shown in Table 1.

Table 1 Prospective models

For the target condition, different definitions of sepsis, severe sepsis, and septic shock were used. Definitions and their components are reported in Table 2. Definitions that had been used before are named after the first paper in which they appeared. Calvert et al. [33] were among the first to study machine learning to identify sepsis in an ICU population, and Seymour et al. [34] assessed the sepsis-3 criteria. Nine studies (32%) employed a definition of sepsis that had been used previously.

Table 2 Target condition definitions per paper per setting

A breakdown of the paper and model characteristics per setting can be found in Table 3. The number of features used in the models ranged from 2 to 49, and the most common features are shown in Fig. 5. Thirty-six percent of papers used MIMIC data; the others used non-freely available hospital datasets. Three papers using their own hospital data reported that inquiries for data sharing were possible [28, 32, 35], while two papers reported that data would not be shared [25, 31]. None of the studies mentioned that their code was released, and only one paper reported adhering to a reporting standard [36].

Table 3 Description of the data per paper and per model
Fig. 5
figure 5

Features used in the papers. Features are grouped by type. ESR erythrocyte sedimentation rate, HR heart rate, MAP mean arterial pressure

Quality of evidence and risk of bias

In accordance with the publication guidelines of the QUADAS-2 criteria, the risk of bias results for the retrospective diagnostic test accuracy studies are shown in Table 4. Nine out of 28 papers (32%) were scored as unclear risk of bias; all other papers were scored as high risk of bias. Papers scored a high risk of bias for failing to describe their study population (patient selection), not reporting their data split or cross-validation strategies (index test), or failing to specify ethical approval (data management). As there is no gold standard for diagnosing sepsis, the variety in definitions may increase the risk of bias of the models; all papers were therefore scored as unclear risk of bias concerning the reference standard.

Table 4 QUADAS-2 risk of bias assessment per setting

The GRADE evidence profile can be found in Table 5. Results are shown when at least two studies reported the same target condition. All study aggregates were considered to be at high risk of bias; only five studies were considered at unclear risk of bias (included in brackets in Table 5). One level of evidence was deducted for high risk of bias and one level for indirectness of the outcome. Consequently, the quality of evidence for each of the settings was scored as low. Additionally, the outcome column distinguishes AUROC values for high and unclear risk of bias studies. High risk of bias studies consistently reported the highest AUROC values, although ranges were wide and relatively few unclear risk of bias studies were identified.

Table 5 GRADE evidence profile for the area under the receiver operating characteristic curve (AUROC)

Meta-analysis

A total of 111 models were included in the meta-analysis after removal of an outlier (n = 1; 1%), combined outcomes (n = 3; 2%), and models without an AUROC outcome measure (n = 15; 12%). Initially, 103 covariates were included in the model. To reduce the ratio of covariates to the number of models, features used in the models were grouped (n = 41; 40%), and covariates with low variance (n = 24; 23%) and perfectly collinear covariates (n = 1; 1%) were removed. This amounted to a total of 39 covariates in the meta-analysis random effects model.

Univariate and multivariate random effects model results are shown in Table 6. Coefficients are on the logit-transformed AUROC scale and represent the expected mean change when the sepsis prediction model exhibited the respective characteristic (e.g., used lab values); they can be back-transformed to the AUROC scale by taking the anti-logit. Univariate analysis of the 39 covariates showed that heart rate, respiratory rate, temperature, lab and arterial blood gas values, and neural networks (relative to ensemble methods) contributed positively to the AUROC (range 0.344–0.835). Only temperature, lab values, and model type remained in the multivariate model. In contrast, defining sepsis using the definition coined by Seymour et al., using SOFA scores in the target condition definition, or using any model type other than ensemble methods or neural networks negatively impacted the AUROC in the univariate analysis (range 0.168–1.039). The relationship between the AUROC and the number of hours before onset at which the prediction is made is visualized for three models in Fig. 6.
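To illustrate the back-transformation, a hypothetical worked example (the intercept and coefficient below are invented for illustration and are not taken from Table 6):

```python
import numpy as np

def logit_to_auroc(x: float) -> float:
    """Anti-logit, then rescale from (0, 1) back to the AUROC range (0.5, 1)."""
    p = 1.0 / (1.0 + np.exp(-x))
    return 0.5 * (p + 1.0)

intercept = 2.0      # assumed baseline on the logit-transformed AUROC scale
beta_hours = -0.05   # assumed change per additional hour before sepsis onset

for hours_before_onset in (0, 4, 8, 12):
    auroc = logit_to_auroc(intercept + beta_hours * hours_before_onset)
    print(hours_before_onset, round(float(auroc), 3))
# A negative coefficient corresponds to a gradually decreasing expected AUROC
# as predictions are made further ahead of sepsis onset, as in Fig. 6.
```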

Table 6 Univariate and multivariate outcomes
Fig. 6
figure 6

Relative effect of hours before sepsis onset on AUROC for different models. Expected change in AUROC for three models at different prediction windows (hours before sepsis onset)

Discussion

This is the first study to systematically review the use of machine learning to predict sepsis in the intensive care unit, hospital wards, and emergency department. Twenty-eight papers reporting 130 machine learning models were included, showing excellent performance on retrospective data. The most predictive covariates in these models are clinically recognized for their importance in sepsis detection. Assessment of overall pooled performance, however, is hampered by varying sepsis definitions across papers. Clinical implementation studies that demonstrate improvement in patient outcomes using machine learning are scarce.

Performance and clinical relevance of individual models

Clinically, accurate identification of sepsis and prediction of patients at risk of developing sepsis is essential to improve treatment [37]. Current approaches to identify septic patients have centered around biomarkers and (automated) clinical decision rules such as the SIRS and (q)SOFA criteria [38, 39]. However, concerns have been raised regarding the poor sensitivity of the qSOFA, possibly leading to delays in sepsis identification [40]. The high sensitivity of the SIRS criteria, on the other hand, could lead to overdiagnosis of sepsis, resulting in inappropriate antibiotic use [41]. Additionally, most of the investigated biomarkers have failed to show discriminative power or clinical relevance [42, 43]. The presented machine learning models provide a novel approach to continuously identify sepsis ahead of time with excellent individual performance. These models present an alternative to the widely used SIRS and SOFA criteria, and clinicians may be faced with them in the near future. It is therefore important that clinicians understand the strengths and limitations of these models.

Heterogeneity and pooled performance

Ideally, AUROC values across all presented models would be pooled to estimate overall machine learning performance. However, considerable heterogeneity in the sepsis definitions between studies hampers such a computation. The lack of a gold standard for sepsis allows a variety of definitions to be adopted. Many studies use ICD coding, which may be an unreliable instrument to identify septic patients [44, 45]. Arguably, all papers should use the most recent consensus definition [19]. Only a minority of papers used the latest sepsis-3 criteria, and within these studies we found differences in the way the sepsis onset time was defined. Due to these varying definitions, we refrained from computing the overall performance of machine learning models and consequently judged the quality of evidence as low for each of the hospital settings. Nonetheless, each of the definitions is a clinically relevant entity that might justify early antibiotic and supportive treatment.

Additionally, heterogeneity was observed in the machine learning models, the preprocessing of the data, and the hospital setting. While this further limits pooling of overall performance, it does allow for a meta-analysis of the models to identify the most important factors that contribute to model performance. The most predictive covariates from our meta-analysis, such as heart rate and temperature, are recognized for their clinical importance in sepsis detection. Variables that are part of the SIRS and SOFA criteria were expected to correlate with model performance, since they are frequently part of the sepsis definitions. Interestingly, some other factors that are not part of these criteria, such as arterial blood gas variables, were also strong univariate predictors. Lab values are often not considered in early warning scores [46], but our results imply that these scores may miss predictive information.

Clinical model performance

It is important to investigate whether improved sepsis predictions lead to better clinical outcomes for patients. We distinguish prospective clinical validation studies, which assess model performance in a clinical setting, from interventional studies, in which the effect of exposing healthcare professionals to model predictions on patient outcomes is investigated. Only one clinical validation study compared its model with current practice and showed that it outperformed nurse triaging and the SIRS criteria in the emergency room [47].

Interventional studies using traditional SIRS and SOFA alarm systems have not shown significant changes in clinical outcomes [48,49,50]. Only three interventional studies were identified in this review; they were carried out in different clinical settings and show mixed results [31, 32, 51]. None of these studies, however, investigated a direct clinical action coupled to the sepsis prediction, but left treatment decisions at the discretion of the clinician. Prior to sepsis onset, clinically overt signs of sepsis may be subtle or absent, and false positive alerts in these studies may create alarm fatigue. Nonetheless, as of yet, there is no compelling evidence that machine learning predictions lead to better patient outcomes in sepsis.

Future directions and academic contribution

An important message of this paper is that systematic reporting is essential for reliable interpretation and aggregation of results. Almost none of the papers mentioned using a reporting standard, and very few papers reported that they accept data inquiries [32, 35, 47]. In addition, high risk of bias studies showed the highest AUROC values overall. We encourage authors to strive to share code and data in compliance with relevant regulations. This would allow for easy data aggregation, model retraining, and comparison as our insight into sepsis definitions evolves.

It should be noted that many models were developed on similar populations. Specifically, numerous models were tested on the freely accessible MIMIC database [27, 33, 52,53,54,55,56,57,58,59], and all models were developed in the United States. This trend risks promoting inequality in healthcare, as no models were developed or validated in middle- or low-income countries. We encourage developing models on data from different centers and countries to ensure generalizability.

Finally, future research is needed to determine effective strategies for integrating these models into the clinical workflow and to assess their effect on relevant clinical outcomes. Interestingly, most models use only a small subset of the wealth of data available to clinicians, which may present an opportunity for future models to further increase predictive performance. Lastly, baseline characteristics may lead to clinically relevant heterogeneity in sepsis trials [11]. To administer treatment to more homogeneous patient groups, the accurate identification of pre-specified populations by machine learning models could be investigated.

Strengths and limitations

Several strengths can be identified in this study. First, this is the first study to systematically list all research in this area. It combines clinical and more technical work, assessing performance in a clinical light while scrutinizing studies through both a technical and a clinical lens. Additionally, the search yielded a large number of models, which permitted comparison and meta-analysis of the contribution of model components to performance.

This study also has limitations. First, the AUROC was pragmatically chosen as a summary measure, although it may underperform in the setting of imbalanced datasets [60]. Nonetheless, it was the most frequently reported summary measure; other measures would have limited the possibility of comparing performance across studies. Similarly, contingency tables could not be constructed for the majority of papers, as the necessary data were too infrequently reported, and very few papers reported measures of uncertainty such as confidence intervals or standard deviations. In line with a previous machine learning review on imaging [61], we believe that the reporting of these studies has to be improved to guarantee reliable interpretation, and we encourage guideline development in the areas of intensive care and emergency medicine.

Conclusion

This systematic review and meta-analysis show that machine learning models can accurately predict sepsis onset with good discrimination in retrospective cohorts. Important factors associated with model performance include the use of variables that are well recognized for their clinical importance in sepsis. Even though individual models tend to outperform traditional scoring tools, assessment of their pooled performance is limited by the heterogeneity of studies. This calls for the development of reporting guidelines for machine learning in intensive care medicine. Clinical implementation of models is currently scarce and is therefore urgently needed across diverse patient populations to determine clinical impact, ensure generalizability, and bridge the gap between bytes and bedside.