eLetters

  • Excellent articulation of a real problem, but unsure if the proposed solution would work.

    The authors are correct that there is a real problem with the lack of availability of clinical risk prediction models (CRPMs) and other clinical digital tools at the 'coalface'. However, there may be many other potential solutions, using more orthodox methods than a novel blockchain-based deployment marketplace.

    It is clear that academic, clinical, managerial, and industry incentives are misaligned, and this is why CRPMs don't readily see deployment to places where clinical end-users can easily obtain and use them. But a blockchain-based solution is hard to envisage when more ordinary deployment methods do not appear to have been tried with sufficient enthusiasm. The article suggests that blockchain 'might' be part of the solution, but the argument would be more convincing were it backed up by an open-source proof of concept or a demonstration of such a system in action. It seems unlikely to me that EHR vendors will willingly integrate external features into their systems that are totally reliant on an unproven, fluid 'marketplace' of smart contract execution, with no guarantee of uptime, future cost, long-term reliability, or even continued existence.

    Additionally, widening the discussion of these deployment incentives to include AI-based clinical risk models blurs the picture, because these two types of CRPM are very different. The level of clinical trust in such experimental AI models is low, and does not favourably...

  • Better prediction of outcomes to aid end-of-life decision making

    Maley and colleagues provide a provocative argument about how we make decisions at the end of life. The premise was relatively clear: predictive models have a significant limitation in that they are bound by the decisions made in previous 'similar' cases. As such, the true potential outcomes are unknown. Indeed, would the authors consider the possibility of a 'self-fulfilling prophecy' scenario, whereby previous decisions - particularly where the decision for comfort measures was taken - reinforce future decisions?
    The article was fascinating and well written; however, the proposed mechanism for implementing the causal effect modelling was not clear, at least to the non-statistically informed clinician. The idea of running predictive hypothetical clinical trials is compelling. The seeming potential to provide greater clarity for the most difficult decision in medicine is intriguing, and this clinician, certainly, would be most appreciative of further exploration and explanation of the methods involved. Continuing a real-life scenario and discussing, in future studies, how close we are to such modelling being applicable to real cases would also be much appreciated.
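    As a purely illustrative aside, the 'self-fulfilling prophecy' concern can be made concrete in a few lines of code. The sketch below simulates a hypothetical cohort in which a model's risk score drives comfort-care decisions that in turn shape the observed outcomes; every number and variable name is an assumption for illustration, not the authors' model:

    ```python
    # Hypothetical simulation of a "self-fulfilling prophecy": a model
    # trained on outcomes partly caused by earlier model-driven decisions
    # will appear to "confirm" its own high-risk predictions.
    import random

    random.seed(0)

    def simulate(threshold, n=10_000):
        records = []
        for _ in range(n):
            true_risk = random.random()                   # unobserved physiology
            predicted = true_risk + random.gauss(0, 0.1)  # imperfect model score
            comfort_care = predicted > threshold          # decision follows model
            # Comfort measures make death far more likely to be observed,
            # regardless of what treatment might have achieved.
            died = random.random() < (0.95 if comfort_care else true_risk * 0.5)
            records.append((predicted, died))
        return records

    data = simulate(threshold=0.7)
    high = [died for predicted, died in data if predicted > 0.7]
    print(f"Observed mortality above threshold: {sum(high) / len(high):.2%}")
    # Retraining on these observed outcomes would reinforce the decision
    # pattern - the feedback loop described above.
    ```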

  • Clarifying CDC’s Self-Checker RE: Performance of national COVID-19 ‘symptom checkers’: a comparative case simulation study

    Dear Editor,

    “Performance of national COVID-19 ‘symptom checkers’: a comparative case simulation study” omitted one of the formative aspects of the CDC Coronavirus Symptom Checker (CDC Self-Checker): that it regularly undergoes upgrades to the content and recommendations in its algorithm. As such, whichever version of the CDC Self-Checker was used in this analysis would have reflected vetted CDC guidance at the time it was accessed. The inherent periodic updates of the CDC Self-Checker ensure that it reflects new information and current guidance as those become available.

    Additionally, the omission of an access date indicating when the authors accessed the CDC Self-Checker for their simulation precludes verification of the version used and of the results described in the article. Between this article’s first preprint posting on November 30, 2020, and the date of this letter (5/11/21), there have been six revisions to the CDC Self-Checker.

    Finally, the CDC Self-Checker does recommend that older adults with established COVID-19 symptoms, such as fever, seek medical assistance as soon as possible, contrary to the article’s conclusions.
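    As a purely illustrative aside, the sketch below shows generic rule-based symptom triage of the kind described here; it is a toy example, not the CDC Self-Checker's actual (and regularly updated) algorithm:

    ```python
    # Toy illustration of rule-based symptom triage. This is NOT the CDC
    # Self-Checker's algorithm; rules, ages, and wording are assumptions.
    def triage(age: int, symptoms: set[str]) -> str:
        emergency = {"trouble breathing", "persistent chest pain"}
        if symptoms & emergency:
            return "Seek emergency care now."
        if age >= 65 and "fever" in symptoms:
            return "Contact a medical provider as soon as possible."
        if symptoms:
            return "Stay home and monitor your symptoms."
        return "No action needed at this time."

    # An older adult with fever is routed to prompt medical attention.
    print(triage(age=72, symptoms={"fever", "cough"}))
    ```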

    One of the unique benefits of the CDC Self-Checker is that rather than being a simple Q&A tool, it goes a step further to provide customized recommendations based on personalized and detailed scenarios. The strengths of the CDC Self-Checker provide an innovative and effective complement to health systems wh...

  • Concern regarding the recently published sepsis prediction model

    To the editor:

    We read the recent study from Burdick and colleagues with great interest, as few machine learning models have been assessed prospectively, and even fewer have shown benefits on clinically relevant outcomes such as length of stay or mortality.1 However, we are concerned about some of the reported data. The model they deploy, termed InSight across various publications, predicts the development of sepsis based on the systemic inflammatory response syndrome (SIRS) criteria, which have been superseded by the Sepsis-3 guidelines.2,3 Their model, a gradient-boosted classification tree built with XGBoost, is trained on retrospective vital-sign data and predicts whether a patient will meet SIRS criteria in the next four hours. In the present study they define the outcome cohort as “sepsis-related” by including any patient who met 2/4 of the SIRS criteria. Using this definition, they report a pre-intervention in-hospital mortality of 3.86%.4 However, this is far below the expected sepsis-related mortality, regardless of the criteria used. Furthermore, by reporting outcomes only in patients who met ≥2 SIRS criteria, rather than in everyone screened by the algorithm, the relative harm done to patients for whom this model falsely predicted sepsis is not factored into the assessment of the algorithm’s value. They report an estimate of the cost reduction, but this again does not account for the costs that could be associated with the adverse effects of unnecessarily tre...
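    For context, the sketch below illustrates the general model class under discussion: a gradient-boosted tree classifier trained on windowed vital-sign features to predict whether SIRS criteria will be met within four hours. The features, data, and hyperparameters are placeholders, not InSight's actual pipeline:

    ```python
    # Placeholder sketch of a gradient-boosted classifier of the kind
    # discussed: random stand-in data, illustrative features only.
    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)

    # Hypothetical look-back features: heart rate, respiratory rate,
    # temperature, and WBC count sampled at three past time points.
    X = rng.normal(size=(1000, 12))
    # Label: 1 if the patient met >= 2/4 SIRS criteria four hours later.
    y = rng.integers(0, 2, size=1000)

    model = XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
    model.fit(X, y)
    print(model.predict_proba(X[:5])[:, 1])  # predicted probability of SIRS
    ```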

  • Quicker to run the test than to correct the errors found

    The Smoking Test was run on our practice three years after the initial assessment was conducted, and some learning can be shared:

    - The test was quick to run, even though the original reports had not been saved. In consequence, three new reports were created, one for each type of smoking status considered, plus one merging records with all three entries, followed by collecting in a spreadsheet the three codes with their last entry date. The software allows specific entries to be collected and exported without needing to open the records. Using simple formulas in Excel (subtracting two dates to see whether the difference in days was negative or positive), it was a matter of minutes to find the number of errors again, as sketched below.
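    A minimal sketch of the spreadsheet check described above, translated into Python; the file name and column names are hypothetical assumptions, not the actual report layout:

    ```python
    # For each patient, compare the last-entry dates of the smoking-status
    # codes and flag records where a "never smoked" entry post-dates a
    # "smoker" entry - the same logic as subtracting two date cells in Excel.
    import csv
    from datetime import date

    errors = 0
    with open("smoking_report.csv", newline="") as f:      # hypothetical export
        for row in csv.DictReader(f):
            never = date.fromisoformat(row["never_smoked_last_entry"])
            smoker = date.fromisoformat(row["smoker_last_entry"])
            if (never - smoker).days > 0:  # positive difference = contradiction
                errors += 1
    print(f"{errors} contradictory smoking-status records found")
    ```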

    - Trying to correct entries took a lot of time and was not completed. It was considered not necessarily beneficial, especially when about half of the entries assessed were not ours but came from associated organisations. We share data to improve patient care, but each organisation is responsible for the quality of its own entries; one can only request that they mark erroneous entries as such. This means that if any organisation is similarly sharing data entry with others, the reports need to be amended to specify which organisation's data is to be included in the searches. That should fix the location problem encountered.

    - Looking at the data, one also had to consider a change in the "Never smoked" report to fix the problem of currency. In our sample h...

  • Clinical governance issues with telemedicine for periocular lesion assessment

    We read with interest the published study “Accuracy of periocular lesion assessment using telemedicine” [1]. The authors correctly identify a paucity of evidence for image-based triage or management of minor lid complaints, which has become an attractive option for reducing footfall in the oculoplastics clinic in the current situation. We have had a similar experience to the authors in translating our “one-stop” minor lids clinic into an image-based initial virtual assessment. We performed a pilot study (paper currently under peer review) to ascertain whether, using images and a brief history, we could achieve diagnostic accuracy demonstrating agreement between face-to-face and remote review similar to that in this published study.
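    For illustration, agreement between face-to-face and remote diagnoses of this kind is commonly quantified with Cohen's kappa; the labels below are placeholders, not our study data:

    ```python
    # Illustrative inter-modality agreement between face-to-face and
    # image-based diagnoses, using Cohen's kappa (placeholder labels).
    from sklearn.metrics import cohen_kappa_score

    face_to_face = ["chalazion", "cyst", "bcc", "chalazion", "cyst"]
    virtual      = ["chalazion", "cyst", "bcc", "cyst",      "cyst"]

    kappa = cohen_kappa_score(face_to_face, virtual)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 would be perfect agreement
    ```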

    Although demonstrating diagnostic accuracy is important in the development of this service, ensuring safety is a critical clinical governance area and should be addressed while rolling out such a service. Skin cancer referrals were excluded from this published study. While skin cancers are not routinely seen in a minor lids clinic, they are occasionally diagnosed in this setting, especially if the referral does not contain adequate information. In our cohort of 97 patients seen in the ‘minor operations’ clinic, 8 malignancies (basal cell carcinomas) were identified. These were all flagged up by the virtual assessment. The fact that none of the malignant lesions were missed in our cohort is very encouraging in terms of safety of...

  • More is needed to fix the Problem Lists

    If the algorithms presented by Hier and Pearon [1] were widely implemented, rather than the current list order, usually chronological, the clinician could end up with a list based on organ system. Is this going to remedy the misuse of Problem Lists?

    Problem Orientated Medical Records do not have enough Problems created, “because the creation tools are so inefficient” [2] and because usage by clinicians is highly variable and short of ideal [3], among other causes.

    There is a need to look deeper for solutions. Indeed, Problem Lists are messy because there is neither time nor training to keep them up to date [4] and because there is no agreement on what their content should be [4], so clinicians could consider nearly 30% of the items they see irrelevant [5]. In consequence, more significant changes are needed. For example:
    - Software automation to manage Problem List content. It could improve the readability of the Active Problem List and improve its quality by reducing duplications and related Problems, alerting to promote or discourage additions to the Problem List. Some of this could be based on algorithms like those described by Hier and Pearon [1], but there is much more, such as inactivating a pregnancy or a surgical procedure after a predetermined period of time (see the sketch after this list).
    - An easier-to-use nomenclature and code entry process to engage more clinicians in using the Problem List, so that precious time is not wasted looking for the right code, for example.
    - Organ-syst...
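    A minimal sketch of the time-based auto-inactivation idea mentioned in the first point above; problem names and durations are illustrative assumptions only:

    ```python
    # Problems with a known natural duration (e.g. pregnancy, a surgical
    # procedure) are dropped from the active list after a set period.
    from datetime import date, timedelta

    AUTO_INACTIVATE_AFTER = {          # illustrative durations only
        "pregnancy": timedelta(days=300),
        "appendicectomy": timedelta(days=90),
    }

    def active_problems(problems, today):
        keep = []
        for p in problems:
            limit = AUTO_INACTIVATE_AFTER.get(p["name"])
            if limit and today - p["recorded"] > limit:
                continue               # auto-inactivated: past expected duration
            keep.append(p)
        return keep

    problem_list = [
        {"name": "pregnancy", "recorded": date(2020, 1, 10)},
        {"name": "hypertension", "recorded": date(2019, 6, 2)},
    ]
    print(active_problems(problem_list, today=date(2021, 5, 1)))
    ```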
