The authors are correct that there is a definite problem with the availability of clinical risk prediction models (CRPMs) and other clinical digital tools at the 'coalface'. However, there may be many other potential solutions to this problem, using more orthodox methods than a novel blockchain-based deployment marketplace.
It is clear that academic, clinical, managerial, and industry incentives are misaligned, and this is why CRPMs don't readily see deployment to places where clinical end-users can easily obtain and use them. But a blockchain-based solution is hard to envisage when more ordinary deployment methods have not, seemingly, been tried with sufficient enthusiasm. The article suggests that blockchain 'might' be part of the solution, but the argument would be more convincing were it backed up by an open-source proof of concept or a demonstration of such a system in action. It seems unlikely to me that EHR vendors will willingly integrate external features into their systems that are totally reliant on an unproven, fluid 'marketplace' of smart contract execution, with no guarantee of uptime, future cost, long-term reliability, or even continued existence.
Additionally, widening the discussion of these deployment incentives to include AI-based clinical risk models blurs the picture, because these two types of CRPM are very different. Clinical trust in such experimental AI models is low and compares unfavourably with the high level of trust we place in traditional (non-AI) clinical risk models, which are simple, statistics-based, deterministic, well understood, reproducible and evidence-based. AI-based models are almost always proprietary, resulting in a high risk of bias in their clinical evaluations and a low level of clinical trust.
We don't need a blockchain marketplace: if existing incentives are insufficient for the incumbents, then we as clinicians must intervene. We need to develop these CRPMs as open-source software and deploy them commercially under the aegis of a suitable clinically-trusted organisation, such as (but not limited to) the medical and surgical Royal Colleges. A real-life, working, replicable deployment model is that of the RCPCH Digital Growth Charts, which are open source and deployed as a REST API in exactly this way, with a sustainable business model around them that ensures an ongoing, reliable presence in the market.
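To illustrate this deployment pattern, here is a minimal sketch of a CRPM wrapped in a REST API. The model, endpoint, and field names are all hypothetical and invented for illustration; this is not the RCPCH Digital Growth Charts implementation.

```python
# Minimal sketch of a CRPM exposed as an open-source REST API.
# The risk model and endpoint are hypothetical, for illustration only.
from flask import Flask, request, jsonify

app = Flask(__name__)

def example_risk_score(age: float, systolic_bp: float) -> float:
    # Placeholder deterministic model, invented for illustration.
    return min(1.0, max(0.0, 0.01 * (age - 40) + 0.002 * (systolic_bp - 120)))

@app.route("/risk", methods=["POST"])
def risk():
    payload = request.get_json(force=True)
    score = example_risk_score(payload["age"], payload["systolic_bp"])
    return jsonify({"risk": round(score, 3)})

if __name__ == "__main__":
    app.run(port=8000)
```

A client would POST JSON such as {"age": 60, "systolic_bp": 150} to /risk and receive the computed risk in the response, with the model itself remaining open to inspection.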
Maley and colleagues provide a provocative argument about how we make decisions at the end of life. The premise was relatively clear: predictive models have a significant limitation in that they are bound by the decisions made in previous 'similar' cases; as such, the true potential outcomes are unknown. Indeed, would the authors consider the possibility of a 'self-fulfilling prophecy' scenario, whereby previous decisions, particularly where the decision for comfort measures was taken, reinforce future decisions?
The article was fascinating and well written; however, the proposed mechanism for implementing the causal effect modelling was not clear, at least to the non-statistically informed clinician. The idea of running predictive hypothetical clinical trials, with its seeming potential to provide greater clarity for the most difficult decision in medicine, is intriguing, and this clinician certainly would appreciate further exploration and explanation of the methods involved. Continuing a real-life scenario and discussing, in future studies, how close we are to such modelling being applicable to real cases would also be much appreciated.
Dear Editor,

The study "Performance of national COVID-19 ‘symptom checkers’: a comparative case simulation study" omitted one of the formative aspects of the CDC Coronavirus Symptom Checker (CDC Self-Checker), which is that it regularly undergoes upgrades to the content and recommendations in its algorithm. As such, whichever version of the CDC Self-Checker was used in this analysis would have reflected vetted CDC guidance at the time it was accessed. The inherent periodic updates of the CDC Self-Checker ensure that it reflects new information and current guidance as they become available.
Additionally, the omission of an access date indicating when the authors accessed the CDC Self-Checker for their simulation precludes verification of the version used and of the results described in the article. Between this article's first preprint posting on November 30, 2020 and the date of this letter (May 11, 2021), there have been six revisions to the CDC Self-Checker.
Finally, the CDC Self-Checker does recommend that older adults with established COVID-19 symptoms, such as fever, seek medical assistance as soon as possible, contrary to the article's conclusions.
One of the unique benefits of the CDC Self-Checker is that, rather than being a simple Q&A tool, it goes a step further to provide customized recommendations based on personalized and detailed scenarios. The strengths of the CDC Self-Checker make it an innovative and effective complement to health systems while efficiently delivering easy-to-understand recommendations to the public. We invite readers to try it by visiting the Coronavirus Self-Checker page on the CDC website.
To the editor:

We read the recent study from Burdick and colleagues with great interest, as few machine learning models have been assessed prospectively, and even fewer have shown benefits on clinically relevant outcomes such as length of stay or mortality.1 However, we are concerned about some of the reported data. The model they deploy, termed InSight across various publications, predicts the development of sepsis based on the systemic inflammatory response syndrome (SIRS) criteria, which have been replaced by the Sepsis-3 guidelines.2,3 Their model, a gradient-boosted classification tree built with XGBoost, is trained by looking back at vital sign data and predicts whether a patient will meet SIRS criteria in the next four hours. In the present study they define the outcome cohort as "sepsis-related" by including any patient who met 2 of 4 SIRS criteria. Using this definition, they report a pre-intervention in-hospital mortality of 3.86%.4 However, this is far below the expected sepsis-related mortality, regardless of the criteria used.

Furthermore, by reporting outcomes only in patients who met at least two SIRS criteria, rather than in everyone screened by the algorithm, the relative harm done to patients for whom the model falsely predicted sepsis is not factored into the assessment of the algorithm's value. They report an estimate of the cost reduction, but this again does not account for the costs that could be associated with the adverse effects of unnecessarily treating patients who did not have sepsis (false positives).

We are additionally concerned that the results are reported only as relative risk reductions rather than absolute risk reductions, which makes the values appear more impressive. Based on the data presented, we calculate the absolute risk reduction to be 0.0152 and a number needed to treat (NNT) of 65.8. These numbers are likely an overestimate, given that the denominator did not include all patients who were screened. Equally relevant to report is the number needed to harm (NNH), which pertains to patients who falsely trigger the algorithm and receive unnecessary, and potentially deleterious, overtreatment.

Most importantly, we are troubled that the results of this study may promote a false impression that algorithms can be sold and implemented without validation and re-calibration using local data. Readers should be constantly reminded that the accuracy of algorithms is bound by space and time.5 There must be processes in place to continuously monitor the performance of an algorithm before it is deployed by a health system.
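As a worked check on the arithmetic above, a minimal sketch follows. The post-intervention mortality is inferred from the reported pre-intervention rate and the calculated absolute risk reduction, not quoted directly from the paper.

```python
# Worked check of the absolute risk reduction (ARR) and number needed to
# treat (NNT) quoted above. The post-intervention mortality is inferred
# from the reported pre-intervention rate and the calculated ARR.
pre_mortality = 0.0386                 # reported pre-intervention in-hospital mortality
arr = 0.0152                           # absolute risk reduction (as calculated in the letter)
post_mortality = pre_mortality - arr   # ~0.0234, inferred

nnt = 1 / arr                          # number needed to treat, ~65.8
rrr = arr / pre_mortality              # implied relative risk reduction, ~39%

print(f"Post-intervention mortality ~ {post_mortality:.4f}")
print(f"NNT ~ {nnt:.1f}")
print(f"RRR ~ {rrr:.1%}")
```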
Christopher V. Cosgriff, M.D., M.P.H.
Leo Anthony Celi, M.D., M.S., M.P.H.
1. Burdick, H., et al. Effect of a sepsis prediction algorithm on patient mortality, length of stay and readmission: a prospective multicentre clinical outcomes evaluation of real-world patient data from US hospitals. BMJ Health & Care Informatics 27, e100109 (2020).
2. Mao, Q., et al. Multicentre validation of a sepsis prediction algorithm using only vital sign data in the emergency department, general ward and ICU. BMJ Open 8, e017833 (2018).
3. Singer, M., et al. The Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3). JAMA 315, 801-810 (2016).
4. Johnson, A.E.W., et al. A Comparative Analysis of Sepsis Identification Methods in an Electronic Database. Critical Care Medicine 46, 494-499 (2018).
5. Davis, S.E., Lasko, T.A., Chen, G. & Matheny, M.E. Calibration Drift Among Regression and Machine Learning Models for Hospital Mortality. AMIA Annual Symposium Proceedings 2017, 625-634 (2018).
The Smoking Test was run on our practice three years after the initial assessment was conducted, and some learning can be shared:
-The test was quick to run, even though the original reports were not saved. Three new reports were therefore created, one for each type of smoking status considered, plus one merging records with all three entries; the three codes and their last entry dates were then collected in a spreadsheet. The software allows specific entries to be collected and exported without opening the records. Using simple formulas in Excel (subtracting one date from the other to see whether the difference in days was negative or positive), it was a matter of minutes to find the number of errors again (a minimal sketch of this check appears after this list).
-Trying to correct the entries took a lot of time and was not completed. It was considered not necessarily beneficial, especially as about half of the entries assessed were not ours but came from associated organisations. We share data to improve patient care, but each organisation is responsible for the quality of its own entries; one can only ask them to mark the entries made in error. This means that wherever organisations share data entry in this way, the reports need to be amended to specify which organisation's data is to be included in the searches. This should fix the location problem encountered.
-Looking at the data, a change to the "Never smoked" report also had to be considered to fix the problem of currency. In our sample, half of the entries in this particular report pre-dated 2018, when we ran the initial test. A simple fix is to run a report on this code looking only for entries made after the last test. This report is the pivot of the process: it did not matter whether entries for the other parameters were made in the last year, since the error is determined by the presence of this code.
-Finally, in a digital world where patients can enter data into their records through online questionnaires, clinicians need to be vigilant about the answers: one case was noted of a patient stating "never smoked" this year when there are a number of entries of smoker status in the past. It may not be possible to challenge the answer in the consultation, but one can decline to accept an entry when it is known to be wrong.
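For readers wishing to reproduce the date-comparison step in the first point above, here is a minimal sketch. The column names and sample data are invented for illustration; the original work used Excel formulas on exported reports.

```python
# Minimal sketch of the date-comparison check described above, using pandas
# in place of Excel formulas. Column names and data are hypothetical. The
# idea: flag records where a "Never smoked" entry post-dates a smoker-status
# entry, which indicates a currency error.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "never_smoked_date": pd.to_datetime(["2021-03-01", "2019-05-10", "2021-07-20"]),
    "smoker_status_date": pd.to_datetime(["2020-01-15", "2020-06-01", None]),
})

# An error exists where "Never smoked" was recorded after a smoker entry.
records["error"] = records["never_smoked_date"] > records["smoker_status_date"]
print(records[records["error"]])  # patient 1 is flagged in this sample
```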
In summary, the location and currency problems can be fixed easily, and direct patient entries need consideration.
We read with interest the published study "Accuracy of periocular lesion assessment using telemedicine" [1]. The authors correctly identify a paucity of evidence for image-based triage or management of minor lid complaints, which has become an attractive option for reducing footfall in the oculoplastics clinic in the current situation. We have had a similar experience to the authors in translating our "one-stop" minor lids clinic into an image-based initial virtual assessment. Similar to this published study, we performed a pilot study to ascertain whether we could achieve diagnostic accuracy with images and a brief history, and it demonstrated agreement between face-to-face and remote review (paper currently under peer review).
Although demonstrating diagnostic accuracy is important in the development of this service, ensuring safety is a critical clinical governance area and should be addressed while rolling out such a service. Skin cancer referrals were excluded from this published study. While skin cancers are not routinely seen in a minor lids clinic, they are occasionally diagnosed in this setting, especially if the referral does not contain adequate information. In our cohort of 97 patients seen in the 'minor operations' clinic, 8 malignancies (basal cell carcinomas) were identified, all of which were flagged by the virtual assessment. The fact that none of the malignant lesions were missed in our cohort is very encouraging in terms of the safety of this service. However, these numbers are small, and it would be helpful to know whether the authors or others have had a similar experience with the identification of malignant lid lesions.
There have been many rapid changes with service redesign in the last year, many of which have been very successful. It is important that the clinical governance around these redesigned services is robust and, in this particular instance, we have evidence that this new method of service delivery does not miss any malignant lesions. A process of continuous monitoring and audit should, therefore, form an essential part of these services.
1. Kang S, Dehabadi M, Sim DA, et al. Accuracy of periocular lesion assessment using telemedicine. BMJ Health & Care Informatics 2021;28:e100287. doi: 10.1136/bmjhci-2020-100287
If the algorithms presented by Hier and Pearson [1] were to be widely implemented, rather than the current list order, which is usually chronological, the clinician could end up with a list organised by organ system. Is this going to remedy the misuse of Problem Lists?
Problem Orientated Medical Records do not have enough Problems created "because the creation tools are so inefficient" [2] and because usage by clinicians is highly variable and short of ideal [3], among other causes.
There is a need to look deeper for solutions. Problem Lists are indeed messy: there is no time or training to keep them up to date [4], there is no agreement on what their content should be [4], and clinicians viewing their content may consider nearly 30% of items irrelevant [5]. In consequence, more significant changes are needed. For example:
-Software automation to manage Problem List content. This could improve the readability of the active Problem List and improve its quality by reducing duplications and related Problems, alerting to promote or discourage additions to the Problem List. Some of this could be based on algorithms like those described by Hier and Pearson [1], but there is much more, such as automatically inactivating a pregnancy or a surgical procedure after a predetermined period of time (a minimal sketch of such a rule follows this list).
-An easier-to-use nomenclature and code-entry process, to engage more clinicians in using the Problem List, so that precious time is not wasted looking for the right code, for example.
-An organ-system-based list could also use hierarchies, so that specialists could see expanded Problems in their own field while retaining a more "high-level" entry for other specialities.
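As a minimal sketch of the time-based automation suggested in the first point above, consider the following; the problem types and maximum durations are invented for illustration and would need clinical agreement in practice.

```python
# Sketch of rule-based auto-inactivation of time-limited problems, as
# suggested above. Problem types and durations are invented for illustration.
from datetime import date, timedelta

# Hypothetical maximum active durations for self-limiting problems.
MAX_ACTIVE = {
    "pregnancy": timedelta(days=300),
    "surgical_procedure": timedelta(days=90),
}

def should_inactivate(problem_type: str, onset: date, today: date) -> bool:
    """Return True if the problem has exceeded its predetermined active period."""
    limit = MAX_ACTIVE.get(problem_type)
    return limit is not None and (today - onset) > limit

# Example: a pregnancy recorded roughly eleven months ago is flagged.
print(should_inactivate("pregnancy", date(2020, 6, 1), date(2021, 5, 11)))  # True
```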
More is needed, or we will continue arguing about how better to manage the Problem List and end up with another list that is differently chaotic: still too lengthy, inaccurate and irrelevant.
References
1. Hier DB, Pearson J. Two algorithms for the reorganisation of the problem list by organ system. BMJ Health & Care Informatics 2019;26(1).
2. Buchanan J. Accelerating the benefits of the problem oriented medical record. Applied Clinical Informatics 2017;26(01):180-190.
3. Wright A, Maloney F, Feblowitz J. Clinician attitudes toward and use of electronic problem lists: a thematic analysis. BMC Med Inform Decis Mak 2011;11:36-45.
4. Millares Martin P, Sbaffi L. Electronic Health Records (EHR) and Problem Lists in Leeds, UK: variability of general practitioners' views. Health Informatics Journal 2019. https://doi.org/10.1177/1460458219895184
5. Poissant L, Taylor L, Huang A, Tamblyn R. Assessing the accuracy of an inter-institutional automated patient-specific health problem list. BMC Medical Informatics and Decision Making 2010;10(1):10.