Accuracy of Practitioner Estimates of Probability of Diagnosis Before and After Testing. Daniel J. Morgan et al. JAMA Intern Med., April 5, 2021. DOI 10.1001/jamainternmed.2021.0269
Key Points
Question Do practitioners understand the probability of common clinical diagnoses?
Findings In this survey study of 553 practitioners performing primary care, respondents overestimated the probability of diagnosis before and after testing. This posttest overestimation was associated with consistent overestimates of pretest probability and overestimates of disease after specific diagnostic test results.
Meaning These findings suggest that many practitioners are unaccustomed to using probability in diagnosis and clinical practice. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.
Abstract
Importance Accurate diagnosis is essential to proper patient care.
Objective To explore practitioner understanding of diagnostic reasoning.
Design, Setting, and Participants In this survey study, conducted from June 1, 2018, to November 26, 2019, 723 practitioners at outpatient clinics in 8 US states were asked to estimate the probability of disease for 4 scenarios common in primary care (pneumonia, cardiac ischemia, breast cancer screening, and urinary tract infection) and how positive and negative test results would change that probability. Of these practitioners, 585 responded to the survey, and 553 answered all of the questions. An expert panel developed the survey and determined correct responses based on literature review.
Results A total of 553 (290 resident physicians, 202 attending physicians, and 61 nurse practitioners and physician assistants) of 723 practitioners (76.5%) fully completed the survey (median age, 32 years; interquartile range, 29-44 years; 293 female [53.0%]; 296 [53.5%] White). Pretest probability was overestimated in all scenarios. Probabilities of disease after positive results were overestimated as follows: pneumonia after positive radiography results, 95% (evidence range, 46%-65%; comparison P < .001); breast cancer after positive mammography results, 50% (evidence range, 3%-9%; P < .001); cardiac ischemia after positive stress test result, 70% (evidence range, 2%-11%; P < .001); and urinary tract infection after positive urine culture result, 80% (evidence range, 0%-8.3%; P < .001). Overestimates of probability of disease with negative results were also observed as follows: pneumonia after negative radiography results, 50% (evidence range, 10%-19%; P < .001); breast cancer after negative mammography results, 5% (evidence range, <0.05%; P < .001); cardiac ischemia after negative stress test result, 5% (evidence range, 0.43%-2.5%; P < .001); and urinary tract infection after negative urine culture result, 5% (evidence range, 0%-0.11%; P < .001). Probability adjustments in response to test results varied from accurate to overestimates of risk by type of test (imputed median positive and negative likelihood ratios [LRs] for practitioners for chest radiography for pneumonia: positive LR, 4.8; evidence, 2.6; negative LR, 0.3; evidence, 0.3; mammography for breast cancer: positive LR, 44.3; evidence range, 13.0-33.0; negative LR, 1.0; evidence range, 0.05-0.24; exercise stress test for cardiac ischemia: positive LR, 21.0; evidence range, 2.0-2.7; negative LR, 0.6; evidence range, 0.5-0.6; urine culture for urinary tract infection: positive LR, 9.0; evidence, 9.0; negative LR, 0.1; evidence, 0.1).
Conclusions and Relevance This survey study suggests that for common diseases and tests, practitioners overestimate the probability of disease before and after testing. Pretest probability was overestimated in all scenarios, whereas adjustment in probability after a positive or negative result varied by test. Widespread overestimates of the probability of disease likely contribute to overdiagnosis and overuse.
Discussion
In this survey study, in scenarios commonly encountered in primary care practice, practitioners overestimated the probability of disease by 2 to 10 times compared with the scientific evidence, both before and after testing. This result was mostly associated with overestimates of pretest probability, which were observed across all scenarios. Adjustments to probability in response to test results varied by type of test, from accurate to overestimates of risk. Variation in accuracy across practitioner types was small compared with the magnitude of the difference between practitioner estimates and the scientific evidence. Many practitioners reported that they would treat patients for diseases whose likelihood they had overestimated.
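The likelihood ratios reported in the Results were imputed from the median probability estimates. The article does not reproduce the calculation here, but such imputation presumably rests on the standard odds form of Bayes' theorem, sketched below for reference:

\[ \mathrm{odds}(p) = \frac{p}{1-p}, \qquad \mathrm{LR} = \frac{\mathrm{odds}(p_{\text{posttest}})}{\mathrm{odds}(p_{\text{pretest}})} \]

For example, under purely hypothetical median estimates of a 10% pretest probability and a 50% probability after a positive result, the imputed positive LR would be \((0.50/0.50)/(0.10/0.90) = 9\).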
The most striking finding from this study was that practitioners consistently and significantly overestimated the likelihood of disease. Small studies with limited generalizability have had similar findings, often asking practitioners to perform one isolated aspect of diagnosis, such as interpreting a test result. However, past studies8-11 have not explored a range of questions or clarified estimates at different steps in the diagnostic pathway. The reasons for inaccurate estimates of probability are not clear, although anecdotes reported during the current study imply that practitioners often do not think in terms of probability. One participant stated that estimating probability of disease “isn’t how you do medicine.” This attitude is consistent with a previous study20 of diagnostic strategies that described an initial pattern-recognition phase of care, with only 10% of practitioners engaging in a secondary phase of probabilistic reasoning.
This study found that probability estimates were consistently biased toward overestimation, as has been seen in other contexts, such as expectations of high stock returns among investors.21 This overestimation is consistent with cognitive biases, including base rate neglect, anchoring bias, and confirmation bias.14 These biases drive overestimation because true base rates are usually lower than expected and because anchoring tends to reflect memorable experiences, such as improbable events or missed diagnoses. Such cognitive biases have been associated with diagnostic errors that may arise from errors in estimating risk.5,22,23 Notably, practitioners in this survey were often residents or academic physicians, who tend to practice in populations with a higher prevalence of disease; this experience may also have contributed to higher estimates of disease.
Pretest probabilities were consistently overestimated for all questions, but overestimates were particularly apparent in the pneumonia and UTI scenarios. Estimates of pretest probability generally reflect clinical knowledge. Overestimates for these infectious diseases may relate to the fact that antibiotics are often appropriately given even when the likelihood of infection is only moderate. In the UTI scenario, high estimates of pretest probability may reflect the evolution of the definition of asymptomatic bacteriuria as a separate entity from UTI.24
In contrast to past literature,8-10,19 practitioners accurately adjusted estimates of disease based on the results of some tests, as demonstrated by the imputed likelihood ratios. This adjustment could be artifactual, reflecting the limited room to adjust probability upward for tests with high pretest estimates (ie, pneumonia and UTI). In other cases, practitioners markedly overestimated the probability of disease after testing, specifically after a positive or negative mammography result or a positive exercise stress test result. Practitioners are known to overestimate the chance of disease when asked, in a theoretical question, to estimate the likelihood of disease after a positive test result when the pretest probability is 1 in 1000.9,10 The current study included the identical question and found an identical response: participants estimated the likelihood of disease at 95% when the correct answer was 2%.5,8-10,19 The findings regarding real-life examples are consistent with evidence from limited past studies,8-11 for example, physicians interpreting a positive mammography result in a typical woman as conveying an 81% probability of breast cancer.8
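The arithmetic behind the 1-in-1000 question illustrates the scale of this error. In the classic formulation of the problem, which assumes a 5% false-positive rate and, implicitly, perfect sensitivity (the survey's exact wording is not reproduced here), Bayes' theorem gives:

\[ P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \bar{D})\,P(\bar{D})} = \frac{1 \times 0.001}{1 \times 0.001 + 0.05 \times 0.999} \approx 0.02 \]

that is, approximately 2%, the correct answer cited above, versus the 95% median estimate.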
The assessment of test results in this study was simplified to positive or negative. This dichotomization reflects the literature on the sensitivity and specificity of testing.5,6 In clinical medicine, however, these tests often yield a range of positive results, from mildly positive findings, such as a well-circumscribed density on a mammogram, to strongly positive findings, such as inducible ischemia on a stress test or a spiculated mass on a mammogram. A more strongly positive or abnormal result is less sensitive but more specific for disease. This study did not evaluate interpretation of more complex test results.
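The sensitivity-specificity tradeoff described above follows directly from the standard definitions of the likelihood ratios (general formulas, not specific to this study):

\[ \mathrm{LR}^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}, \qquad \mathrm{LR}^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}} \]

Raising the threshold for calling a result positive lowers sensitivity but raises specificity, so a strongly positive finding carries a larger positive LR and shifts the posttest probability further upward than a mildly positive one.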
There are important implications of the gap between practitioner estimates and scientific estimates of the probability of disease. Practitioners who overestimate the probability of disease would be expected to act on that overestimation when deciding whether to initiate therapy, which could lead to overuse of medications and procedures, with associated patient harms. Practitioners in the study reported that they would initiate treatment based on estimates of disease, including 78.2% who would treat for cardiac ischemia and 71.0% who would treat for a UTI even when a positive test result would place their patient's chance of disease at 11% or less. These errors would similarly corrupt shared decision-making with patients, which relies on practitioner understanding and communication of the likelihood of various outcomes.25-27 Training in shared decision-making has focused on communication skills,28 not on understanding the probability of disease,29 but these findings suggest another important educational target.
More focus on diagnostic reasoning in medical education is important. The finding of a primary problem with pretest probability estimates may be more amenable to intervention than the more commonly discussed bayesian adjustment of probability based on test results.30 Pretest probability is commonly discussed in medical education, but a standard method for estimating it has not been described.30 Ideally, such estimates incorporate knowledge of disease prevalence and the predictive value of components of the history and physical examination, but for many conditions this information is difficult to find. The fact that estimates are so far from the scientific evidence identifies a pressing need for improvement. There are a limited number of well-characterized diseases with pretest probability calculators, notably cardiac ischemia.31,32 Although respondents in this study had no access to external aids while completing the survey, pretest estimates of cardiac ischemia were more accurate than those for other clinical scenarios, implying that familiarity with these calculators may improve knowledge and inform clinical reasoning. There is also a need to improve bayesian adjustment of probability based on test results, which requires readily accessible references for clinical sensitivity and specificity. Computer visual decision aids that guide estimates of probability may also have a role.5,33 Alternative approaches, such as natural frequencies, naturalistic decision-making, or the use of heuristics, may improve decisions.34
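As a concrete illustration of the kind of readily accessible aid described above, the following is a minimal sketch of a posttest-probability calculator. It is not a published tool: the function name is illustrative, the positive likelihood ratios are the evidence-based values for screening mammography reported in the Results, and the 0.3% pretest probability is an assumed value chosen to be roughly consistent with the prevalence of breast cancer in a screening population.

```python
def posttest_probability(pretest_p: float, lr: float) -> float:
    """Odds form of Bayes' theorem: posttest odds = pretest odds * LR."""
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Screening mammography: evidence-based positive LR of 13.0-33.0 (per the
# Results) applied to an assumed 0.3% pretest probability of breast cancer.
for lr_positive in (13.0, 33.0):
    p = posttest_probability(0.003, lr_positive)
    print(f"LR+ = {lr_positive:.0f}: posttest probability = {p:.1%}")
# Prints about 3.8% and 9.0%, consistent with the 3%-9% evidence range in
# the Results and far below the 50% median practitioner estimate.
```

Under the same assumptions, applying the evidence-based negative LRs (0.05-0.24) would show why a negative mammography result should drive the probability of disease to well under 1%, far below the 5% median estimate reported above.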
This study has limitations. One is that the small fraction of respondents who did not complete the survey were more likely to be female, to be nurse practitioners or physician assistants, or to have been in practice for more than 10 years. However, the overall response rate was high. The format of the survey questions required participants to estimate pretest probability before interpreting positive or negative test results, which may not reflect their natural practice. Finally, although validity was extensively assessed by a multidisciplinary expert panel, the reliability of our novel survey was not assessed.