Collective Intelligence for Clinical Diagnosis—Are 2 (or 3) Heads Better Than 1? Stephan D. Fihn. JAMA Network Open. 2019;2(3):e191071, doi:10.1001/jamanetworkopen.2019.1071
Once upon a time, medical students were taught that the correct approach to diagnosis was to collect a standard, complete set of data and then, based on those data elements, create an exhaustive list of potential diagnoses. The final and most difficult step was then to take this list and engage in a systematic process of deductive reasoning to rule out possibilities until the 1 final diagnosis was established. Master clinicians modeled this process of differential diagnosis in the classic clinicopathologic conferences (CPCs) that were regularly held in most teaching hospitals and published regularly in medical journals. During the past several decades, the popularity of the CPC has faded under criticism that cases discussed were often atypical and the setting was artificial because bits of data were doled out to discussants in a sequential fashion that did not mirror actual clinical practice. Moreover, they came to be seen more as theatrical events than meaningful teaching exercises.
The major reason for the demise of the CPC, however, was that it became apparent that master clinicians did not actually think in this manner at all. Medical educators who carefully observed astute clinicians found that the clinicians began generating hypotheses during the first few moments of an encounter and iteratively updated them while limiting the number of possibilities being entertained to no more than 5 to 7.1 They also found that even the notion of a master clinician is often illusory because diagnostic accuracy is largely a function of knowledge and experience within a specific domain (or set of domains) as opposed to general brilliance as a diagnostician.
This shift in understanding how physicians think developed in parallel with the growth of cognitive psychology, which focuses on how we process and respond to information. As we confront similar situations over time, the brain develops shortcuts known as heuristics that simplify problems and facilitate prompt and efficient responses. Without these heuristics, we would be forced to adopt a CPC approach to the myriad decisions we all face in everyday life, which would be exhausting and paralyzing. Because they are simplifications, these heuristics are subject to error. Research during the past several decades has revealed that although we maintain a Cartesian vision of ourselves as logical creatures, we are all, in fact, subject to a host of biases that distort our perceptions and lead us to make irrational decisions. Many of these have been cataloged by Amos Tversky, PhD, and Daniel Kahneman, PhD, such as recency bias (overweighting recent events compared with distant ones), framing effects (drawing different conclusions from the same information, depending on how it is presented), primacy bias (being influenced more by information presented earlier than later), anchoring (focusing on a piece of information and discounting the rest), and confirmation bias (placing undue emphasis on information consistent with a preconception).2 These perceptual misrepresentations lead to predictable mistakes such as overestimating the frequency of rare events when they are highly visible; underestimating the frequency of common, mundane events; and seeing patterns where none exist. Understanding these quirks underpins the emerging field of behavioral economics, which helps to explain how markets behave but also enables commercial and political entities to manipulate our opinions, sometimes in perverse ways.
One conclusion that can be drawn from cognitive psychology is that human beings generally perform poorly when thinking in probabilistic terms. Naturally, this has grave implications for our ability to function as good diagnosticians. A growing literature suggests that diagnostic error is common and can lead, not unexpectedly, to harm.3
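The difficulty of probabilistic reasoning can be made concrete with Bayes' theorem: even a fairly accurate test yields mostly false positives when the disease is rare, a result that routinely surprises intuition (the classic base-rate neglect). A minimal sketch, using hypothetical numbers chosen only for illustration:

```python
# Base-rate neglect illustrated with Bayes' theorem.
# Hypothetical test: 1% disease prevalence, 90% sensitivity, 95% specificity.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) computed via Bayes' theorem."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.01, 0.90, 0.95)
print(f"P(disease | positive test) = {ppv:.2f}")  # prints 0.15
```

Despite the test's apparent accuracy, a positive result here implies only about a 15% chance of disease, because the 99% of patients without the disease generate far more false positives than the 1% with it generate true positives — the kind of counterintuitive arithmetic that the literature on diagnostic error suggests clinicians often get wrong.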
Acknowledging these human frailties, how can we compensate? One potential solution is to harness the power of computers. [...]
Check also: Biased Policy Professionals. Sheheryar Banuri, Stefan Dercon, and Varun Gauri. World Bank Policy Research Working Paper 8113. https://www.bipartisanalliance.com/2017/08/biased-policy-professionals-world-bank.html

And: Dispelling the Myth: Training in Education or Neuroscience Decreases but Does Not Eliminate Beliefs in Neuromyths. Kelly Macdonald et al. Frontiers in Psychology, Aug 10 2017. https://www.bipartisanalliance.com/2017/08/training-in-education-or-neuroscience.html

And: Wisdom and how to cultivate it: Review of emerging evidence for a constructivist model of wise thinking. Igor Grossmann. European Psychologist, in press. Pre-print: https://www.bipartisanalliance.com/2017/08/wisdom-and-how-to-cultivate-it-review.html

And: Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Caitlin Drummond and Baruch Fischhoff. Proceedings of the National Academy of Sciences, vol. 114 no. 36, pp 9587–9592. https://www.bipartisanalliance.com/2017/09/individuals-with-greater-science.html

And: Expert ability can actually impair the accuracy of expert perception when judging others' performance: Adaptation and fallibility in experts' judgments of novice performers. Larson, J. S., & Billeter, D. M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(2), 271–288. https://www.bipartisanalliance.com/2017/06/expert-ability-can-actually-impair.html