It depends: Partisan evaluation of conditional probability importance. Leaf Van Boven et al. Cognition, Mar 2 2019, https://doi.org/10.1016/j.cognition.2019.01.020
Highlights
• Political partisans disagreed about the importance of conditional probabilities.
• Supporters of restricting immigration and banning assault weapons favored uninformative “hit rates”.
• Policy opponents favored normatively informative base rates and inverse conditionals.
• Highly numerate partisans were more polarized than less numerate partisans.
• Adopting an expert’s perspective reduced partisan differences.
Abstract: Policies to suppress rare events such as terrorism often restrict co-occurring categories such as Muslim immigration. Evaluating restrictive policies requires clear thinking about conditional probabilities. For example, terrorism is extremely rare. So even if most terrorist immigrants are Muslim—a high “hit rate”—the inverse conditional probability of Muslim immigrants being terrorists is extremely low. Yet the inverse conditional probability is more relevant to evaluating restrictive policies such as the threat of terrorism if Muslim immigration were restricted. We suggest that people engage in partisan evaluation of conditional probabilities, judging hit rates as more important when they support politically prescribed restrictive policies. In two studies, supporters of expelling asylum seekers from Tel Aviv, Israel, of banning Muslim immigration and travel to the United States, and of banning assault weapons judged “hit rate” probabilities (e.g., that terrorists are Muslims) as more important than did policy opponents, who judged the inverse conditional probabilities (e.g., that Muslims are terrorists) as more important. These partisan differences spanned restrictive policies favored by Rightists and Republicans (expelling asylum seekers and banning Muslim travel) and by Democrats (banning assault weapons). Inviting partisans to adopt an unbiased expert’s perspective partially reduced these partisan differences. In Study 2 (but not Study 1), partisan differences were larger among more numerate partisans, suggesting that numeracy supported motivated reasoning. These findings have implications for polarization, political judgment, and policy evaluation. Even when partisans agree about what the statistical facts are, they markedly disagree about the relevance of those statistical facts.
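The gap between a "hit rate" and its inverse conditional that the abstract describes follows directly from Bayes' rule. A minimal sketch with purely illustrative numbers (the probabilities below are assumptions for exposition, not figures from the paper):

```python
# Bayes' rule: P(terrorist | Muslim) = P(Muslim | terrorist) * P(terrorist) / P(Muslim)
# All numbers below are illustrative assumptions, not data from the study.

base_rate = 1e-6   # P(terrorist): terrorism is extremely rare
hit_rate = 0.9     # P(Muslim | terrorist): a high "hit rate"
p_muslim = 0.1     # P(Muslim): assumed share of Muslims among immigrants

# Inverse conditional: probability that a given Muslim immigrant is a terrorist
inverse = hit_rate * base_rate / p_muslim

print(f"Hit rate  P(Muslim | terrorist): {hit_rate:.2f}")
print(f"Inverse   P(terrorist | Muslim): {inverse:.2e}")
```

Even with a 90% hit rate, the low base rate drives the inverse conditional to roughly nine in a million, which is why the paper treats base rates and inverse conditionals as the normatively informative quantities for evaluating restrictive policies.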
Check also: Biased Policy Professionals. Sheheryar Banuri, Stefan Dercon, and Varun Gauri. World Bank Policy Research Working Paper 8113. https://www.bipartisanalliance.com/2017/08/biased-policy-professionals-world-bank.html
And: Dispelling the Myth: Training in Education or Neuroscience Decreases but Does Not Eliminate Beliefs in Neuromyths. Kelly Macdonald et al. Frontiers in Psychology, Aug 10 2017. https://www.bipartisanalliance.com/2017/08/training-in-education-or-neuroscience.html
And: Wisdom and how to cultivate it: Review of emerging evidence for a constructivist model of wise thinking. Igor Grossmann. European Psychologist, in press. Pre-print: https://www.bipartisanalliance.com/2017/08/wisdom-and-how-to-cultivate-it-review.html
And: Individuals with greater science literacy and education have more polarized beliefs on controversial science topics. Caitlin Drummond and Baruch Fischhoff. Proceedings of the National Academy of Sciences, vol. 114 no. 36, pp 9587–9592, https://www.bipartisanalliance.com/2017/09/individuals-with-greater-science.html
And: Expert ability can actually impair the accuracy of expert perception when judging others' performance: Adaptation and fallibility in experts' judgments of novice performers. By Larson, J. S., & Billeter, D. M. (2017). Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(2), 271–288. https://www.bipartisanalliance.com/2017/06/expert-ability-can-actually-impair.html
Bipartisan Alliance, a Society for the Study of the US Constitution, and of Human Nature, where Republicans and Democrats meet.
Saturday, March 2, 2019
One conclusion that can be drawn from cognitive psychology is that human beings generally perform poorly when thinking in probabilistic terms; acknowledging these human frailties, how can we compensate?
Collective Intelligence for Clinical Diagnosis—Are 2 (or 3) Heads Better Than 1? Stephan D. Fihn. JAMA Network Open. 2019;2(3):e191071, doi:10.1001/jamanetworkopen.2019.1071
Once upon a time, medical students were taught that the correct approach to diagnosis was to collect a standard, complete set of data and then, based on those data elements, create an exhaustive list of potential diagnoses. The final and most difficult step was then to take this list and engage in a systematic process of deductive reasoning to rule out possibilities until the 1 final diagnosis was established. Master clinicians modeled this process of differential diagnosis in the classic clinicopathologic conferences (CPCs) that were regularly held in most teaching hospitals and published regularly in medical journals. During the past several decades, the popularity of the CPC has faded under criticism that cases discussed were often atypical and the setting was artificial because bits of data were doled out to discussants in a sequential fashion that did not mirror actual clinical practice. Moreover, they came to be seen more as theatrical events than meaningful teaching exercises.
The major reason for the demise of the CPC, however, was that it became apparent that master clinicians did not actually think in this manner at all. Medical educators who carefully observed astute clinicians found that the clinicians began generating hypotheses during the first few moments of an encounter and iteratively updated them while limiting the number of possibilities being entertained to no more than 5 to 7.1 They also found that even the notion of a master clinician is often illusory because diagnostic accuracy is largely a function of knowledge and experience within a specific domain (or set of domains) as opposed to general brilliance as a diagnostician.
This shift in understanding how physicians think developed in parallel with the growth of cognitive psychology, which focuses on how we process and respond to information. As we confront similar situations over time, the brain develops shortcuts known as heuristics that simplify problems and facilitate prompt and efficient responses. Without these heuristics, we would be forced to adopt a CPC approach to the myriad decisions we all face in everyday life, which would be exhausting and paralyzing. Because they are simplifications, these heuristics are subject to error. Research during the past several decades has revealed that although we maintain a Cartesian vision of ourselves as logical creatures, we are all, in fact, subject to a host of biases that distort our perceptions and lead us to make irrational decisions. Many of these have been cataloged by Amos Tversky, PhD, and Daniel Kahneman, PhD, such as recency bias (overweighting recent events compared with distant ones), framing effects (drawing different conclusions from the same information, depending on how it is presented), primacy bias (being influenced more by information presented earlier than later), anchoring (focusing on a piece of information and discounting the rest), and confirmation bias (placing undue emphasis on information consistent with a preconception).2 These perceptual misrepresentations lead to predictable mistakes such as overestimating the frequency of rare events when they are highly visible; underestimating the frequency of common, mundane events; and seeing patterns where none exist. Understanding these quirks underpins the emerging field of behavioral economics, which helps to explain how markets behave but also enables commercial and political entities to manipulate our opinions, sometimes in perverse ways.
One conclusion that can be drawn from cognitive psychology is that human beings generally perform poorly when thinking in probabilistic terms. Naturally, this has grave implications for our ability to function as good diagnosticians. A growing literature suggests that diagnostic error is common and can lead, not unexpectedly, to harm.3
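The difficulty with probabilistic thinking that the editorial points to is easy to demonstrate with a hypothetical diagnostic test; the sensitivity, specificity, and prevalence below are assumed for illustration:

```python
# Positive predictive value of a hypothetical diagnostic test via Bayes' rule.
# Sensitivity, specificity, and prevalence are assumptions chosen for illustration.

prevalence = 0.01     # P(disease) in the tested population
sensitivity = 0.95    # P(positive | disease)
specificity = 0.90    # P(negative | no disease)

# Total probability of a positive result: true positives plus false positives
p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_pos

print(f"P(disease | positive test) = {ppv:.3f}")
```

Intuition often equates the test's 95% sensitivity with a 95% chance of disease after a positive result; with a 1% prevalence the actual figure is under 9%, because false positives from the large healthy population swamp the true positives.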
Acknowledging these human frailties, how can we compensate? One potential solution is to harness the power of computers. [...]
Gay men seem more satisfied with their job than other men; lesbians appear less satisfied with their job than other women; the reason could be discrimination, which may lead gay men to have low expectations
(I can’t get no) job satisfaction? Differences by sexual orientation in Sweden. Lina Alden et al. Linnaeus University, 2018. http://www.diva-portal.org/smash/get/diva2:1291798/FULLTEXT01.pdf
Abstract: We present results from a unique nationwide survey conducted in Sweden on sexual orientation and job satisfaction. Our results show that gay men, on average, seem more satisfied with their job than heterosexual men; lesbians appear less satisfied with their job than heterosexual women. However, the issue of sexual orientation and job satisfaction is complex since gay men, despite their high degree of job satisfaction, like lesbians find their job more mentally straining than heterosexuals. We conclude that gay men and lesbians are facing other stressors at work than heterosexuals do. We also conclude that discrimination and prejudice may lead gay men to have low expectations about their job; these low expectations may translate into high job satisfaction. In contrast, prejudice and discrimination may hinder lesbians from realizing their career plans, resulting in low job satisfaction.
Keywords: Job satisfaction, sexual orientation
People believe they value their minds more than other people value theirs, and that they value their bodies less
Jordan, M. R., Gebert, T., & Looser, C. E. (2019). Perspective taking failures in the valuation of mind and body. Journal of Experimental Psychology: General, 148(3), 407-420. http://dx.doi.org/10.1037/xge0000571
Abstract: Accurately inferring the values and preferences of others is crucial for successful social interactions. Nevertheless, without direct access to others’ minds, perspective taking errors are common. Across 5 studies, we demonstrate a systematic perspective taking failure: People believe they value their minds more than others do and often believe they value their bodies less than others do. The bias manifests across a variety of domains and measures, from judgments about the severity of injuries to preferences for new abilities to assessments of how much one is defined by their mind and body. This perspective taking failure was diminished—but still present—when participants thought of a close other. Finally, we assess and find evidence for the notion that this perspective taking failure is a function of the fact that others’ minds are less salient than others’ bodies. It appears to be the case that people believe the most salient cue from a target is also the best indicator of their values and preferences. This bias has implications for the ways in which we create social policy, judge others’ actions, make choices on behalf of others, and allocate resources to the physically and mentally ill.
Asymmetry in individuals’ willingness to venture into cross-cutting spaces, with conservatives more likely to follow media and political accounts classified as left-leaning than the reverse
How Many People Live in Political Bubbles on Social Media? Evidence From Linked Survey and Twitter Data. Gregory Eady et al. SAGE Open, February 28, 2019. https://doi.org/10.1177/2158244019832705
Abstract: A major point of debate in the study of the Internet and politics is the extent to which social media platforms encourage citizens to inhabit online “bubbles” or “echo chambers,” exposed primarily to ideologically congenial political information. To investigate this question, we link a representative survey of Americans with data from respondents’ public Twitter accounts (N = 1,496). We then quantify the ideological distributions of users’ online political and media environments by merging validated estimates of user ideology with the full set of accounts followed by our survey respondents (N = 642,345) and the available tweets posted by those accounts (N ~ 1.2 billion). We study the extent to which liberals and conservatives encounter counter-attitudinal messages in two distinct ways: (a) by the accounts they follow and (b) by the tweets they receive from those accounts, either directly or indirectly (via retweets). More than a third of respondents do not follow any media sources, but among those who do, we find a substantial amount of overlap (51%) in the ideological distributions of accounts followed by users on opposite ends of the political spectrum. At the same time, however, we find asymmetries in individuals’ willingness to venture into cross-cutting spaces, with conservatives more likely to follow media and political accounts classified as left-leaning than the reverse. Finally, we argue that such choices are likely tempered by online news watching behavior.
Keywords: media consumption, media & society, mass communication, communication, social sciences, political communication, new media, communication technologies, political behavior, political science