Sutherland SE, Rehman US, Fallis EE. A Descriptive Analysis of Sexual Problems in Long-Term Heterosexual Relationships. J Sex Med 2019;XX:XXX–XXX. Mar 2019. https://doi.org/10.1016/j.jsxm.2019.02.015
Abstract
Background: Although much research has described individual sexual dysfunctions, few studies to date have examined the types of problems that couples consider most significant in their sexual relationships.
Aim: To clarify the types of relational sexual problems that are most common and most severe in the sexual lives of individuals in long-term romantic relationships.
Methods: A community sample of 117 mixed-sex couples completed this in-lab study. Members of each couple separately completed a demographics questionnaire and a measure of their relational sexual problems, the Sexual Problems Questionnaire (SPQ). Descriptive analyses (eg, examination of means, frequency counts) were conducted to determine the most common and severe sexual problems reported by participants. t-Tests were performed to examine gender differences in mean severity ratings for each SPQ item. Qualitative data were examined by conducting a frequency count on the SPQ items that participants reported to be most important in their sexual relationships. Results of all frequency counts were divided by the total sample size and are reported as percentages.
Main Outcome Measures: Participants reported on the severity of their sexual problems using the 25-item SPQ.
Results: Quantitative analyses revealed that the most common and problematic sexual problems endorsed by both sexes were frequency of sex, sexual initiation, and showing interest. A frequency count of participants’ qualitative reports also revealed that frequency of sex (women = 36%; men = 39%), sexual initiation (women = 33%; men = 32%), and showing interest (women and men = 25%) were the most important sexual issues for most individuals.
Clinical Implication: The most pressing relational sexual problems for couples in long-term romantic relationships are consistent between sexes and pertain to the domain of sexual desire.
Strengths & Limitations: The current study used an expanded measure of sexual problems, which allowed participants to report on a broad range of issues in their sexual relationships. The direction of such relational sexual problems (eg, desiring more or less sexual frequency) was not explored.
Conclusion: The key problems in sexual relationships center on the theme of sexual desire, and men and women consider these issues to be problematic to a similar extent.
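As a concrete gloss on the Methods above, here is a toy version of the descriptive pipeline in Python. The item names, ratings, and sample below are hypothetical, since the SPQ data themselves are not public; this is only a sketch of the stated analyses (means, t-tests, frequency counts as percentages).

```python
# Sketch of the descriptive analyses described above, on hypothetical data.
# Column names and values are illustrative; the actual SPQ data are not public.
import pandas as pd
from scipy import stats

# One row per participant: gender plus severity ratings for two example SPQ items
df = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "F", "M"],
    "freq_of_sex": [3, 4, 2, 3, 4, 4],   # severity rating, "frequency of sex"
    "initiation":  [2, 3, 1, 2, 3, 2],   # severity rating, "sexual initiation"
})

# Mean severity per item (higher = more severe problem)
print(df[["freq_of_sex", "initiation"]].mean())

# Gender difference in mean severity for one item (independent-samples t-test)
women = df.loc[df.gender == "F", "freq_of_sex"]
men   = df.loc[df.gender == "M", "freq_of_sex"]
t, p = stats.ttest_ind(women, men)
print(f"t = {t:.2f}, p = {p:.3f}")

# Frequency count of "most important problem" nominations, divided by the
# total sample size and reported as percentages, as in the paper
nominations = pd.Series(["freq_of_sex", "initiation", "freq_of_sex"])
print((nominations.value_counts() / len(df) * 100).round(1))
```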
Prison privatization is not a function of simple ideological or economic considerations; rather, it has been a consequence of the administrative & legal costs associated with litigation brought by prisoners
Why Do States Privatize their Prisons? The Unintended Consequences of Inmate Litigation. Anna Gunderson, Job market paper 2019. Feb 2019. www.annagunderson.com/uploads/1/5/3/2/15320172/gunderson2019.pdf
The United States has witnessed privatization of a variety of government functions over the last three decades. Media and politicians often attribute the decision to privatize to ideological commitments to small government and fiscal pressure. These claims are particularly notable in the context of prison privatization, where states and the federal government have employed private companies to operate and manage private correctional facilities. I argue state prison privatization is not a function of simple ideological or economic considerations. Rather, prison privatization has been a (potentially unintended) consequence of the administrative and legal costs associated with litigation brought by prisoners. I assemble an original database of prison privatization in the US and demonstrate that the privatization of prisons is best predicted by the legal pressure on state corrections systems, rather than the ideological orientation of a state government.
Appendix: http://www.annagunderson.com/uploads/1/5/3/2/15320172/gunderson2019appendix.pdf
Automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity; this is counterbalanced by the creation of new tasks in which labor has a comparative advantage
Automation and New Tasks: How Technology Displaces and Reinstates Labor. Daron Acemoglu, Pascual Restrepo. NBER Working Paper No. 25684, Mar 2019. https://www.nber.org/papers/w25684
Abstract: We present a framework for understanding the effects of automation and other types of technological changes on labor demand, and use it to interpret changes in US employment over the recent past. At the center of our framework is the allocation of tasks to capital and labor—the task content of production. Automation, which enables capital to replace labor in tasks it was previously engaged in, shifts the task content of production against labor because of a displacement effect. As a result, automation always reduces the labor share in value added and may reduce labor demand even as it raises productivity. The effects of automation are counterbalanced by the creation of new tasks in which labor has a comparative advantage. The introduction of new tasks changes the task content of production in favor of labor because of a reinstatement effect, and always raises the labor share and labor demand. We show how the role of changes in the task content of production—due to automation and new tasks—can be inferred from industry-level data. Our empirical decomposition suggests that the slower growth of employment over the last three decades is accounted for by an acceleration in the displacement effect, especially in manufacturing, a weaker reinstatement effect, and slower growth of productivity than in previous decades.
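The abstract compresses a formal task-based model; the schematic below is a simplified paraphrase of its accounting logic, not the paper's exact notation. The displacement and reinstatement effects enter the task-content term with opposite signs, which is why automation and new tasks pull labor demand in opposite directions.

```latex
% Schematic decomposition (simplified from the authors' framework):
% growth in labor demand = productivity effect + composition effect
%                          + change in the task content of production,
% where the task-content term is the net of two opposing forces.
\Delta \ln L^{d} \;\approx\;
  \underbrace{\Delta \ln Y}_{\text{productivity effect}}
  \;+\; \underbrace{\Delta_{\text{comp}}}_{\text{composition effect}}
  \;+\; \underbrace{\Delta \Gamma}_{\text{task content}},
\qquad
\Delta \Gamma \;=\;
  \underbrace{\Gamma_{\text{new tasks}}}_{\text{reinstatement}\;(>0)}
  \;-\; \underbrace{\Gamma_{\text{automation}}}_{\text{displacement}\;(>0)}
```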
Experimentally-Induced Inflammation Predicts Present Focus: Levels of proinflammatory cytokines may play a mechanistic role in the desire for immediately available rewards
Experimentally-Induced Inflammation Predicts Present Focus. Jeffrey Gassen et al. Adaptive Human Behavior and Physiology, February 19 2019. https://link.springer.com/article/10.1007/s40750-019-00110-7
Abstract
Objective: Here, we provide an experimental test of the relationship between levels of proinflammatory cytokines and present-focused decision-making.
Methods: We examined whether increases in salivary levels of proinflammatory cytokines (interleukin-1β and interleukin-6) engendered by visually priming immunologically-relevant threats (pathogen threat, physical harm) and opportunities (mating) predicted temporal discounting, a key component of present-focused decision-making.
Results: As hypothesized, results revealed that each experimental manipulation led to a significant rise in both salivary interleukin-1β and interleukin-6. Moreover, post-manipulation levels of each cytokine independently predicted temporal discounting across conditions. These results were not moderated by pre-manipulation levels of either cytokine, nor were they found using the difference between pre- and post-manipulation levels of cytokines as a predictor.
Conclusions: Together, these results suggest that levels of proinflammatory cytokines may play a mechanistic role in the desire for immediately available rewards.
Keywords: Inflammation; Life history theory; Temporal focus; Cytokines; Impulsivity
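For readers unfamiliar with the outcome measure named in the Methods above: temporal discounting is conventionally summarized with a hyperbolic model, sketched here in its generic form. The abstract does not give the task's exact parameters, so treat this as background rather than the study's specification.

```latex
% Generic hyperbolic discounting model (Mazur, 1987): V is the subjective
% present value of a reward of amount A delayed by D time units, and k is
% the individual's discount rate. A larger estimated k means steeper
% discounting, i.e., a stronger pull toward immediately available rewards.
V = \frac{A}{1 + kD}
```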
The results of the study point to the ‘queen bee syndrome’, in which powerful women at the top levels of management are not supportive of female managers attempting to climb the ladder
Effects of supervisor gender on promotability of female managers. Hyondong Kim, Tong Hyouk Kang. Asia Pacific Journal of Human Resources, March 20 2019. https://doi.org/10.1111/1744-7941.12224
Abstract: In order to more fully understand the importance of same‐gender competition in female supervisor–subordinate working relationships, this study examined the effects of supervisor gender on promotion probabilities for Korean female managers with or without managerial qualifications (e.g. mentoring participation and job ranks). Using a balanced panel sample of 568 Korean female managers in each of four waves (in total, 2272 female managers over 7 years), we conducted a multinomial logistic regression analysis to estimate the promotability of female managers. Our findings showed that mentoring participation negatively affects promotion probability for female managers when they have female supervisors (vs male supervisors). Competitive interdependence can be exacerbated between female managers and female supervisors, especially when they are qualified to compete for the same resources and opportunities, which are limited for female managers and supervisors.
Conservatives & Republicans are less likely to report severe forms of sexual harassment & assault, which may explain differences in beliefs on these issues; it remains to be determined whether this reflects reporting biases or differential vulnerabilities
Political Differences in American Reports of Sexual Harassment and Assault. Rupa Jose, James H. Fowler, Anita Raj. Journal of Interpersonal Violence, March 22, 2019. https://doi.org/10.1177/0886260519835003
Abstract: Political ideology has been linked to beliefs regarding sexual harassment and assault (SH&A). Using data from the January 2018 Stop Street Sexual Harassment online poll (N = 2,009), this study examined associations of political identity and political ideology with self-reported experiences of being the victim of SH&A. SH&A experiences were coded into four mutually exclusive groups: none, non-physically aggressive harassment, physically aggressive harassment, or sexual assault. Sex-stratified logistic regression models assessed associations of interest, adjusting for participant demographics. Among women, more conservative political ideology was negatively associated with reports of sexual assault, odds ratio (OR) = 0.85, 95% confidence interval (CI) = [0.74, 0.98]. Among males, more conservative political ideology was negatively associated with reports of physically aggressive sexual harassment (OR = 0.85, 95% CI = [0.73, 0.98]), and greater Republican affiliation was negatively associated with reports of sexual assault (OR = 0.82, 95% CI = [0.68, 0.99]). Conservative and Republican women and men are thus less likely to report more severe forms of SH&A, which may explain differences in beliefs on these issues. Research is needed to determine if political differences are due to reporting biases or differential vulnerabilities.
Keywords: political party, political orientation, gender, sexual harassment, sexual violence
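A minimal sketch of the kind of sex-stratified logistic model behind the odds ratios above, fit to simulated data; the variable names, coding, and effect sizes are illustrative, not the poll's. Exponentiated coefficients give the odds ratios, and ORs below 1 correspond to the negative associations reported.

```python
# Sketch of a sex-stratified logistic regression yielding odds ratios,
# on simulated data; variable names and effect sizes are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
women = pd.DataFrame({
    "conservatism": rng.integers(1, 8, n),   # hypothetical 1-7 ideology scale
    "age": rng.integers(18, 80, n),
})
# Simulate reports that become less likely as conservatism rises
lin = 0.5 - 0.16 * women["conservatism"] - 0.01 * women["age"]
women["reported_assault"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

model = smf.logit("reported_assault ~ conservatism + age", data=women).fit(disp=0)
# OR < 1 means reports become less likely per one-unit increase in the predictor
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% CIs for the ORs
```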
Enjoy it again: Repeat experiences are less repetitive than people think
O'Brien, E. (2019). Enjoy it again: Repeat experiences are less repetitive than people think. Journal of Personality and Social Psychology, 116(4), 519-540. http://dx.doi.org/10.1037/pspa0000147
Abstract: What would it be like to revisit a museum, restaurant, or city you just visited? To rewatch a movie you just watched? To replay a game you just played? People often have opportunities to repeat hedonic activities. Seven studies (total N = 3,356) suggest that such opportunities may be undervalued: Many repeat experiences are not as dull as they appear. Studies 1–3 documented the basic effect. All participants first completed a real-world activity once in full (Study 1, museum exhibit; Study 2, movie; Study 3, video game). Then, some predicted their reactions to repeating it whereas others actually repeated it. Predictors underestimated Experiencers’ enjoyment, even when experienced enjoyment indeed declined. Studies 4 and 5 compared mechanisms: neglecting the pleasurable byproduct of continued exposure to the same content (e.g., fluency) versus neglecting the new content that manifests by virtue of continued exposure (e.g., discovery), both of which might dilute uniform dullness. We found stronger support for the latter: The misprediction was moderated by stimulus complexity (Studies 4 and 5) and mediated by the amount of novelty discovered within the stimulus (Study 5), holding exposure constant. Doing something once may engender an inflated sense that one has now seen “it,” leaving people naïve to the missed nuances remaining to enjoy. Studies 6 and 7 highlighted consequences: Participants incurred costs to avoid repeats so to maximize enjoyment, in specific contexts for which repetition would have been as enjoyable (Study 6) or more enjoyable (Study 7) as the provided novel alternative. These findings warrant a new look at traditional assumptions about hedonic adaptation and novelty preferences. Repetition too could add an unforeseen spice to life.
The Sad State of Happiness in the United States and the Role of Digital Media: Changes in leisure may be a big cause
The Sad State of Happiness in the United States and the Role of Digital Media. Jean M. Twenge. World Happiness Report 2019, Mar 20 2019. http://worldhappiness.report/ed/2019/the-sad-state-of-happiness-in-the-united-states-and-the-role-of-digital-media/
Excerpts. Full text and lots of graphics in the link above.
The years since 2010 have not been good ones for happiness and well-being among Americans. Even as the United States economy improved after the end of the Great Recession in 2009, happiness among adults did not rebound to the higher levels of the 1990s, continuing a slow decline evident in the General Social Survey since at least 2000 (Twenge et al., 2016; also see Figure 5.1). Happiness was measured with the question, “Taken all together, how would you say things are these days—would you say that you are very happy, pretty happy, or not too happy?” with the response choices coded 1, 2, or 3.
[...]
Happiness and life satisfaction among United States adolescents, which increased between 1991 and 2011, suddenly declined after 2012 (Twenge et al., 2018a; see Figure 5.2). Thus, by 2016-17, both adults and adolescents were reporting significantly less happiness than they had in the 2000s.
[...]
In addition, numerous indicators of low psychological well-being such as depression, suicidal ideation, and self-harm have increased sharply among adolescents since 2010, particularly among girls and young women (Mercado et al., 2017; Mojtabai et al., 2016; Plemmons et al., 2018; Twenge et al., 2018b, 2019a). Depression and self-harm also increased over this time period among children and adolescents in the UK (Morgan et al., 2017; NHS, 2018; Patalay & Gage, 2019). Thus, those in iGen (born after 1995) are markedly lower in psychological well-being than Millennials (born 1980-1994) were at the same age (Twenge, 2017).
This decline in happiness and mental health seems paradoxical. By most accounts, Americans should be happier now than ever. The violent crime rate is low, as is the unemployment rate. Income per capita has steadily grown over the last few decades. This is the Easterlin paradox: As the standard of living improves, so should happiness – but it has not.
Several credible explanations have been posited to explain the decline in happiness among adult Americans, including declines in social capital and social support (Sachs, 2017) and increases in obesity and substance abuse (Sachs, 2018). In this article, I suggest another, complementary explanation: that Americans are less happy due to fundamental shifts in how they spend their leisure time. I focus primarily on adolescents, since more thorough analyses on trends in time use have been performed for this age group. However, future analyses may find that similar trends also appear among adults.
The data on time use among United States adolescents come primarily from the Monitoring the Future survey of 13- to 18-year-olds (conducted since 1976 for 12th graders and since 1991 for 8th and 10th graders) and the American Freshman Survey of entering university students (conducted since 1966, with time use data since 1987). Both collect large, nationally representative samples every year (for more details, see iGen, Twenge, 2017).
The rise of digital media and the fall of everything else
Over the last decade, the amount of time adolescents spend on screen activities (especially digital media such as gaming, social media, texting, and time online) has steadily increased, accelerating after 2012, when the majority of Americans came to own smartphones (Twenge et al., 2019b). By 2017, the average 12th grader (17-18 years old) spent more than 6 hours a day of leisure time on just three digital media activities (internet, social media, and texting; see Figure 5.3). By 2018, 95% of United States adolescents had access to a smartphone, and 45% said they were online “almost constantly” (Anderson & Jiang, 2018).
During the same time period that digital media use increased, adolescents began to spend less time interacting with each other in person, including getting together with friends, socializing, and going to parties. In 2016, iGen college-bound high school seniors spent an hour less a day on face-to-face interaction than GenX adolescents did in the late 1980s (Twenge et al., 2019). Thus, the way adolescents socialize has fundamentally shifted, moving toward online activities and away from face-to-face social interaction.
Other activities that typically do not involve screens have also declined: Adolescents spent less time attending religious services (Twenge et al., 2015), less time reading books and magazines (Twenge et al., 2019b), and (perhaps most crucially) less time sleeping (Twenge et al., 2017). These declines are not due to time spent on homework, which has declined or stayed the same, or time spent on extracurricular activities, which has stayed about the same (Twenge & Park, 2019). The only activity adolescents have spent significantly more time on during the last decade is digital media. As Figure 5.4 demonstrates, the amount of time adolescents spend online increased at the same time that sleep and in-person social interaction declined, in tandem with a decline in general happiness.
[...]
Several studies have found that adolescents and young adults who spend more time on digital media are lower in well-being (e.g., Booker et al., 2015; Lin et al., 2016; Twenge & Campbell, 2018). For example, girls spending 5 or more hours a day on social media are three times more likely to be depressed than non-users (Kelly et al., 2019), and heavy internet users (vs. light users) are twice as likely to be unhappy (Twenge et al., 2018). Sleeping, face-to-face social interaction, and attending religious services – less frequent activities among iGen teens compared to previous generations – are instead linked to more happiness. Overall, activities related to smartphones and digital media are linked to less happiness, and those not involving technology are linked to more happiness. (See Figure 5.5; note that when iGen adolescents listen to music, they usually do so using their phones with earbuds).
Figure 5.5: Correlation between activities and general happiness, 8th and 10th graders, Monitoring the Future, 2013-2016 (controlled for race, gender, SES, and grade level)
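Correlations "controlled for" covariates, as in Figure 5.5, are commonly obtained by residualizing both variables on the controls and then correlating the residuals. The sketch below illustrates that approach on simulated data; the variable names and distributions are made up, and this may not be the report's exact procedure.

```python
# Sketch: correlation between an activity and happiness after controlling
# for covariates, via residualization (simulated data, illustrative names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "happiness": rng.normal(2.0, 0.5, n),        # 1-3 scale, as in the GSS item
    "hours_social_media": rng.gamma(2.0, 1.5, n),
    "ses": rng.normal(0, 1, n),
    "grade": rng.choice([8, 10], n),
})

# Residualize outcome and predictor on the controls, then correlate residuals
res_y = smf.ols("happiness ~ ses + C(grade)", data=df).fit().resid
res_x = smf.ols("hours_social_media ~ ses + C(grade)", data=df).fit().resid
print(np.corrcoef(res_x, res_y)[0, 1])  # the "controlled" correlation
```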
[...]
In short, adolescents who spend more time on electronic devices are less happy, and adolescents who spend more time on most other activities are happier. This creates the possibility that iGen adolescents are less happy because their increased time on digital media has displaced time that previous generations spent on non-screen activities linked to happiness. In other words, digital media may have an indirect effect on happiness as it displaces time that could be otherwise spent on more beneficial activities.
Digital media activities may also have a direct impact on well-being. This may occur via upward social comparison, in which people feel that their lives are inferior compared to the glamorous “highlight reels” of others’ social media pages; these feelings are linked to depression (Steers et al., 2014). Cyberbullying, another direct effect of digital media, is also a significant risk factor for depression (Daine et al., 2013; John et al., 2018). When used during face-to-face social interaction, smartphone use appears to interfere with the enjoyment usually derived from such activities; for example, friends randomly assigned to have their phones available while having dinner at a restaurant enjoyed the activity less than those who did not have their phones available (Dwyer et al., 2018), and strangers in a waiting room who had their phones available were significantly less likely to talk to or smile at other people (Kushlev et al., 2019). Thus, higher use of digital media may be linked to lower well-being via direct means or by displacing time that might have been spent on activities more beneficial for well-being.
Correlation and causation
The analyses presented thus far are correlational, so they cannot prove that digital media time causes unhappiness. Third variables may be operating, though most studies control for factors such as gender, race, age, and socioeconomic status. Reverse causation is also possible: Perhaps unhappy people spend more time on digital media rather than digital media causing unhappiness. However, several longitudinal studies following the same individuals over time have found that digital media use predicts lower well-being later (e.g., Allen & Vella, 2018; Booker et al., 2018; Kim, 2017; Kross et al., 2013; Schmiedeberg & Schroder, 2017; Shakya & Christakis, 2017). In addition, two random-assignment experiments found that people who limit or cease social media use improve their well-being. Tromholt (2017) randomly assigned more than 1,000 adults to either continue their normal use of Facebook or give it up for a week; those who gave it up reported more happiness and less depression at the end of the week. Similarly, Hunt et al. (2018) asked college students to limit their social media use to 10 minutes a day per platform and no more than 30 minutes a day total, compared to a control group that continued their normal use. Those who limited their use were less lonely and less depressed over the course of several weeks.
Both the longitudinal and experimental studies suggest that at least some of the causation runs from digital media use to well-being. In addition, the increases in teen depression once smartphones became common after 2011 cannot be explained by low well-being causing digital media use (if so, one would be forced to argue that a rise in teen depression caused greater ownership of smartphones, an argument that defies logic). [...]
In addition, the indirect effects of digital media in displacing time spent on face-to-face social interaction and sleep are not as subject to reverse causation arguments. Deprivation of social interaction (Baumeister & Leary, 1995; Hartgerink et al., 2015; Lieberman, 2014) and lack of sleep (Zhai et al., 2015) are clear risk factors for unhappiness and low well-being. [...]
Conclusion
Thus, the large amount of time adolescents spend interacting with electronic devices may have direct links to unhappiness and/or may have displaced time once spent on more beneficial activities, leading to declines in happiness. It is less certain whether adults have also begun to spend less time interacting face-to-face and less time sleeping. However, given that adults in recent years spent just as much time with digital media as adolescents do (Common Sense Media, 2016), it seems likely that their time use has shifted as well. Future research should explore this possibility.
[...]
Risking Other People's Money: Experimental Evidence on the Role of Incentives and Personality Traits
Risking Other People's Money: Experimental Evidence on the Role of Incentives and Personality Traits. Ola Andersson et al. The Scandinavian Journal of Economics, March 18 2019. https://doi.org/10.1111/sjoe.12353
Abstract: Decision makers often face incentives to increase risk‐taking on behalf of others through bonus contracts and relative performance contracts. We conduct an experimental study of risk‐taking on behalf of others using a large heterogeneous sample and find that people respond to such incentives without much apparent concern for stakeholders. Responses are heterogeneous and mitigated by personality traits. The findings suggest that lack of concern for others’ risk exposure hardly requires “financial psychopaths” in order to flourish, but is diminished by social concerns.
---
I. Introduction
Risk taking on behalf of others is common in many economic and financial decisions. Examples include fund managers investing their clients’ money and executives acting on behalf of shareholders. To motivate decision makers, the authority to take decisions on behalf of others is often coupled with powerful incentives. A basic problem with this practice is that it is typically hard to construct compensation schemes that perfectly align the incentives of decision makers with the interests of stakeholders. Indeed, in the wake of the recent financial crisis, actors in the financial sector have been routinely accused of taking increased risk on behalf of investors. The introduction of advanced financial products has expanded opportunities to hedge risks, creating further incentives for increased risk-taking. During a public hearing in the US Senate involving the CEO of a leading investment bank, it emerged from internal e-mails that the bank had taken bets against its own clients’ investments to hedge their profits. [***I do not agree with this mention here; it seems the authors support the view that these bets were wrong or immoral.***] Moreover, Andrew Haldane, director of the Bank of England, argues that the banking sector’s problems are rooted in the fact that the private risks of financial decision makers are not aligned with social risks, and that the latter are of a much greater magnitude (Haldane 2011). In addition, Rajan (2006) suggests that new developments in the finance industry—such as added layers of financial management and new complex financial products—have exacerbated the problem.
The arguments made in the previous paragraph suggest that increased risk taking is undesirable from a societal point of view. However, theoretically one may argue that increased risk taking is desirable. It is well known in the finance literature that incentive schemes may be used to increase risk taking beyond what is motivated by the decision maker’s risk preferences (Shavell 1979). The usual argument is that the owners of capital are well diversified and thereby interested in maximizing dividend payouts (risk neutrality). The decision makers, on the other hand, are not well diversified, and if risk averse they may take sub-optimal decisions if the reimbursement scheme does not compensate for the difference in risk exposure and risk preferences. Such compensation may come from incentive schemes that induce a positive risk shift (e.g., by introducing competition or bonus schemes, as in this paper). An alternative motivation is that owners of capital are risk averse, and aware of it, but would like their decisions to reflect dividend maximization. In particular, from a normative stance they agree that risk-neutral decisions are optimal, but when facing actual decisions, they cannot refrain from making decisions that depart from this principle. It may then be preferred to delegate to a decision maker who is, for example, less emotionally attached. In both cases, the increased risk taking is then optimal from the capital owner’s and society’s perspective and should be encouraged. In this paper, we do not directly address whether increased risk taking on behalf of others is welfare enhancing or not; we simply compare the level of risk taken for others under different incentive schemes. As a point of comparison, we estimate risk taking on behalf of others in a situation without distortive (or corrective) incentives.
Hence, when we refer to increased risk taking, we mean risk taking above the level decision makers take on behalf of others in such a neutral situation. Since we find that the level of risk taking on behalf of others without distorting incentives is indistinguishable from the level of risk that individuals take when making decisions on their own behalf, it is natural to view departures from this level as detrimental to the principal. However, it should be stressed that, in line with the discussion above, we cannot rule out the existence of emotional and cognitive constraints that impede decision makers from acting in accordance with their own interest. That is, a higher level of risk taking could be desirable although decision makers do not choose this for themselves. From previous literature, we know that competitive incentives increase risk taking for individuals working in the finance industry (Kirchler et al. 2018) and students (Dijk et al. 2016). Outstanding questions are whether such behaviour is present in the general population and whether it extends to situations where the decision has consequences for other people. The aim of this paper is to study such incentive schemes, with hedging opportunities or misaligned incentive contracts, in a controlled environment using a large sample of people from all walks of life. In particular, we let decision makers take decisions on behalf of two other individuals under bonus and competitive incentives, which may distort risk taking as well as open up hedging opportunities depending on the dividend correlation.
A potential counterbalancing force to increased risk taking may be that decision makers feel responsible to broader groups or have altruistic preferences, i.e., they intrinsically care about the outcome they generate on behalf of others (Andreoni and Miller 2002). Indeed, if such a concern is sufficiently strong, it may operate as a natural moderator of extrinsic incentives to take on more risk. Determining the strength of these forces is an empirical question, made especially difficult because it is likely that behavioral responses to misaligned incentives differ between individuals. Understanding this heterogeneity is important because sometimes we can choose upon whom to bestow the responsibility of making decisions on behalf of others, and we can select people according to their characteristics. To study this, we employ several measures of personality traits, both survey-based and behavioural.
Our focus here is on risk-taking behaviour when there are monetary conflicts of interest between the decision maker and investors (henceforth called receivers). In our experiment, decision makers take risky decisions on behalf of two receivers, whose payoffs may be negatively or positively correlated. When the payoffs of the receivers are perfectly negatively correlated, the decision makers can exploit the correlation to increase their own payoff without increasing their own risk exposure. On the contrary, when payoffs are perfectly positively correlated, such risk-free gains are not possible. We allow decision makers to take decisions under both regimes.
For decision makers, we incorporate two types of incentive structures common in the financial sector. First, we consider a bonus-like incentive scheme where the decision maker’s compensation is proportional to the total payoffs of the two receivers. Within our experimental setup, we show theoretically that such bonus schemes create material incentives for increased risk-taking if the receivers’ returns are negatively correlated. Second, we study winner-take-all competition between decision makers who are matched in pairs. The decision maker who generates the higher total payoff on behalf of her receivers earns a performance fee as a percentage of the total payoff to the receivers, while the other decision maker earns nothing. Competitive incentives are commonplace in financial markets and create option-like convex compensation schemes (Chevalier and Ellison 1997). We show theoretically that such compensation schemes create material incentives for increased risk taking, independent of the correlation structure of the receivers’ returns. The intuition is that increasing the risk exposure increases the chance of outperforming peers, and this mechanism trumps any concerns for individual risk-taking by the decision maker. We believe the research reported here is the first to experimentally investigate the effects of such incentives on risk-taking on behalf of others on a large scale using a random sample of the general population.
Our experimental study yields two main findings. First, ordinary people respond to powerful incentives to take risks. In particular, in line with our hypotheses, we find that bonus schemes trigger increased risk-taking on behalf of others only when receivers’ returns are negatively correlated. Hence, a bonus scheme with well-aligned risk profiles between decision makers and receivers does not distort risk-taking in our setting. Competition, on the other hand, triggers increased risk-taking irrespective of the correlation structure of receivers’ returns. For the receivers, competition between the decision makers thereby always leads to higher risk exposure. Overall, we find that individual incentives dominate over social concerns in the settings studied here. However, we also find considerable heterogeneity in how people respond to such financial incentives.
We have access to a large and heterogeneous sample along with a wealth of measures from earlier surveys and experiments. This unique data enables us to identify and investigate who chooses to expose others to risk. We find that measures of personality related to pro-social orientation are associated with risk-taking on behalf of others. Indeed, individuals with more pro-social orientations expose receivers to significantly less risk. It has been popular to decry decision makers in the financial industry as “financial psychopaths” (see, e.g., DeCovny, 2012). We are not in a position to judge whether this is an accurate description, but our observations, based on a fairly representative sample of the general population coupled with individual personality measures, allow us to conclude that lack of concern for others’ risk exposure hardly depends on “financial psychopaths” to flourish. Ordinary people tend to do it when the incentives of decision makers and receivers are not aligned. The general lesson is that policymakers should become more circumspect in designing incentives and institutions because they impact the risks that are taken on behalf of others.
Scientific evidence on the characteristics of individuals working in the financial sector is scant. Concerning risk preferences, Haigh and List (2005) find that professional traders exhibit behaviour consistent with myopic loss aversion to a greater extent than students. In a small sample (n = 21) of traders, Durand et al. (2008) find that average Big 5 scores among traders are not significantly different from the population averages. Along similar lines, using a small sample of day traders, Lo et al. (2005) were unable to relate trader performance to personality traits. Oberlechner (2004) investigates which personal characteristics are perceived as important for being successful as a foreign exchange trader. However, the characteristics emphasized are not directly comparable with the Big 5 inventory that we use to measure personality traits. The closest match to agreeableness and extraversion (which we find to be important in Table 3) is probably social skills. Interestingly, social skills were considered the least important of the 23 delineated skills. Sjöberg and Engelberg (2009) compare financial economics students with a sample from the Swedish population. They find that, compared to the overall population, financial economics students are less altruistic (as measured by interest in peace and the environment) and less risk averse.
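A stylized numeric illustration of the bonus-scheme logic described above, with made-up parameters rather than the paper's: when the two receivers' risky returns are perfectly negatively correlated, going all-risky makes the decision maker's bonus both riskless and larger than the all-safe bonus, while each receiver individually bears substantial risk; under perfect positive correlation the bonus inherits the risk, so there is no free lunch.

```python
# Stylized illustration (made-up numbers, not the paper's parameters) of why
# a bonus proportional to the receivers' total payoff rewards risk-taking
# only when the receivers' risky returns are perfectly negatively correlated.
ENDOWMENT = 100          # per receiver
SAFE_RETURN = 1.0        # safe asset: keep the endowment
RISKY_RETURN = 2.4       # risky asset: pays 2.4x with prob 0.5, else 0
BONUS_RATE = 0.10        # decision maker gets 10% of the receivers' total

# All-safe benchmark: the total, and hence the bonus, is certain
total_safe = 2 * ENDOWMENT * SAFE_RETURN                   # 200 in every state
print("safe bonus:", BONUS_RATE * total_safe)              # 20, riskless

# All-risky, perfectly NEGATIVE correlation: exactly one receiver wins in
# every state, so the total (and the bonus) is certain and higher than
# all-safe, while each receiver faces a 50% chance of getting nothing.
total_neg = ENDOWMENT * RISKY_RETURN + ENDOWMENT * 0       # 240 in every state
print("neg-corr bonus:", BONUS_RATE * total_neg)           # 24, riskless

# All-risky, perfectly POSITIVE correlation: both win or both lose, so the
# bonus is risky (48 or 0, each with prob 0.5) and a risk-averse decision
# maker gains nothing for free by loading the receivers with risk.
for outcome in (2 * ENDOWMENT * RISKY_RETURN, 0):
    print("pos-corr bonus outcome:", BONUS_RATE * outcome)  # 48 or 0
```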
Abstract: Decision makers often face incentives to increase risk‐taking on behalf of others through bonus contracts and relative performance contracts. We conduct an experimental study of risk‐taking on behalf of others using a large heterogeneous sample and find that people respond to such incentives without much apparent concern for stakeholders. Responses are heterogeneous and mitigated by personality traits. The findings suggest that lack of concern for others’ risk exposure hardly requires “financial psychopaths” in order to flourish, but is diminished by social concerns.
---
I.Introduction
Risk taking on behalf of others is common in many economic and financial decisions. Examples include fund managers investing their clients’ money and executives acting on behalf of shareholders. To motivate decision makers, the authority to take decisions on behalf of others is often coupled with powerful incentives. A basic problem with this practice is that it is typically hard to construct compensation schemes that perfectly align the incentives of decision makers with the interests of stakeholders. Indeed, in the wake of the recent financial crisis, actors in the financial sector have been routinely accused of taking increased risk on behalf of investors. The introduction of advanced financial products has expanded opportunities to hedge risks, creating further incentives for increased risk-taking. During a public hearing in the US Senate involving the CEO of a leading investment bank, it emerged from internal e-mails that the bank had taken bets against its own clients’ investments to hedge their profits. [***I do not agree with this mention here; it seems the authors support the view that these bets were wrong or immoral***] Moreover, Andrew Haldane, director of the Bank of England, argues that the banking sector’s problems are rooted in the fact that the private risks of financial decision makers are not aligned with social risks, and that the latter are of a much greater magnitude (Haldane 2011). In addition, Rajan (2006) suggests that new developments in the finance industry—such as added layers of financial management and new complex financial products—have exacerbated the problem.
The arguments made in the previous paragraph suggest that increased risk taking is undesirable from a societal point of view. However, theoretically one may argue that increased risk taking is desirable. It is well known in the finance literature that incentive schemes may be used to increase risk taking beyond what is motivated by the decision maker’s risk preferences (Shavell 1979). The argument usually made is that the owners of capital are well diversified and thereby interested in maximizing dividend payout (risk neutrality). The decision makers, on the other hand, are not well diversified, and if risk averse they may take sub-optimal decisions if the reimbursement scheme does not compensate for the difference in risk exposure and risk preferences. Such compensation may come from incentive schemes that induce a positive risk shift (e.g., by introducing competition or bonus schemes, as in this paper). An alternative motivation is that owners of capital are risk averse, and aware of it, but would like their decisions to reflect dividend maximization. In particular, from a normative stance they agree that risk-neutral decisions are optimal, but when facing actual decisions, they cannot refrain from making decisions that depart from this principle. It may then be preferred to delegate to a decision maker who is, for example, less emotionally attached. In both cases, the increased risk taking is then optimal from the capital owner’s and society’s perspective and should be encouraged. In this paper, we do not directly address whether increased risk taking on behalf of others is welfare enhancing or not; we simply compare the level of risk taken for others under different incentive schemes. As a point of comparison, we estimate risk taking on behalf of others in a situation without distortive (or corrective) incentives. Hence, when we refer to increased risk taking, we mean risk taking above the level decision makers take on behalf of others in such a neutral situation. Since we find that the level of risk taking on behalf of others without distorting incentives is indistinguishable from the level of risk that individuals take when making decisions on their own behalf, it is natural to view departures from this level as detrimental to the principal. However, it should be stressed that, in line with the discussion above, we cannot rule out the existence of emotional and cognitive constraints that prevent decision makers from acting in accordance with their own interest. That is, a higher level of risk taking could be desirable although decision makers do not choose this for themselves.
From previous literature, we know that competitive incentives increase risk taking for individuals working in the finance industry (Kirchler et al. 2018) and students (Dijk et al. 2016). Outstanding questions are whether such behaviour is present in the general population and whether it extends to situations where the decision has consequences for other people. The aim of this paper is to study such incentive schemes, with hedging opportunities or misaligned incentive contracts, in a controlled environment using a large sample of people from all walks of life. In particular, we let decision makers take decisions on behalf of two other individuals under bonus and competitive incentives, which may distort risk taking as well as open up hedging opportunities depending on the dividend correlation.
A potential counterbalancing force to increased risk taking may be that decision makers feel responsible to broader groups or have altruistic preferences, i.e., they intrinsically care about the outcome they generate on behalf of others (Andreoni and Miller 2002). Indeed, if such a concern is sufficiently strong, it may operate as a natural moderator of extrinsic incentives to take on more risk. Determining the strength of these forces is an empirical question, made especially difficult because it is likely that behavioral responses to misaligned incentives differ between individuals. Understanding this heterogeneity is important because sometimes we can choose upon whom to bestow the responsibility of making decisions on behalf of others, and we can select people according to their characteristics. To study this, we employ several measures of personality traits, both survey-based and behavioural.
Our focus here is on risk-taking behaviour when there are monetary conflicts of interest between the decision maker and investors (henceforth called receivers). In our experiment, decision makers take risky decisions on behalf of two receivers, whose payoffs may be negatively or positively correlated. When the payoffs of the receivers are perfectly negatively correlated, the decision makers can exploit the correlation to increase their own payoff without increasing their own risk exposure. By contrast, when payoffs are perfectly positively correlated, such risk-free gains are not possible. We allow decision makers to take decisions under both regimes.
For decision makers, we incorporate two types of incentive structures common in the financial sector. First, we consider a bonus-like incentive scheme where the decision maker’s compensation is proportional to the total payoffs of the two receivers. Within our experimental setup, we show theoretically that such bonus schemes create material incentives for increased risk-taking if the receivers’ returns are negatively correlated. Second, we study winner-take-all competition between decision makers who are matched in pairs. The decision maker who generates the higher total payoff on behalf of her receivers earns a performance fee as a percentage of the total payoff to the receivers, while the other decision maker earns nothing. Competitive incentives are commonplace in financial markets and create option-like convex compensation schemes (Chevalier and Ellison 1997). We show theoretically that such compensation schemes create material incentives for increased risk taking, independent of the correlation structure of the receivers’ returns. The intuition is that increasing the risk exposure increases the chance of outperforming peers, and this mechanism trumps any concerns for individual risk-taking by the decision maker. We believe the research reported here is the first to experimentally investigate the effects of such incentives on risk-taking on behalf of others on a large scale using a random sample of the general population.
Our experimental study yields two main findings. First, ordinary people respond to powerful incentives to take risks. In particular, in line with our hypotheses, we find that bonus schemes trigger increased risk-taking on behalf of others only when receivers’ returns are negatively correlated. Hence, a bonus scheme with well-aligned risk profiles between decision makers and receivers does not distort risk-taking in our setting. Competition, on the other hand, triggers increased risk-taking irrespective of the correlation structure of receivers’ returns. For the receivers, competition between the decision makers thereby always leads to higher risk exposure. Overall, we find that individual incentives dominate over social concerns in the settings studied here.
However, we also find considerable heterogeneity in how people respond to such financial incentives. We have access to a large and heterogeneous sample along with a wealth of measures from earlier surveys and experiments. These unique data enable us to identify and investigate who chooses to expose others to risk. We find that measures of personality related to pro-social orientation are associated with risk-taking on behalf of others. Indeed, individuals with more pro-social orientations expose receivers to significantly less risk. It has been popular to decry decision makers in the financial industry as “financial psychopaths” (see, e.g., DeCovny, 2012). We are not in a position to judge whether this is an accurate description, but our observations, based on a fairly representative sample of the general population coupled with individual personality measures, allow us to conclude that lack of concern for others’ risk exposure hardly depends on “financial psychopaths” to flourish. Ordinary people tend to do it when the incentives of decision makers and receivers are not aligned. The general lesson is that policymakers should become more circumspect in designing incentives and institutions, because they impact the risks that are taken on behalf of others.
Scientific evidence on the characteristics of individuals working in the financial sector is scant. Concerning risk preferences, Haigh and List (2005) find that professional traders exhibit behaviour consistent with myopic loss aversion to a greater extent than students. In a small sample (n = 21) of traders, Durand et al. (2008) find that average Big 5 scores among traders are not significantly different from the population averages. Along similar lines, using a small sample of day traders, Lo et al. (2005) were unable to relate trader performance to personality traits. Oberlechner (2004) investigates which personal characteristics are perceived as important for being successful as a foreign exchange trader. However, the characteristics emphasized are not directly comparable with the Big 5 inventory that we use to measure personality traits. The closest match to agreeableness and extraversion (which we find to be important in Table 3) is probably social skills. Interestingly, social skills were considered the least important of the 23 delineated skills. Sjöberg and Engelberg (2009) compare financial economics students with a sample from the Swedish population. They find that, compared to the overall population, financial economics students are less altruistic (as measured by interest in peace and the environment) and less risk averse.
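To make the two incentive mechanisms concrete, here is a minimal Monte Carlo sketch in Python. It is my illustration, not the authors' design: the lotteries, the 10% fee, and the tie-splitting rule are hypothetical stand-ins for the experimental parameters. It shows why a bonus on the receivers' total payoff rewards risk-taking only under negatively correlated payoffs, while winner-take-all competition rewards risk-taking regardless of correlation.

```python
import random

# Hypothetical lotteries (illustrative, not the paper's parameters):
# "safe" pays 5 for sure; "gamble" pays 0 or 10 (same mean as safe);
# "risky" pays 0 or 12 (higher mean, higher variance).
MAX_PAYOFF = {"safe": 10, "gamble": 10, "risky": 12}

def draw(lottery):
    if lottery == "safe":
        return 5
    return random.choice([0, MAX_PAYOFF[lottery]])

def bonus(choice, negatively_correlated, rate=0.1, n=100_000):
    """Bonus scheme: the decision maker earns rate * (sum of the two
    receivers' payoffs). With perfectly negatively correlated payoffs,
    the second receiver gets the mirror outcome of the first, so the
    total -- and hence the bonus -- is hedged."""
    fees = []
    for _ in range(n):
        a = draw(choice)
        b = MAX_PAYOFF[choice] - a if negatively_correlated else a
        fees.append(rate * (a + b))
    mean = sum(fees) / n
    var = sum((f - mean) ** 2 for f in fees) / n
    return round(mean, 2), round(var, 2)

def competition(my_choice, rival_choice, rate=0.1, n=100_000):
    """Winner-take-all: the decision maker whose two receivers earn the
    higher total gets rate * total; the loser gets nothing; ties split."""
    fee = 0.0
    for _ in range(n):
        mine = draw(my_choice) + draw(my_choice)
        theirs = draw(rival_choice) + draw(rival_choice)
        if mine > theirs:
            fee += rate * mine
        elif mine == theirs:
            fee += 0.5 * rate * mine
    return round(fee / n, 2)

# Bonus + negative correlation: the risky choice raises the decision
# maker's fee (1.2 vs 1.0) at zero variance -- the receivers alone
# absorb the added risk:
print(bonus("safe", True), bonus("risky", True))
# Competition: even the equal-mean gamble beats playing safe against a
# safe rival (~0.75 vs 0.50), because the convex, option-like payoff
# rewards spreading the outcome distribution, whatever the correlation:
print(competition("safe", "safe"), competition("gamble", "safe"))
```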
Sexual Selection, Agonistic Signaling: Presence of beard increased the speed & accuracy with which participants recognized displays of anger but not happiness; & increased the rated prosociality of happy faces
Sexual Selection, Agonistic Signaling, and the Effect of Beards on Recognition of Men’s Anger Displays. Belinda M. Craig, Nicole L. Nelson, Barnaby J. W. Dixson. Psychological Science, March 25, 2019. https://doi.org/10.1177/0956797619834876
Abstract: The beard is arguably one of the most obvious signals of masculinity in humans. Almost 150 years ago, Darwin suggested that beards evolved to communicate formidability to other males, but no studies have investigated whether beards enhance recognition of threatening expressions, such as anger. We found that the presence of a beard increased the speed and accuracy with which participants recognized displays of anger but not happiness (Experiment 1, N = 219). This effect was not due to negative evaluations shared by beardedness and anger or to negative stereotypes associated with beardedness, as beards did not facilitate recognition of another negative expression, sadness (Experiment 2, N = 90), and beards increased the rated prosociality of happy faces in addition to the rated masculinity and aggressiveness of angry faces (Experiment 3, N = 445). A computer-based emotion classifier reproduced the influence of beards on emotion recognition (Experiment 4). The results suggest that beards may alter perceived facial structure, facilitating rapid judgments of anger in ways that conform to evolutionary theory.
Keywords: facial hair, emotion recognition, face processing, intrasexual selection, open data
---
Is this perceived intuitively by soldiers? Many more soldiers than civilians wear a beard in countries like the US, where for decades the overwhelming majority of men didn't sport one.
---
Excerpts: Agonistic interactions between males during competition over resources, status, and mating opportunities occur across the mammalian class and have shaped the evolution of weaponry and threat displays (Darwin, 1871; Emlen, 2008; Kokko, Jennions, & Brooks, 2006). In humans, these displays are manifest in a variety of bodily and facial dimorphisms, of which beardedness is one of the most visually salient (B. J. Dixson & Vasey, 2012; B. J. W. Dixson, Lee, Sherlock, & Talamas, 2017). Beards provide an accurate indication of male sexual maturity, and bearded faces are rated as more masculine, dominant, and aggressive than clean-shaven faces (B. J. Dixson & Brooks, 2013; Muscarella & Cunningham, 1996; Neave & Shields, 2008). These effects stem from the fact that beards grow around the jaw and mouth, and thus emphasize jaw size and masculine facial structure (B. J. W. Dixson et al., 2017; B. J. W. Dixson, Sulikowski, Gouda-Vossos, Rantala, & Brooks, 2016; Sherlock, Tegg, Sulikowski, & Dixson, 2017). Beardedness has a greater influence on ratings of masculinity and dominance than does craniofacial shape or jaw size (B. J. W. Dixson et al., 2017; Sherlock et al., 2017).
The enhancing effects of facial hair on judgments of men’s facial masculinity, dominance, and aggressiveness by framing components of masculine facial shape have been measured using stimuli depicting neutral facial expressions. However, faces carry multiple sources of social information, including emotional facial expressions that can convey internal states and intentions. Facial expressions such as displays of anger can be enacted in agonistic interactions to signal interpersonal threat (Blair, 2003; Schmidt & Cohn, 2001; Sell, Cosmides, & Tooby, 2014; Tay, 2015). It has been hypothesized that beardedness facilitates recognition of threatening displays, including displays of anger, by enhancing masculine facial features related to dominance (particularly jaw size; Blanchard, 2009; Goodhart, 1960; Guthrie, 1970), but to date, there have been no behavioral studies detailing whether beards influence recognition of angry expressions.
Although the influence of facial hair on recognition of expressions of anger has not been directly tested, previous findings suggest that it is plausible that beards facilitate the recognition of anger. Previous research has demonstrated that people are faster to recognize anger when it is displayed on male faces than when it is displayed on female faces (Becker, Kenrick, Neuberg, Blackwell, & Smith, 2007). This influence of masculinity on recognition of anger has been partly attributed to overlap between structural cues of anger and masculinity. It has been suggested that angry facial expressions emphasize masculine facial structures, such as the prominence of the jaw (Becker et al., 2007; Hess, Adams, Grammer, & Kleck, 2009; Sacco & Hugenberg, 2009). Facial hair grows around the areas involved in expressing a range of emotions, including anger, and also enhances masculine craniofacial structure and the prominence of the jaw (B. J. W. Dixson et al., 2017; Sherlock et al., 2017). These observations suggest that the presence of a beard may facilitate recognition of angry expressions.
To test whether beards amplify displays of anger, we presented participants with photographs of standardized expressions of anger and happiness posed by the same men when bearded and clean-shaven. Participants categorized the emotion displayed in each face, and we examined how facial hair affected their speed and accuracy. If participants were faster to recognize anger but not happiness on bearded than on clean-shaven faces, this would indicate that beardedness facilitates recognition of a threatening emotional expression specifically and not emotional expressions more generally. After we found such a specific effect, we explored possible underlying mechanisms in two behavioral experiments and a final experiment with a computer-based emotion classifier.
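As a sketch of the analysis logic only (the reaction times below are fabricated, and the authors' analysis may differ in detail), the facilitation claim amounts to a beard × emotion interaction, which for a fully within-subjects design reduces to a paired contrast:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean correct reaction times (ms) for the
# beard (bearded/clean-shaven) x emotion (anger/happiness) design.
# All values are invented for illustration; N matches Experiment 1.
rng = np.random.default_rng(0)
n = 219
anger_bearded = rng.normal(640, 60, n)
anger_shaven = rng.normal(670, 60, n)
happy_bearded = rng.normal(600, 60, n)
happy_shaven = rng.normal(600, 60, n)

# Facilitation specific to anger means the bearded speed advantage for
# anger exceeds any bearded advantage for happiness: a one-sample
# t-test on the difference of differences.
interaction = (anger_shaven - anger_bearded) - (happy_shaven - happy_bearded)
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"interaction: t({n - 1}) = {t:.2f}, p = {p:.2g}")
```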
Monday, March 25, 2019
Is the Putative Mirror Neuron System Associated with Empathy? Only weakly... Too much hype, as we already knew
Bekkali, Soukayna, George J. Youssef, Peter H. Donaldson, Natalia Albein-Urios, Christian Hyde, and Peter G. Enticott. 2019. “Is the Putative Mirror Neuron System Associated with Empathy? A Systematic Review and Meta-analysis.” PsyArXiv. March 20. doi:10.31234/osf.io/6bu4p
Abstract: Theoretical perspectives suggest that the mirror neuron system (MNS) is an important neurobiological contributor to empathy, but empirical support is mixed. Here, we adopt a summary model for empathy, consisting of motor, emotional, and cognitive components of empathy. This review provides an overview of existing empirical studies investigating the relationship between putative MNS activity and empathy in healthy populations. 52 studies were identified that investigated the association between the MNS and at least one domain of empathy, representing data from 1044 participants. Our results suggest that emotional and cognitive empathy are moderately correlated with MNS activity, while motor empathy showed no relationship. Results varied across techniques used to acquire MNS activity (TMS, EEG, and fMRI). Overall, results provide some evidence for a relationship between the MNS and empathy. Our findings also highlight methodological variability in study design as an important factor in understanding this relationship. We discuss limitations regarding these methodological variations and important implications for clinical and community translations, as well as suggestions for future research.
Vulgarization: There Is Only Weak Evidence That Mirror Neurons Underlie Human Empathy – New Review And Meta-Analysis. Christian Jarrett. The British Psychological Society Research Digest, March 25, 2019. https://digest.bps.org.uk/2019/03/25/there-is-only-weak-evidence-that-mirror-neurons-underlie-human-empathy-new-review-and-meta-analysis/
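For readers unfamiliar with how correlations are pooled in such a meta-analysis, here is a minimal sketch of the standard Fisher r-to-z approach. This is my simplification (a fixed-effect pool); the authors' meta-analysis may well use a random-effects model with further corrections.

```python
import math

def pooled_r(studies):
    """Pool correlations across studies via Fisher's r-to-z transform,
    weighting each study by n - 3 (the inverse variance of z), then
    back-transform to r. A fixed-effect simplification."""
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)   # Fisher r-to-z
        w = n - 3
        num += w * z
        den += w
    return math.tanh(num / den)

# Hypothetical (r, n) pairs standing in for individual MNS-empathy studies:
print(round(pooled_r([(0.30, 40), (0.15, 25), (0.45, 18)]), 3))
```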
People have a tendency to ‘shoot the messenger,’ deeming innocent bearers of bad news unlikeable; it is unique to the (innocent) messenger, & not mere bystanders; & distinct from merely receiving information that one disagrees with
John, Leslie, Hayley Blunden, and Heidi Liu. "Shooting the Messenger." Journal of Experimental Psychology: General (forthcoming). https://www.hbs.edu/faculty/Pages/item.aspx?num=55611
Abstract: Eleven experiments provide evidence that people have a tendency to ‘shoot the messenger,’ deeming innocent bearers of bad news unlikeable. In a pre-registered lab experiment, participants rated messengers who delivered bad news from a random drawing as relatively unlikeable (Study 1). A second set of studies points to the specificity of the effect: Study 2A shows that it is unique to the (innocent) messenger, and not mere bystanders. Study 2B shows that it is distinct from merely receiving information that one disagrees with. We suggest that people’s tendency to deem bearers of bad news as unlikeable stems in part from their desire to make sense of chance processes. Consistent with this account, receiving bad news activates the desire to sense-make (Study 3A), and in turn, activating this desire enhances the tendency to dislike bearers of bad news (Study 3B). Next, stemming from the idea that unexpected outcomes heighten the desire to sense-make, Study 4 shows that when bad news is unexpected, messenger dislike is pronounced. Finally, consistent with the notion that people fulfill the desire to sense-make by attributing agency to entities adjacent to chance events, messenger dislike is correlated with the belief that the messenger had malevolent motives (Studies 5A, 5B, & 5C). Studies 6A & 6B go further, manipulating messenger motives independently from news valence to suggest its causal role in our process account: the tendency to dislike bearers of bad news is mitigated when recipients are made aware of the benevolence of the messenger’s motives.
Keywords: judgment, communication, sense-making, attribution, disclosure
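A sketch of the Studies 6A & 6B design logic, with invented numbers rather than the authors' data: mitigation shows up as a smaller "messenger penalty" (extra dislike for bad versus good news) when the messenger's benevolent motive is made known.

```python
import numpy as np

# Hypothetical valence (bad/good) x motive (unknown/benevolent)
# between-subjects data; dislike ratings on a 1-7 scale. Values invented.
rng = np.random.default_rng(1)
cells = {}
for valence in ("bad", "good"):
    for motive in ("unknown", "benevolent"):
        base = 4.0
        if valence == "bad":
            base += 1.0 if motive == "unknown" else 0.3
        cells[(valence, motive)] = rng.normal(base, 1.0, 50)

means = {k: v.mean() for k, v in cells.items()}
# Messenger penalty = extra dislike for bad vs good news, per motive:
penalty_unknown = means[("bad", "unknown")] - means[("good", "unknown")]
penalty_benevolent = means[("bad", "benevolent")] - means[("good", "benevolent")]
# Mitigation appears as a smaller penalty when motives are known benevolent:
print(round(penalty_unknown, 2), round(penalty_benevolent, 2))
```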
Surprisingly many highly educated individuals are prone to attribute a stage magician's feats to genuine psychic powers, despite knowing of trickery
From 2018: Magic Performances – When Explained in Psychic Terms by University Students. Lise Lesaffre, Gustav Kuhn, Ahmad Abu-Akel, Déborah Rochat and Christine Mohr. Front. Psychol., Nov 6 2018. https://doi.org/10.3389/fpsyg.2018.02129
Abstract: Paranormal beliefs (PBs), such as the belief in the soul, or in extrasensory perception, are common in the general population. While there is information regarding what these beliefs correlate with (e.g., cognitive biases, personality styles), there is little information regarding the causal direction between these beliefs and their correlates. To investigate the formation of beliefs, we use an experimental design, in which PBs and belief-associated cognitive biases are assessed before and after a central event: a magic performance (see also Mohr et al., 2018). In the current paper, we report a series of studies investigating the “paranormal potential” of magic performances (Study 1, N = 49; Study 2, N = 89; Study 3, N = 123). We investigated (i) which magic performances resulted in paranormal explanations, and (ii) whether PBs and a belief-associated cognitive bias (i.e., repetition avoidance) became enhanced after the performance. Repetition avoidance was assessed using a random number generation task. After the performance, participants rated to what extent the magic performance could be explained in psychic (paranormal), conjuring, or religious terms. We found that conjuring explanations were negatively associated with religious and psychic explanations, whereas religious and psychic explanations were positively associated. Enhanced repetition avoidance correlated with higher PBs ahead of the performance. We also observed a significant increase in psychic explanations and a drop in conjuring explanations when performances involved powerful psychic routines (e.g., the performer contacted the dead). While the experimentally induced enhancement of psychic explanations is promising, future studies should account for potential variables that might explain absent framing and before–after effects (e.g., emotion, attention). Such effects are essential to understand the formation and manipulation of belief.
Vulgarization: Experiencing the impossible https://thepsychologist.bps.org.uk/volume-32/april-2019/experiencing-impossible
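The repetition-avoidance bias measured with the random number generation task can be illustrated with a small sketch (a hypothetical scoring rule; the study's exact index may differ):

```python
import random

def repetition_rate(seq):
    """Fraction of adjacent pairs that are immediate repetitions."""
    pairs = list(zip(seq, seq[1:]))
    return sum(a == b for a, b in pairs) / len(pairs)

# For truly random digits 0-9 the expected rate is 0.10; participants
# who avoid repetitions score well below that baseline.
truly_random = [random.randrange(10) for _ in range(10_000)]
human_like = [3, 7, 1, 9, 2, 8, 4, 6, 0, 5, 3, 7]  # hypothetical response
print(round(repetition_rate(truly_random), 3))  # ~0.100
print(round(repetition_rate(human_like), 3))    # 0.0: strong avoidance
```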
54–58% of Google search snippets amplify partisanship, likely because of journalistic practice: it puts terms & quotes from partisan politicians in the introduction (& meta-data) of articles (text favored by the summarization algorithm)
Auditing the Partisanship of Google Search Snippets. Desheng Hu et al. To be presented at The International World Wide Web Conference 2019. https://cbw.sh/static/pdf/hu-www19.pdf
Abstract: The text snippets presented in web search results provide users with a slice of page content that they can quickly scan to help inform their click decisions. However, little is known about how these snippets are generated or how they relate to a user’s search query. Motivated by the growing body of evidence suggesting that search engine rankings can influence undecided voters, we conducted an algorithm audit of the political partisanship of Google Search snippets relative to the webpages they are extracted from. To accomplish this, we constructed a lexicon of partisan cues to measure partisanship and constructed a set of left- and right-leaning search queries. Then, we collected a large dataset of Search Engine Results Pages (SERPs) by running our partisan queries and their autocomplete suggestions on Google Search. After using our lexicon to score the machine-coded partisanship of snippets and webpages, we found that Google Search’s snippets generally amplify partisanship, and that this effect is robust across different types of webpages, query topics, and partisan (left- and right-leaning) queries.
---
• We present the first large-scale analysis of machine-coded partisanship in Google Search snippets, covering 4,570 political queries and their autocomplete suggestions.
• We audit the behavior of Google Search’s document summarization algorithm, and find that snippets tend to be drawn from text that is near the beginning of webpages. We further observe that the algorithm leverages visible text and textual meta-data (such as alt-text on images) from webpages.
• Overall, we find that 54–58% of snippets amplify partisanship, depending on the fraction of our lexicon that is used for scoring, i.e., the snippets contain stronger partisan cues on average than the corresponding webpage they were synthesized from. This finding remains consistent across SERPs from left- and right-leaning queries and pages with and without structured meta-data that may influence Google Search’s document summarization algorithm [28, 29].
• Surprisingly, we find that 19–24% of snippets have partisanship opposite to that of the corresponding webpage.
• We identify 31 websites where Google Search consistently produces snippets that differ from the underlying webpages in terms of machine-coded partisanship, with high statistical significance. These websites include prominent news and social media services.
We believe that it is highly unlikely that Google has intentionally engineered their document summarization algorithm to amplify partisan cues. Instead, a more likely explanation for our findings is that journalistic practice encourages the use of partisan terms and quotes from partisan politicians in the introduction (and meta-data) of articles, which are also the types of text favored by the summarization algorithm.
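A minimal sketch of the lexicon-based scoring and the amplification test described above (the cue words and weights below are invented placeholders; the paper's actual lexicon is far larger):

```python
# Toy lexicon: positive weights mark right-leaning cues, negative mark
# left-leaning ones. These entries are invented placeholders.
LEXICON = {"death tax": 1.0, "illegal aliens": 1.0,
           "estate tax": -1.0, "undocumented immigrants": -1.0}

def partisanship(text, lexicon=LEXICON):
    """Mean signed weight of the lexicon cues present in the text
    (0.0 when no cue matches)."""
    hits = [w for cue, w in lexicon.items() if cue in text.lower()]
    return sum(hits) / len(hits) if hits else 0.0

def amplifies(snippet, page):
    """True when the snippet's score is more extreme than the page's and
    points the same way -- an approximation of the paper's notion of
    snippet amplification."""
    s, p = partisanship(snippet), partisanship(page)
    return abs(s) > abs(p) and s * p >= 0

page = "The estate tax, which critics call the death tax, was debated."
snippet = "Critics call the death tax unfair."
print(partisanship(page), partisanship(snippet), amplifies(snippet, page))
```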
From 2017: Factors involving extramarital affairs among married adults in Bangladesh
From 2017: Factors involving extramarital affairs among married adults in Bangladesh. Yasmin Jaha et al. International Journal of Community Medicine and Public Health, May 2017. http://dx.doi.org/10.18203/2394-6040.ijcmph20171506
Abstract: Extramarital affairs have become a common occurrence in modern society. Many studies have pointed to the lack of variety in a relationship as a contributing factor in divorce and extramarital affairs. This study explores the reasons behind the development of extramarital affairs in married adults. It is based upon information gathered by scanning newspapers, journals, books, and the Internet, and all related papers published from 1980 to 2016. However, there are limited data on this specific topic. After reviewing the literature, we found some common factors that might be responsible for extramarital affairs. Based on these, we make some recommendations for mitigating this devastating situation. From this study, we expect that stakeholders and policy makers can develop new thoughts and strategies to curb this overwhelming condition.
Keywords: Extramarital affair, Married adults, Bangladesh
---
Check also Have Humans Evolved to Be Cheaters? Is it something general? Have other monogamous species done the same? https://www.bipartisanalliance.com/2019/03/have-humans-evolved-to-be-cheaters-is.html
---
Excerpts:
After reviewing the literature, we think the following factors can account for extramarital relationships in Bangladesh as well as in other developing countries.
Marriage
We consider marriage the state of being united to a person of the opposite sex as husband or wife in a consensual and contractual relationship recognized by law.57 A formally arranged marriage goes through various experiences and thus requires careful handling of every situation, with patience and good understanding with the people involved, which is often hampered.58 Moreover, people who marry in their early 20s would most likely have achieved some level of stability and social standing by their mid-30s. An interesting phenomenon is the deprivation of teenage relationship experience.59 This perceived feeling can make an individual careless enough to fall into an extramarital affair.58
Sex
Sex is perhaps one of the most important reasons; it is an amalgamation of love and passion between two people. After a certain period, sex can become monotonous, and if there is a lack of desire, passion, and romance in the relationship, it may drive a person to seek these with someone else. Individuals with less self-control and deeper dissatisfaction of sexual desire commonly cite intense sexual addiction to the opposite sex as a reason for seeking an extramarital relationship.58
Married for the wrong reasons
Many people enter marriage for the wrong reasons, such as pressure from family and society. After a point, many people agree to marriage without even getting to know each other. Giving high priority to always searching for the best for oneself is another potential factor in causing extramarital relationships. Regardless of the justification, even after being married, this tendency often creates an emptiness; to fill it, the individual is quickly attracted to another person who meets their expectations, and ultimately the previous relationship ends.58
Becoming parents
New parents experience rapid but practical changes in terms of responsibilities, priorities, time sharing, etc. The new environment requires tremendous effort from the mother. Hence, the male partner may perceive a feeling of deprivation from receiving less attention and, if he receives little consideration from the woman, can be driven to involvement in an extramarital affair.58
Physical dissatisfaction
This is the most penetrating issue in married persons’ involvement in extramarital affairs, since sexual satisfaction remains a key factor in sustaining a physical relationship. Lack of sexual satisfaction ultimately creates an expectation gap, and married women were found to be highly sensitive to this issue in several previous studies. The same can happen with men, in terms of their desire to resolve sexual discontent by starting a new affair and procuring a more responsive partner.58
Career advancement
Discord over career and financial establishment is another unfortunate but real reason. It hampers the economic solvency of a married couple and intensifies mismanagement of family finances, triggering continuous wrangling. The aftermath of this volatile situation does not help mutual cognitive understanding, as it constantly pressures the couple’s financial stability. The long-term impact of this issue creates an acute scenario with a severe conflict between intrinsic remorse and loyalty.58
Need for excitement
Human instinct is immeasurable and diversified, and it is impossible to fathom and predict the nature and depth of any desire. One result is the outbreak of an extramarital affair through the search for opportunities to relieve boredom and exhaustion, as well as for recreation.58
No common interests
There is no alternative to negotiation and respect for each other’s choices in sustaining peaceful coexistence while pursuing the responsibilities of conjugal life. Failure to recognize this fact and act accordingly distorts mutual interest and sharply cuts down the personal time spent together, which can lead to the development of an extramarital affair after a certain period of time.58
Self-esteem problems
Perceived self-esteem and cognition of one’s own behavioral aptitude require balance to maintain a sound level of personal judgment. The apparent deprivation of being less or least loved by one’s own husband or wife is a powerful phenomenon and can easily be driven by the feeling of deprivation of companionship and love from the partner. Psychological deprivation is not an abstract phenomenon; therefore, it is not possible to improve one’s mental condition, and overcome the threat of involvement in an extramarital affair, without the individual’s positive conscience.60
Social network
The anonymity and easy availability of online dating now result in many more spouses looking for love outside marriage.61 According to a recent survey of the American Academy of Matrimonial Lawyers (AAML), over the last five years 81% of the nation’s top divorce attorneys have observed that the easy availability of individual identities and contacts on social media is the key promoter behind the rapid increase in the divorce rate. Facebook remains the undisputed pioneer, contributing an awful 66% of online divorce evidence, according to the original source. Like Facebook, there are many social websites where married persons can express themselves and seek an extramarital relation.62
Other issues
Other factors instigating an individual’s tendency to search for compassionate love are as follows: i) the opportunity to live together even after being committed in a marriage; ii) midlife crisis, a period of psychological stress occurring in middle age (this shapes individual thought and can be triggered by a physical, occupational, or domestic event); iii) concerns related to the natural deterioration of physical condition (this can cause significant and sometimes unwanted accidental changes in life).60
Benefits of zebra stripes: Behaviour of tabanid flies around zebras and horses
Benefits of zebra stripes: Behaviour of tabanid flies around zebras and horses. Tim Caro et al. PLOS ONE, February 20, 2019. https://doi.org/10.1371/journal.pone.0210831
Abstract: Averting attack by biting flies is increasingly regarded as the evolutionary driver of zebra stripes, although the precise mechanism by which stripes ameliorate attack by ectoparasites is unknown. We examined the behaviour of tabanids (horse flies) in the vicinity of captive plains zebras and uniformly coloured domestic horses living on a horse farm in Britain. Observations showed that fewer tabanids landed on zebras than on horses per unit time, although rates of tabanid circling around or briefly touching zebra and horse pelage did not differ. In an experiment in which horses sequentially wore cloth coats of different colours, those wearing a striped pattern suffered far lower rates of tabanid touching and landing on coats than the same horses wearing black or white, yet there were no differences in attack rates to their naked heads. In separate, detailed video analyses, tabanids approached zebras faster and failed to decelerate before contacting zebras, and proportionately more tabanids simply touched rather than landed on zebra pelage in comparison to horses. Taken together, these findings indicate that, up close, striped surfaces prevented flies from making a controlled landing but did not influence tabanid behaviour at a distance. To counteract flies, zebras swished their tails and ran away from fly nuisance whereas horses showed higher rates of skin twitching. As a consequence of zebras’ striping, very few tabanids successfully landed on zebras and, as a result of zebras’ changeable behaviour, few stayed a long time, or probed for blood.
---
Introduction
The function of zebra stripes has been a source of scientific interest for over 150 years generating many hypotheses including camouflage, confusion of predators, signaling to conspecifics, thermoregulation and avoidance of biting flies [1] but contemporary data show that only one stands up to careful scrutiny [2–4]. Briefly, regarding camouflage, zebra stripes are difficult for lion Panthera leo and spotted hyaena Crocuta crocuta predators to resolve at any great distance making crypsis against mammalian predators an unlikely benefit [5]. Regarding confusion of predators, zebras do not have the sort of striping pattern that aids in confusion [6] and African lions take zebra prey disproportionately more than expected suggesting an absence of confusion effect [7]. Regarding social benefits, rates of grooming and patterns of association are no greater in striped equids than in unstriped equids [3]. Finally, there are no thermoregulatory benefits to striping based on controlled experiments using water drums [4], infrared photography of free-living herbivores [3] and logical argument in regards to flank striping [8].

Instead, there is an emerging consensus among biologists that the primary function of contrasting black and white stripes on the three species of zebras is to thwart attack from tabanids, and possibly glossinids, stomoxys and other biting muscoids based on laboratory and field experiments with striped materials [3, 9–12] and on comparative evidence [13]. In Africa where zebras live, tabanids carry diseases fatal to zebras including trypanosomiasis, equine infectious anemia, African horse sickness and equine influenza [14] and zebras are particularly susceptible to infection because their thin pelage allows biting flies to probe successfully with their mouthparts [13]. The exact mechanism by which stripes prevent flies from obtaining a blood meal is less well understood, however. Flies may fail to detect a zebra from a distance, or from close up, either as a result of misinterpreting optic flow as they approach [15], by interfering with cues that promote a landing response [9, 16], or even by disrupting the polarization signature of their host [12]. Unfortunately, detailed observations of biting flies in the vicinity of live zebras have so far been unavailable but such information would help elucidate the stage at which stripes exert an effect on host seeking by biting flies.
In this study we compare several measures of behaviour of wild tabanid horse flies around captive zebras and domestic horses living in the same habitat using direct observations and video footage. We also compare the behaviour of tabanids around horses wearing differently coloured cloth coats, report on the duration of time that tabanids spend on equids with different coloured pelage, and compare the behaviour of horses and zebras in response to biting fly annoyance.
Conclusion
In summary, multiple lines of evidence indicate that stripes prevented effective landing by tabanids once they were in the vicinity of the host but did not prevent them from approaching from a distance. In addition, zebras appear to use behavioural means to prevent tabanids spending time on them, through constant tail swishing and even running away. As a consequence of both of these morphological and behavioural defenses, very few tabanids are able to probe for a zebra blood meal, as evidenced by our data.

Three additional but more speculative points may be made in closing. First, we found that rates at which tabanids circled and touched a single grey horse were lower than for zebras although landing rates did not differ significantly (Table Ba-c in S1 File). This was in contrast to comparisons between zebras and horses of other colours where circling and touching rates did not differ but where zebras enjoyed fewer landings per unit time. More work on grey pelage in relation to fly annoyance is clearly needed because stripes will appear grey from a distance to flies (Text A in S1 File).
Second, we found that there was no difference in rates at which tabanids moved across the surface of striped or uniform coats. Since black and white stripes give off different heat loads during the day [30–32], they could possibly confuse a tabanid if it tried to locate a capillary by thermal sensitivity (although we have no evidence that they do this). If stripes did prevent a tabanid from locating a capillary we might expect greater rates of searching zebra pelage but this was not the case.
Third, extremely high rates of tail flicking were seen in the zebra/wild ass hybrid at Dundry (Text B in S1 File), similar to those observed in African wild asses at the Tierpark Zoo (table 5.3 in [3]), suggesting that tail flicking may in part be a species-specific trait. Striping is likewise a species-specific trait under partial genetic control (as witnessed by mother-offspring striping similarities, for example; TC pers obs). Therefore both morphological and behavioural anti-parasite defense strategies appear to be under strong selection in zebras.
People are often characterized as poor savers; an attentional asymmetry away from money-saved relative to money-earned potentially contributes to decreased everyday salience and future wealth
From 2018: Differential temporal salience of earning and saving. Kesong Hu, Eve De Rosa & Adam K. Anderson. Nature Communications, volume 9, Article number: 2843 (2018). https://www.nature.com/articles/s41467-018-05201-9/
Abstract: People are often characterized as poor savers. Here we examined whether cues associated with earning and saving have differential salience for attention and action. We first modeled earning and saving after positive and negative variants of monetary reinforcement, i.e., gains versus avoiding loss. Despite their equivalent absolute magnitude in a monetary incentive task, colors predicting saving were judged to appear after those that predicted earning in a temporal-order judgment task. This saving posteriority effect also occurred when savings were framed as earnings that come slightly later. Colors predicting savings, whether they acquired negative or positive value, persisted in their posteriority. An attentional asymmetry away from money-saved relative to money-earned potentially contributes to decreased everyday salience and future wealth.
---
Introduction
In the parable of the ant and the grasshopper, the ant’s assiduous collection of food, saving for the winter, contrasts with the grasshopper’s pursuits of immediate gratification. Cast more as grasshopper than ant, humans are often characterized as poor savers. This reputation may be well earned, in particular in America. According to a 2016 analysis of the Federal Reserve’s 2013 survey of consumer finances, the median American working-age couple has saved only $5000 for retirement, with 43% of working-age families estimated to have no retirement savings at all1. On a long downward trend, the personal savings rate (expressed as a percentage of disposable personal income, DPI) dropped to <3% of DPI at the close of 20172. Contrasting with a near 96% employment rate, we have an ant-like work ethic, yet earnings are rarely converted into savings.
Here we assess, in the context of monetary reinforcement, whether earning and saving reflect an asymmetry in value-derived attention3,4,5, asking whether the attentional scales are tipped in one’s favor. The way we attend has important interactions with value, perception, decision making, and ultimately behavior3. To assess their comparative behavioral and attentional salience, we considered two potential conceptual models for the distinction between earning and saving. First, we modeled earning and saving after positive and negative reinforcement, i.e., gains versus avoiding losses. Second, we considered earning and saving as variants of positive reinforcement in which gains accumulate at the same rate, but differ according to a conceptual framing of manifesting immediately or a short time later.
Inspired by language, one meaning of “to save” is to avoid loss. Saving may represent an aversion to losing one’s earnings. The assessment of gains and losses is central to our most basic physiological needs and drives6. There are evident asymmetries in the weight we place on gains and losses, with potential losses having an incommensurate influence when people evaluate identical outcomes7. In field experiments, monetary incentives framed as losses (“avoid losing A by doing B”) increase factory workers’ productivity relative to incentives framed as gains (“gain A by doing B”)8. Such loss aversion has received empirical support from a variety of studies9,10,11,12, and when directly experienced, losses outweigh gains13. Loss aversion is related to biases such as the endowment effect14 and the status quo bias15, suggesting that individuals should place greater value on savings they have already earned. But quite to the contrary, poor saving behavior16 suggests loss aversion is likely not at work in limited savings. One must be motivated to accrue savings before being concerned about losing them.
Loss aversion and related biases are thought to reflect the asymmetric weighting of punishment and reward17. Losses are punishing, resulting in an exaggerated avoidance response, biasing both decisions and the amount of attention devoted to them17,18. The act of avoiding loss, meanwhile, is the removal of punishment and is thus reinforcing19. Motivation to earn versus save may, more directly, be a comparison of positive and negative variants of reinforcement20, comparing earnings with the avoidance of losing one’s earnings. Positive and negative reinforcement refer to increasing the likelihood of a behavior with the addition or subtraction of an outcome, not their positive or negative utility for an individual. Positive and negative reinforcement have been shown to similarly recruit the reward system, suggesting that both have positive utility21. Nevertheless, they may have asymmetric motivational power12. Psychologically, and in our daily experience, individuals believe they are paid for their performance rather than arranging conditions to avoid moneyless periods of time22. Savings, in this context, should motivate individuals to avoid being without money. Earning and saving should then align with different concerns. Moreover, individuals can experience pleasure or utility differently according to their promotion versus prevention orientation23, through either promoting desired or preventing undesired outcomes.
On the other hand, the meaning of “to save” could be understood in terms of expected utility in the future, hence currently inaccessible. In line with this, efforts to avoid moneyless periods of time highlight the importance of temporal perspectives on one’s earnings. Maintaining an orientation toward saving may result in temporal discounting of today’s earnings in the future24. Discounting of future value is captured by individuals who prefer $5 now to $10 three months from now25,26. Participants often choose smaller but immediate rewards over rewards that are larger but delayed. Such temporal or delay discounting is also considered a marker of impulsive behavior, assessing the degree to which the subjective value of an offering decreases as a function of delay in its delivery27. While the rate of discounting depends on the individual, it is fundamental to the representation of value, observed in human and nonhuman animals28. Temporal distance of saving for the future may also modulate value representations such that they are more abstract29. This may cognitively distance individuals from the reality of the undesirable outcomes of not saving, i.e., extended moneyless periods of time. Saving in these contexts reflects an orientation toward the future, as well as the limits of imagination on behavior30. Accordingly, while earning may reflect the here and now, savings may reflect earnings as a discounted and abstract future.
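To make the discounting arithmetic concrete, here is a minimal sketch using the standard hyperbolic form V = A/(1 + kD); the discount rate k is an illustrative assumption, not a value estimated in the studies cited above.

```python
# Minimal sketch of hyperbolic delay discounting, V = A / (1 + k*D).
# The discount rate k below is an illustrative assumption, not an
# estimate from the studies cited in the text.

def discounted_value(amount: float, delay_days: float, k: float = 0.01) -> float:
    """Subjective present value of `amount` delivered after `delay_days`."""
    return amount / (1.0 + k * delay_days)

# The classic choice from the text: $5 now versus $10 three months out.
now = discounted_value(5.0, 0)       # $5 with no delay keeps its full value
later = discounted_value(10.0, 90)   # $10 delayed ~90 days is discounted

print(f"$5 now feels like ${now:.2f}; $10 in 90 days feels like ${later:.2f}")
# With k = 0.01 the two options are nearly indifferent (~$5.26 for the
# delayed $10); a steeper k, i.e., a more impulsive discounter, makes
# the immediate $5 the subjectively larger reward.
```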
Whether earning and saving reflect varieties of reinforcement or differentially reflect temporal discounting, they involve making a choice between options31. Our nervous system is confronted, at each moment, by choices in terms of where to invest or allocate its resources in the currency of attention3,32. By paying attention, an individual is able to impact the salience33 and value34 of sensory events. While unpleasant events typically evoke relatively stronger changes in affect and attention in both perceptual and decision studies9,12,17,18,35,36, gains also play a similar role3,4,37,38,39. Importantly, value not only alters attention, but attention is also central to value, with attention boosting value34 and inattention reducing it40. Attention can both follow and influence preference41, predicting consumer choice42. Thus, what we choose to attend to is central to value and behavior17,18,43. While multiple studies have characterized value-derived attention3,4,5, much less is known about how different variants of reinforcement and temporal framing regulate attention. Here we examined how earning and saving, according to different models, regulate the paying of attention. If earning and saving represent differential concerns to the individual, then this should be reflected in attentional choice, having an asymmetric regulatory influence on salience and awareness.
Mirroring how value is scaled relative to time44, time is also scaled relative to attention. Attention shapes not only what is perceived but also when45. Attention can warp the judgment of temporal order, with attended events appearing to occur before non-attended events, called “prior entry”46,47. Similarly, individuals attend to more immediate events and outcomes than those in the more distant future29, and this asymmetry in attention may modulate temporal discounting48. We took advantage of how attention can influence judgments of temporal order to examine how individuals perceive events predicting earning and saving. Just as individuals may put off saving due to decreased salience and relative inattention, monetarily reinforced colors associated with savings may be less attentionally salient and appear to come later. As a model for earnings and savings, we first examined the power of positive (gains) and negative (avoiding losses) monetary reinforcement of color patches and the relationship between action and attentional salience (experiments 1a–c). In a further study (experiment 2), through distinct temporal framings of positive reinforcement, we modeled earning and saving after gains that come immediately versus gains to come later (i.e., saving for the future).
Figure 1 illustrates the core tasks and example colored circles used as stimuli. Participants started with value reinforcement trials, where equiluminant colors (red, blue, or yellow) were 100% reinforced, or received no reinforcement, for fast and accurate color discriminations. One color was associated with “earning,” gaining 30 cents, and another with “saving,” avoiding the loss of 30 cents. The task was sufficiently easy to enable reinforcement on the majority of trials, whereby earning would increase one’s balance and saving would preserve those earnings. Participants received their performance-based earnings at the end of the experiment. Color-reinforcement associations were counterbalanced across participants. The temporal-order judgment (TOJ) task required participants to judge which of two side-by-side colored stimuli appeared first, when presented in varying temporal proximity (8–98 ms). TOJ trials were pseudo-randomly intermixed with value reinforcement trials to ensure that any acquired salience for colors was maintained throughout. Similar to indifference points in temporal discounting used to establish value25,26, we estimated each participant’s point of subjective simultaneity (PSS): the time interval between the two stimuli at which they are perceived as arriving simultaneously, i.e., each is judged to come first 50% of the time47,49,50,51.
Fig. 1. Illustration of the display sequences and target stimuli examples. (a) Monetary reinforcement task. Exp. 1a involved a color discrimination (red, blue, and yellow), while Exps. 1b and 1c involved a gap-side (left and right) discrimination. After responding, participants were informed about gain or loss, together with the total cash bonus accrued (in white). (b) Temporal-order judgment task. Following fixation, a colored circle was presented either on the left or the right side of fixation, followed by a second, differently colored circle that appeared on the opposite side after a variable SOA (8, 18, 38, 68, or 98 ms). Participants were required to indicate which color (Exp. 1a) or which side (Exps. 1b and 1c) appeared first.
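For readers curious how a PSS is actually extracted from TOJ responses, the sketch below fits a logistic psychometric function to hypothetical choice proportions and reads off the SOA at which either order is reported equally often. The response data and the choice of scipy's curve_fit are illustrative assumptions; the authors' exact fitting procedure may differ.

```python
# Hedged sketch: estimating a point of subjective simultaneity (PSS)
# from temporal-order judgment (TOJ) data. The response proportions
# below are invented for illustration; the authors' actual fitting
# procedure may differ.
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    """P(judge the saving-associated color first) as a function of SOA.

    Convention: positive SOA = saving color physically shown first.
    The PSS is the SOA at which this probability equals 0.5.
    """
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

# SOAs in ms (negative: earning color first; positive: saving color first)
soas = np.array([-98, -68, -38, -18, -8, 8, 18, 38, 68, 98], dtype=float)
# Hypothetical proportion of "saving color came first" responses
p_first = np.array([0.02, 0.05, 0.15, 0.30, 0.40,
                    0.55, 0.68, 0.85, 0.95, 0.98])

(pss, slope), _ = curve_fit(logistic, soas, p_first, p0=(0.0, 20.0))
print(f"Estimated PSS = {pss:.1f} ms")
# A positive PSS would mean the saving color must lead by that many ms
# to be perceived as simultaneous with the earning color, i.e., the
# saving cue is attentionally posterior.
```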
Despite their equivalent absolute magnitude in a monetary incentive task, we find that saving results in less behavioral salience and decreased payout. In a temporal-order task, saving-associated color cues are also judged to appear after those that predict earning, consistent with the decreased attentional salience of saving. This saving temporal posteriority effect generalizes to when saving is framed as earnings that come slightly later. Across studies, saving-associated cues persisted in their relative inattention whether the cues acquired negative or positive valence. Thus, saving posteriority is not simply explained by acquired affective value. We conclude that decreased attentional salience for money-saved relative to money-earned is a fundamental information-processing bias. That saving has less moment-to-moment attention-attracting potential may contribute to reduced saving behavior. Attentional interventions to enhance the everyday salience of saving may be gainfully employed to improve saving behavior.
Sunday, March 24, 2019
Feeling Alone Among 317 Million Others: Twitter is used to both seek and provide support regarding loneliness; weekend and night-time disclosures are associated with the angriest language
Feeling Alone Among 317 Million Others: Disclosures of Loneliness on Twitter. Jamie Mahoney et al. Computers in Human Behavior, Mar 24, 2019.
https://doi.org/10.1016/j.chb.2019.03.024
Highlights
• Twitter is used to both seek and provide support regarding loneliness.
• Language in these disclosures differs with the day and time of disclosure.
• Weekend and night-time disclosures are associated with the angriest language.
• A range of disclosures suggest that user behaviour may develop over time.
Abstract: Increasing numbers of individuals describe themselves as feeling lonely, regardless of age, gender or geographic location. This article investigates how social media users self-disclose feelings of loneliness, and how they seek and provide support to each other. Motivated by related studies in this area, a dataset of 22,477 Twitter posts sent over a one-week period was analyzed using both qualitative and quantitative methods. Through a thematic analysis, we demonstrate that self-disclosure of perceived loneliness takes a variety of forms, from simple statements of “I’m lonely”, through to detailed self-reflections of the underlying causes of loneliness. The analysis also reveals forms of online support provided to those who are feeling lonely. Further, we conducted a quantitative linguistic content analysis of the dataset which revealed patterns in the data, including that ‘lonely’ tweets were significantly more negative than those in a control sample, with levels of negativity fluctuating throughout the week and posts sent at night being more negative than those sent in the daytime.
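The linguistic content analysis is only summarized in the abstract; as a rough sketch of the kind of comparison involved, the following scores negativity in a "lonely" versus control sample with NLTK's VADER lexicon and runs a Welch t-test. The example tweets and the tooling are illustrative assumptions; the study used its own content-analysis instrument.

```python
# Hedged sketch of comparing negativity between 'lonely' tweets and a
# control sample. The example tweets and the choice of VADER + t-test
# are illustrative assumptions, not the study's actual method.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from scipy.stats import ttest_ind

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

lonely = ["I'm so lonely tonight", "no one ever texts me back",
          "feeling alone again this weekend"]
control = ["off to the gym", "great coffee this morning",
           "watching the match with friends"]

neg_lonely = [sia.polarity_scores(t)["neg"] for t in lonely]
neg_control = [sia.polarity_scores(t)["neg"] for t in control]

t, p = ttest_ind(neg_lonely, neg_control, equal_var=False)
print(f"mean neg (lonely) = {sum(neg_lonely)/len(neg_lonely):.2f}, "
      f"mean neg (control) = {sum(neg_control)/len(neg_control):.2f}, "
      f"Welch t = {t:.2f}, p = {p:.3f}")
```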
From 2018: How Persistent Low Expected Returns Alter Optimal Life Cycle Saving, Investment, and Retirement Behavior
From 2018: How Persistent Low Expected Returns Alter Optimal Life Cycle Saving, Investment, and Retirement Behavior. Vanya Horneff, Raimond Maurer, Olivia S. Mitchell. NBER Working Paper No. 24311, August 2018. https://www.nber.org/papers/w24311
Abstract: This paper explores how an environment of persistent low returns influences saving, investing, and retirement behaviors, as compared to what in the past had been thought of as more “normal” financial conditions. Our calibrated lifecycle dynamic model with realistic tax, minimum distribution, and Social Security benefit rules produces results that agree with observed saving, work, and claiming age behavior of U.S. households. In particular, our model generates a large peak at the earliest claiming age at 62, as in the data. Also in line with the evidence, our baseline results show a smaller second peak at the (system-defined) Full Retirement Age of 66. In the context of a zero return environment, we show that workers will optimally devote more of their savings to non-retirement accounts and less to 401(k) accounts, since the relative appeal of investing in taxable versus tax-qualified retirement accounts is higher in a low return setting. Finally, we show that people claim Social Security benefits later in a low interest rate environment.
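The claim that low returns raise the relative appeal of taxable accounts can be illustrated with back-of-the-envelope arithmetic: the benefit of tax deferral scales with the returns being sheltered. The tax rate and horizon below are invented for illustration and are not the paper's calibration.

```python
# Back-of-the-envelope sketch: the tax-deferral advantage of a 401(k)
# over a taxable account shrinks when returns are low. The tax rate
# and horizon are illustrative assumptions, not the paper's calibration.

def after_tax_deferred(pretax: float, r: float, years: int, tax: float) -> float:
    """401(k)-style: pre-tax dollars compound untaxed, taxed at withdrawal."""
    return pretax * (1 + r) ** years * (1 - tax)

def after_tax_taxable(pretax: float, r: float, years: int, tax: float) -> float:
    """Taxable: income taxed up front, then returns taxed each year."""
    return pretax * (1 - tax) * (1 + r * (1 - tax)) ** years

for r in (0.05, 0.00):  # "normal" vs zero-return environment
    d = after_tax_deferred(1000, r, 30, tax=0.25)
    t = after_tax_taxable(1000, r, 30, tax=0.25)
    print(f"r = {r:.0%}: deferred = ${d:,.0f}, taxable = ${t:,.0f}, "
          f"deferral advantage = {d / t - 1:.1%}")
# At r = 5% deferral wins by roughly 40%; at r = 0% the two are
# identical, so liquidity and penalty-free access tilt the optimal
# allocation toward taxable, non-retirement accounts.
```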
Research suggests more people find suicide a reasonable response to dire challenges... No recommendations about what to do or how.
The dangerous shifting cultural narratives around suicide. Julie Phillips. Washington Post, Mar 21 2019. https://www.washingtonpost.com/outlook/the-dangerous-shifting-cultural-narratives-around-suicide/2019/03/21/7277946e-4bf5-11e9-93d0-64dbcf38ba41_story.html?utm_term=.c7c37d383bb5
Research suggests more people find it a reasonable response to dire challenges.
Excerpts:
[...]
The data suggests that white and middle-aged Americans are the demographic groups most at risk for suicide. So in one sense, Krueger’s tragedy fits the prevailing pattern — as did the deaths last year of celebrity chef Anthony Bourdain and fashion designer Kate Spade. In general, suicide rates among whites are about three times higher than among blacks and Latinos.
Between 1999 and 2017, U.S. suicide rates increased by 45 percent for men ages 45 to 64 and by 62 percent for women in that age group. As a result, that cohort surged ahead of the 65-plus age group in absolute terms, with a suicide rate of 19.6 per 100,000 — producing, in effect, a new epidemiology of suicide.
There are many reasons for this rise, all of them important. But one underdiscussed explanation is the subtle loosening of taboos around suicide. Surveys suggest that Americans in recent years are more likely to view it as an acceptable reaction not just to terminal illness but also to life setbacks that are emotionally brutal but survivable. (That doesn’t mean these attitudes played a part in any specific recent suicide.)
Other trends have been more broadly reported. The media attention given to the deaths of high-profile people should not distract from the fact that the rising rates of suicide occur disproportionately among working-class and less-educated Americans. In 2014, the most recent year such breakdowns of data are available, men with only a high school diploma were twice as likely to die by suicide as men with a college degree. And although middle-aged men of all educational groups experienced rising suicide rates during the Great Recession, 2007 to 2010, since then rates for college-educated men have slightly declined while those for men with only a high school diploma have continued to rise.
[...]
Yet a possible cultural component of the suicide epidemic demands close attention, too. Research suggests that Americans are becoming more tolerant and accepting of the practice. In some contexts, many Americans might find this tolerance benign — as with suicide, or even assisted suicide, in the end stages of a fatal disease (although that practice has strong critics, too).
But other attitudinal shifts may be more plainly troubling. One way to track these views is through the General Social Survey (GSS), which, since its inception in 1972, has asked a nationally representative sample of Americans about their attitudes toward suicide. In an analysis of changes in attitudes from the 1982-86 period to the 2010-16 period, Yi Tong, now a medical student at SUNY Downstate College of Medicine, and I found that the share of Americans age 18 or older who said people have the right to end their lives in the case of an incurable disease rose from 46.9 percent to 61.4 percent. The percentage who said that being “tired of living and ready to die” was a reasonable rationale for suicide jumped from 13.7 percent to 19.1 percent. And roughly 11 percent of Americans in the later period said that suicide was acceptable during a financial bankruptcy or if one had “dishonored” one’s family — up from about 7 percent, in both cases.
We know that attitudes toward suicide affect behavior. Elizabeth Luth, of Cornell’s medical school, and I examined some 30 years of data on GSS respondents (about 35 percent of whom had died since their interviews) and found that expressions of suicide acceptability were associated with subsequent death by suicide. Beliefs that suicide is acceptable under certain social circumstances (family dishonor and bankruptcy) had the strongest effects on mortality.
Reducing stigma around suicide can be positive — if it means that distressed people become more likely to seek help, for instance. But when and if the belief spreads that the act is acceptable under some conditions, even terminal illness, that may have ramifications we don’t fully understand. We need more research to figure out what messages destigmatize suicide in the good sense, opening the doors to life-saving conversations, and which ones normalize it as a response to crisis, with deadly consequences.
[...]
From 2018... Nice guys finish last: When and why agreeableness is associated with economic hardship
Matz, Sandra C., & Gladstone, Joe J. (2018). Nice guys finish last: When and why agreeableness is associated with economic hardship. Journal of Personality and Social Psychology, Oct 11, 2018. http://dx.doi.org/10.1037/pspp0000220
Abstract: Recent research suggests that agreeable individuals experience greater financial hardship than their less agreeable peers. We explore the psychological mechanisms underlying this relationship and provide evidence that it is driven by agreeable individuals considering money to be less important, but not (as previously suggested) by agreeable individuals pursuing more cooperative negotiating styles. Taking an interactionist perspective, we further hypothesize that placing little importance on money—a risk factor for money mismanagement—is more detrimental to the financial health of those agreeable individuals who lack the economic means to compensate for their predisposition. Supporting this proposition, we show that agreeableness is more strongly (and sometimes exclusively) related to financial hardship among low-income individuals. We present evidence from diverse data sources, including 2 online panels (n1 = 636, n2 = 3,155), a nationally representative survey (n3 = 4,170), objective bank account data (n4 = 549), a longitudinal cohort study (n5 = 2,429), and geographically aggregated insolvency and personality measures (n6 = 332,951, n7 = 2,468,897).
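Statistically, the paper's interactionist claim corresponds to an agreeableness-by-income interaction predicting financial hardship. A hedged sketch with simulated data follows; all variable names and effect sizes are invented, not taken from the paper.

```python
# Hedged sketch of testing an agreeableness-by-income interaction on
# financial hardship, the moderation pattern the paper reports.
# All data here are simulated; names and effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
agree = rng.normal(0, 1, n)
income = rng.normal(0, 1, n)
# Simulate the claimed pattern: agreeableness hurts mainly at low income.
hardship = 0.3 * agree - 0.4 * income - 0.25 * agree * income \
           + rng.normal(0, 1, n)

df = pd.DataFrame({"hardship": hardship, "agree": agree, "income": income})
model = smf.ols("hardship ~ agree * income", data=df).fit()
print(model.summary().tables[1])
# A negative agree:income coefficient means the agreeableness-hardship
# slope weakens as income rises, matching the moderation account.
```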
Saturday, March 23, 2019
Why do smarter individuals self-report being more prosocial and more moral? A study on the mediating roles of empathy and moral identity
Why are smarter individuals more prosocial? A study on the mediating roles of empathy and moral identity. Qingke Guo et al. Intelligence, Volume 75, July–August 2019, Pages 1-8. https://doi.org/10.1016/j.intell.2019.02.006
Highlights
• Intelligence is associated with self-reported prosocial behavior in daily life.
• Higher intelligence contributes to emotional sensitivity and a greater concern for others.
• Highly intelligent individuals are more likely to self-identify as moral persons.
• The intelligence-prosociality association is mediated by perspective taking, empathic concern and moral identity.
Abstract: The purpose of this study is to examine whether there is an association between intelligence and prosocial behavior (PSB), and whether this association is mediated by empathy and moral identity. Chinese versions of Raven's Standard Progressive Matrices, the Self-Report Altruism Scale Distinguished by the Recipient, the Interpersonal Reactivity Index, and the Internalization subscale of the Self-Importance of Moral Identity Scale were administered to 518 undergraduate students (254 female; mean age = 19.79). The results showed that fluid intelligence was significantly correlated with self-reported PSB; moral identity, perspective taking, and empathic concern could account for the positive association between intelligence and PSB; and the mediation effects of moral identity and empathy were consistent across gender.
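As a hedged illustration of what "mediated by" means here, the sketch below runs a simple product-of-coefficients mediation test on simulated data; the study's own analysis (measures, estimator, bootstrapping) may well differ.

```python
# Hedged sketch of a product-of-coefficients mediation test
# (intelligence -> empathy -> prosocial behavior). Data are simulated;
# the study used its own measures and likely an SEM/bootstrap approach.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 518  # matches the study's sample size
iq = rng.normal(0, 1, n)
empathy = 0.3 * iq + rng.normal(0, 1, n)                     # a path
prosocial = 0.4 * empathy + 0.1 * iq + rng.normal(0, 1, n)   # b, c' paths

a = sm.OLS(empathy, sm.add_constant(iq)).fit().params[1]
bc = sm.OLS(prosocial, sm.add_constant(np.column_stack([empathy, iq]))).fit()
b, c_prime = bc.params[1], bc.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
# In practice the indirect effect's significance is assessed with a
# bootstrap confidence interval rather than the point estimate alone.
```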
How to define and measure the extent to which human cognition is rational
Cognitive Success: A Consequentialist Account of Rationality in Cognition. Gerhard Schurz, Ralph Hertwig. Topics in Cognitive Science, January 2019. https://doi.org/10.1111/tops.12410
Abstract: One of the most discussed issues in psychology—presently and in the past—is how to define and measure the extent to which human cognition is rational. The rationality of human cognition is often evaluated in terms of normative standards based on a priori intuitions. Yet this approach has been challenged by two recent developments in psychology that we review in this article: ecological rationality and descriptivism. Going beyond these contributions, we consider it a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. We take a first step in this direction by proposing that the rationality of both cognitive and normative systems can be measured in terms of their cognitive success. Cognitive success can be defined and gauged in terms of two factors: ecological validity (the system's validity in conditions in which it is applicable) and the system's applicability (the scope of conditions under which it can be applied). As we show, prominent systems of reasoning—deductive reasoning, Bayesian reasoning, uncertain conditionals, and prediction and choice—perform rather differently on these two factors. Furthermore, we demonstrate that conceptualizing rationality according to its cognitive success offers a new perspective on the time‐honored relationship between the descriptive (“is”) and the normative (“ought”) in psychology and philosophy.
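The two-factor definition in the abstract suggests a simple multiplicative formalization. The following is one natural gloss of that reading, an assumption on our part rather than necessarily the authors' exact notation:

```latex
% One gloss of the two-factor definition (our reading, not
% necessarily the authors' notation): for a cognitive system S
% in a task environment E,
\[
\mathrm{success}(S,E) \;=\;
\underbrace{v(S,E)}_{\text{ecological validity}} \times
\underbrace{a(S,E)}_{\text{applicability}}
\]
% where v(S,E) is S's validity in the conditions where it applies and
% a(S,E) is the scope of conditions in which S can be applied at all.
```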
1 How psychologists measure rational cognition
For a number of decades, psychologists have typically employed only one experimental method to study whether human cognition is rational (Lopes, 1991). Their approach—devising two or more alternative hypotheses and a crucial experiment with alternative possible outcomes, each excluding one or more of the hypotheses—has been interpreted as enabling a strong inference (Platt, 1964). In research on the rationality of human cognition this means that the experimental set‐up has been designed such that the data, people's cognitive behavior (reasoning, inference, judgment, or choice), support one of two possible results: Either individuals behave in accord with the chosen benchmark of rationality, or their cognitive behavior, measured against the benchmark, is irrational (and sometimes either deviation from the benchmark has been treated as a sign of irrationality as, for instance, in the case of the conjectures that people neglect base rates or pay too much attention to them or that people suffer from both the gambler's fallacy and the hot‐hand fallacy; see Hertwig & Todd, 2000). Crucially, the benchmarks against which these evaluations are made—and human cognition is found to be rational or not—are commonly assumed to be incontrovertible. That is, the benchmarks are understood to be relatively universal, purpose invariant, content free, and domain general. Their claim to legitimacy often rests on “a priori intuitions”—a notion to which we return later. One of these seemingly incontrovertible benchmarks is the canon of classical logic. Take, for illustration, Wason's influential work on human reasoning (e.g., Wason, 1959, 1960). Far from mincing their words, Wason and Johnson–Laird argued that
a fallacious inference, in fact, is in some ways like both an optical illusion and a pathological delusion. … and like most pathological delusions, we have encountered cases in which the subjects seem to reveal a stubborn resistance to enlightenment. (Wason & Johnson‐Laird, 1972, p. 6)
This dooming verdict is especially notable because long before Wason, other psychologists concerned with the investigation of reasoning processes strongly opposed the use of logic to define rational thought. An example is Wilhelm Wundt, who equally unequivocally argued that
at first it was thought that the surest way would be to take as a foundation for the psychological analysis of the thought‐processes the laws of logical thinking, as they had been laid down from the time of Aristotle… . These norms … only apply to a small part of the thought‐processes. Any attempt to explain, out of these norms, thought … can only lead to an entanglement of the real facts in a net of logical reflections. (1912/1973, pp. 148–149)
Wundt doubted that classical logic could serve as the bedrock for descriptive theories of reasoning beyond a “small part” of cognition. By extension, he rejected logic's normative claim for the bulk of cognition. But even the greats could not agree. Jean Piaget, for instance, brushed Wundt's view aside. Inhelder and Piaget (1958) proposed that the mental structures required to process experience develop in a stage‐like progression from infancy to adolescence. Once children reach the highest stage, they possess “Euclid's understanding of geometry, Newton's … understanding of space, time, and causality, and Kant's understanding of logic” (Flanagan, 1991, p. 145). For these developmental psychologists, logic was a key descriptive and normative foundation of the mind's highest stage of reasoning. Moreover, cognitive psychologists and scientists, from Bruner (Bruner, Goodnow, & Austin, 1956) to Fodor (1975) to Evans (1982), took theory testing based on deductive logic, thus following Popper (1959/2005), as the key to human learning. When Wason (1969) examined adults’ reasoning and found discrepancies from the rules of logical deduction, he and other contemporary cognitive psychologists did not challenge the normativity of logic but inferred that something in his carefully constructed selection task “predisposes people to regress temporarily to less sophisticated modes of cognitive functioning” (p. 478).
Yet Wundt's (1912/1973) rejection of logic as the foundation of cognition experienced a renaissance in psychology, although on the basis of very different arguments. Specifically, the normativity of logic came under attack in two ways in the late 1980s and 1990s. According to Cosmides (1989), natural selection did not produce general‐purpose cognitive algorithms but rather cognitive algorithms that succeed in solving recurrent adaptive problems, such as the threat of being cheated in a social exchange. From this perspective, reasoning obeys a Darwinian and not a formal deductive logic. The second challenge arose in terms of a probabilistic approach being taken to purportedly logical reasoning tasks (Oaksford & Chater, 1994). According to this view, the conclusion that humans reason irrationally results from comparing “apparently irrational behavior … with an inappropriate logical standard” (Oaksford & Chater, 2001, p. 349). Specifically, people's reasoning in the Wason selection task is better understood in terms of a process of inductive hypothesis testing (and a Bayesian model of optimal data selection) than in terms of an “outmoded falsificationist philosophy of science” (Oaksford & Chater, 1994, p. 608). Consequently, probability theory rather than logic should be the normative benchmark. It is, of course, not without irony here that human reasoning has also been famously observed to deviate from the norms of probability theory (Barbey & Sloman, 2007; Kahneman, 2011; Kahneman & Tversky, 1972). Yet, as in research on reasoning, both the evidence for people's proneness to errors in statistical reasoning (Peterson & Beach, 1967) and the appropriateness of the invoked probabilistic norms for human rationality (Gigerenzer, 1996) have been hotly debated among psychologists.
There has thus been a history of opposing views on whether classical logic should serve as a universal benchmark for human rationality. Similar arguments have been raised with regard to probability, coherence, and other benchmarks of rationality (see Arkes, Gigerenzer, & Hertwig, 2016; Hertwig & Volz, 2013). We believe that now is a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. This article represents a first step in this direction, with one author being a philosopher and one a psychologist. We begin by briefly outlining two recent developments in psychology—ecological rationality and descriptivism—that contribute to the ongoing debate about appropriate frameworks of rational cognition.
2 Ecological rationality and descriptivism
One development is the concept of “ecological rationality” (Arkes et al., 2016; Gigerenzer, Todd, & the ABC Research Group, 1999; Hertwig, Hoffrage, & the ABC Research Group, 2013; Hertwig, Pleskac, Pachur, & the Center for Adaptive Rationality, in press; A. Kozyreva & R. Hertwig, unpublished data; Todd, Gigerenzer, & the ABC Research Group, 2012). This view endorses the premise that rationality is evaluated against some benchmark but argues that, contrary to a frequent assumption in psychology, there are no universal benchmarks. What are treated as universal benchmarks—for instance, consistency, coherence‐based rules, modus ponens, or Bayes's rule in probability theory—do not suffice to evaluate behavior as rational. Instead, rationality should be measured in terms of the organism's success—accurate predictions or competitive decisions—in the world. Ecological rationality thus aims to shift the researcher's methodological strategy from the a priori imposition of content‐free norms to studying the organism's goals and achievements within the context of specific environmental structures as well as the mind's undeniable cognitive constraints. Researchers would thus ask: Under what environmental structure is a given cognitive strategy (e.g., heuristic, rule, routine) for the task at hand more accurate than competing strategies that need more information and computation, and under what structure is it not? A strategy is ecologically rational to the degree that it is adapted, in the context of the task, to the informational and statistical structure of an environment. This also means that no strategy is good or bad, rational or irrational per se; rather, it is or is not adapted to the specific task and environment. In addition, it means that a strategy is commonly tested against other strategies that may or may not be better adapted to the specific task and environment (e.g., Gigerenzer & Brighton, 2009; Spiliopoulos & Hertwig, in press).
Although Elqayam and Evans (2011) classified the concept of ecological rationality among nonnormativist positions, they criticized it for being in danger of committing the dubious inference from “is” to “ought” (see also Elqayam & Over, 2016, p. 46). The position they advocate, descriptivism (Elqayam & Evans, 2011; Elqayam & Over, 2016), is meant to escape the “is–ought” inference trap. The escape is realized by completely eschewing normative concerns. Elqayam and Over proposed that
the psychology of reasoning and decision making would be better off letting go of normative concerns altogether. Instead of measuring rationality by normative standards, the descriptivist position is that rationality should be measured by the achievement of personal goals. (Elqayam & Over, 2016, p. 7, emphasis added)
To this end, Evans and Over (1996) proposed a distinction between rationality₁, measured in terms of achieving one's goals, and rationality₂, measured against a priori normative standards, such as classical logic or probability theory. Rationality₁ is postulated to be personal and contextual, resulting in instrumental rationality, meaning that an individual behaves in such a way as to achieve his or her personal goals (see also Elqayam, 2012).
In our view, the opposition between descriptivism (rationality₁) and normativism (rationality₂) that Elqayam and Evans (2011) invoked is misleading because the character of “instrumental rationality” is ambiguous (for other critical objections see Hertwig, Ortmann, & Gigerenzer, 1997). Ordinarily, the assertion that an action is instrumentally rational means that it is rational because it is the appropriate means for some end that, in turn, is assumed to be of value. Thus understood, instrumental rationality does involve a normative dimension insofar as it shifts the normative weight from the end to the means (i.e., to the action; Schurz, 1997, sect. 6.1). There is a second, purely descriptive understanding of instrumental rationality according to which the proposition that an action is instrumentally rational for a given end simply means that the action is appropriate for reaching this end, even if this end is bizarre from a commonsense or intuitive viewpoint. For example, it would sound odd to describe “heavy smoking” as instrumentally rational in regard to the goal of increasing the chances of developing lung cancer, or frequent casual sex as instrumentally rational in regard to the goal of contracting a sexually transmitted disease. Yet such descriptive statements would be perfectly fine in this second understanding.
Notwithstanding this criticism, the notions of ecological and instrumental rationality and descriptivism have one thing in common: They object to the reduction of rationality to allegedly universal normative systems, which are, in turn, founded on a priori intuitions that are inaccessible to further justification in terms of their cognitive functionality or success in the world. Next, we turn to the difficulties such intuitions face. To this end, let us dip our toes in some philosophical waters.
3 The problems of justifying rational cognition from a priori norms or intuitions
To appreciate the full force of the foundational issues in question, it helps to briefly consider normative ethics and more specifically the classical distinction between deontological and consequentialist justifications of ethical norms (see Broad, 1967; Frankena, 1963). In deontological frameworks, the justification of norms is rooted in general a priori intuitions about values and duty principles that are assumed to be “good in themselves” (e.g., Kant, 1785/2012). These principles are obligatory, irrespective of the consequences that might follow from our actions. In consequentialist frameworks, in contrast, the correctness of our moral conduct is determined solely by a cost–benefit analysis of the action's consequences. One example of such a framework is (act) utilitarianism, according to which an action is morally justified if the action's total good consequences outweigh its total bad consequences (e.g., Mill, 1863/1998). Let us employ the distinction between deontological and consequentialist justification in the context of rational cognition (see Goldman, 1986, p. 97). As in deontological theories of ethics, in apriorist accounts of rational cognition (note that the term deontological is reserved for the domain of ethics), norms are justified by reference to a priori intuitions. Such foundational intuitions could be, for instance, necessity, consistency, or coherence. Generally speaking, a norm or an intuition concerning the rationality of a given cognitive strategy can be described as a priori if either it is considered evident without further justification, or its justification is based on other intuitions that are independent of the consequences of this strategy in a given environment. In contrast, consequentialist accounts of rational cognition justify their benchmarks in terms of what one could call “cognitive success.” This means that these benchmarks acquire “normative legitimacy” through the success of their consequences and not through agreement with some norm such as coherence that is imposed a priori (e.g., transitivity, property alpha, procedural invariance; see table 1 in Arkes et al., 2016).
3.1 Equilibrium justifications and the problem of circularity
In our view, it is highly problematic to enlist a priori intuitions as the foundation for justification of rational norms. Let us explain our concern. After five centuries of failed attempts in the history of rationalist philosophy to justify principles a priori, including Kant's (1781/1998), there is wide consensus in contemporary epistemology: It is impossible to justify cognitive principles from nothing, which was Kant's understanding of “a priori.” Thus, contemporary philosophers in the rationalist tradition have put the coherence of intuitions at the basis of rationalist justifications that are considered a priori in the sense explained.1
The method of justifying a priori intuitions by the coherence with other intuitions has been called, perhaps somewhat euphemistically and sidestepping the term intuition, the “method of reflective equilibrium” (Cohen, 1981; Goodman, 1955/1983; Rawls, 1971):
The key idea underlying this view of justification is that we “test” various parts of our system of beliefs against the other beliefs we hold, looking for ways in which some of these beliefs support others, seeking coherence among the widest set of beliefs, and revising and refining them at all levels when challenges to some arise from others. For example, a moral principle or moral judgment about a particular case (or, alternatively, a rule of inductive or deductive inference or a particular inference) would be justified if it cohered with the rest of our beliefs about right action (or correct inferences) on due reflection and after appropriate revisions throughout our system of beliefs. By extension of this account, a person who holds a principle or judgment in reflective equilibrium with other relevant beliefs can be said to be justified in believing that principle or judgment. (Daniels, 2018, sect. 1)
From a consequentialist viewpoint, however, there is a vigorous objection to such “equilibrium justifications.” They are circular. In reply to this objection, several philosophers have argued that even circular justifications may have epistemic value (e.g., Goldman, 1999, p. 85; Psillos, 1999, p. 82). However, there are strong counterarguments showing that such hopes are in vain. Before turning to one, let us clarify that we do not deny that certain justification structures that have been called circular in the literature can have epistemic value (see Hahn, 2011); yet these are of a different sort from the circles involved in equilibrium justifications.2
3.2 Circular justifications and the problem of contradictory intuitions
One key counterargument to the view that circular justifications have epistemic value demonstrates that contradictory rules can be pseudojustified by the same circular argument structure. For example, the circular inductive justification of induction goes as follows: Inductions were successful in the past, whence, by induction, they will be successful in the future. If one accepts this justification, then—to avoid inconsistency—one must equally accept a counterinductive justification of counterinduction3 that runs as follows: Counterinductions were not successful in the past, whence by counterinduction they will be successful in the future (see Douven, 2011, sect. 3; Salmon, 1957; Schurz, 2018). Circular justifications can even be given for fundamentalist doctrines, such as the “rule of blind trust in God's voice,” which a person may hold in reflective equilibrium with the intuition that “God's voice in me tells me that I should blindly trust his voice.”
The fact that equilibrium justifications can easily support contradictory intuitions demonstrates that circular justifications are highly problematic. Because of their circularity, these contradictory intuitions cannot be meaningfully correlated with the world but are rather inescapably subjective in nature. A striking example of an intuition‐based account of rationality in psychology and cognitive science is Cohen's (1981) article “Can Human Irrationality Be Experimentally Demonstrated?” According to Google Scholar, this article has been cited a total of 1,414 times (June 23, 2018). The philosopher Cohen vehemently argued against the bleak implications for human rationality implied especially by the research in psychology on probabilistic reasoning (Kahneman & Tversky's heuristics‐and‐biases program; Kahneman, 2011) and deductive reasoning. For Cohen, rules of logical and probabilistic reasoning such as modus ponens, modus tollens, and Bayes's theorem are based on intuitions about correct reasoning. He put it as follows: “The presence of fallacies in reasoning is evaluated by referring to normative criteria which ultimately derive their own credentials from a systematization of the intuitions that agree with them” (1981, p. 317). From this it follows, Cohen argued, that if people's reasoning deviates from such rules, then this merely means that they have different intuitions about correct reasoning than logicians or probability theorists do, and therefore experimenters “risk imputing fallacies where none exist” (1981, p. 330).
The subjective nature of intuition‐based justifications raises the problem of how to arbitrate between competing normative systems. Some have diagnosed this arbitration problem as essentially unsolvable because cognitive norms, goes the argument, will necessarily be based on intuitions, without external standards of cognitive success (Elqayam & Evans, 2011). Consequently, an intuition‐based justification of rationality is doomed to result in a strong form of cognitive relativism (“anything goes”)—a position whose consequences the philosopher Stich (1990) worked out.
3.3 Why a consequentialist account of rational cognition is indispensable
What follows from this discussion? First, we do not deny that intuitions are needed in some areas, for example, in ethics, where one inevitably must define what counts as intrinsically valuable. However, the appropriateness of cognitive systems should be evaluated not by intuitions but, so we argue, by demonstrations that these systems have successful consequences in the real world. Cognitive success is thus a concept that brings a consequentialist perspective to the justification of norms for rational cognition. Recall that consequentialism (as used in ethics) means that the moral rightness of an act depends only on the consequences of that act. By analogy, an act of a cognitive system is rational insofar as its consequences bear success. For instance, the validity of the rule of modus ponens is established not “by intuition” but by the semantic proof of its strictly truth‐preserving nature: If “p” and “p implies q” are true, then “q” will be true as well, no matter the environment you are in. This justification of modus ponens is consequentialist in nature. Cohen (1981, p. 319) objected that the “if–then” of classical logic deviates from the if–then of natural language. Therefore, according to Cohen (1981, p. 319), intuitions need to be invoked to determine the “right” meaning of the conditional. To this argument, the consequentialist reply is that to assume that there is an “objectively right” meaning is a “rationalistic illusion”—there are only more or less cognitively successful meanings, and these can change across contexts (see also Hertwig, Benz, & Krauss, 2008; Hertwig & Gigerenzer, 1999). It is well known that the if–then of natural language has a number of different semantic interpretations (cf. Bennett, 2003). The question of which is cognitively most appropriate should be answered not by reference to intuition but by replacing the ambiguous if–then of natural language with semantically well‐defined conditionals (e.g., strict, uncertain, indicative, counterfactual) and investigating their cognitive properties. In later sections we investigate the cognitive success of different systems of strict and uncertain conditional reasoning, with surprising results. Investigations of this sort are impossible as long as these systems are merely evaluated and justified on the basis of intuition, particularly since a growing dissent among intuitions has emerged in the area of conditional reasoning (see Pelletier & Elio, 1997).
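To make the consequentialist reading of this example concrete, the truth‐preservation claim for modus ponens can be checked mechanically. The following sketch is our own illustration, not code from the cited literature: it enumerates every truth assignment and confirms that no case makes both premises true and the conclusion false.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional of classical logic: "a implies b" == "not-a or b".
    return (not a) or b

# Search for a counterexample to modus ponens: an assignment in which
# "p" and "p implies q" are both true while "q" is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q) and not q
]
print(counterexamples)  # [] -> the rule never leads from true premises to a false conclusion
```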
In sum, taking intuitions as sacrosanct would hinder empirical research and rational criticism. We suggest that the better justification of norms of rational cognition is consequentialist in nature. Within a consequentialist account, the severe problems of arbitration, cognitive relativism, and the indeterminate correspondence of intuitions and the world are either removed or less grave—at least so we claim. The reason lies in the promise that all normative systems of reasoning can be measured on a commensurable metric that we call cognitive success. What is it and how can it be measured?
4 What is cognitive success?
Next, we propose a consequentialist account of rational cognition. Our account is in line with Quine's naturalized epistemology (1960) but goes beyond it in its explication and applications of the notion of cognitive success, as well as in its new understanding of the interplay between its descriptive and normative components. What distinguishes the present proposal from all naive sorts of pragmatism is that cognitive systems are evaluated in terms of cognitive rather than practical success indices (such as moneymaking). What is measured by cognitive success is the “cognitive part” of rationality. Cognitive rationality is a precondition for practical rationality, but unlike practical rationality, it abstracts from the question of what ends are normatively right or intrinsically good. In contrast, practical rationality, in the philosophical understanding of this concept, attempts to answer this question. For example, knowing the optimal temperature for roasting meat is “cognitively rational,” but the assessment of the practical rationality of roasting meat depends on one's ethical attitude toward a vegetarian versus nonvegetarian diet.
A consequentialist approach to the definition of rational cognition faces two main challenges. First, how can the value of cognitive success be justified without again presupposing normative intuitions, thus inheriting all the problems outlined above? According to philosophical arguments harking back to Hume (1739/40) and Moore (1903), it is impossible to derive norms solely from the “is,” that is, from empirical facts (Schurz, 1997). Consequently, every instrumental justification of particular norms must assume, besides factual information, more general norms. For example, inferring that calisthenics is good from the fact that it improves fitness assumes that fitness is a general norm. Does this argument then not thwart any attempt to ground the notion of cognitive success in anything but, again, normative intuitions?
Although this objection—every instrumental justification of particular norms must assume more general norms—is logically correct and has useful applications in ethics (Schurz, 2014), we argue that it does not apply to psychology and cognitive science for the following reason: Cognitive success is instrumental for all—or at least most4—purposes. Every real‐world decision problem involves, as a part of it, a ubiquitous cognitive task, namely, predicting which of the available actions will have the maximum expected payoff, in light of a given reward function.5 Greater success in this cognitive task will, by and large, lead to greater success in one's actions, independently of the goals pursued (Schurz, 2014). Is the premise that cognitive success is instrumental for almost all purposes really sufficient for the normative justification of cognitive success? Logically speaking, no, because this premise is descriptive and (as explained above) no “ought” can follow from an “is” by rules of logic alone. However, the missing normative premise that fills the logical gap is relatively harmless: We assume that it is by‐and‐large good to help people attain their personal goals. This is indeed a fundamental and widely shared intuition, though not a cognitive but a moral one.
Moreover, the insight that cognitive success is instrumental for almost all practical purposes helps solve the problem of the apparent relativity of instrumental rationality to one's assumed purposes, which for many authors constitutes a fundamental obstacle to the objectivity of this notion (e.g., Stich, 1990, p. 131). We suggest that the purpose‐invariant core of all forms of instrumental rationality is precisely their cognitive rationality (Kornblith, 2002, p. 158). Thus, there are no separate forms of instrumental rationality for cooks, clerks, and pilots, or for right‐wing and left‐wing politicians. What is common to all these applications of instrumental rationality is their cognitive success. This means that cognitive success is not to be mistaken for moral rightness.
This brings us to the second challenge to a consequentialist approach to defining rational cognition: the meaning of cognitive success. The details will depend on the cognitive task at hand. Yet there must be a core meaning of “cognitive success” that is common to all competing systems of rational reasoning; otherwise, it would be impossible to compare them using the same currency. Above, we argued that every real‐world decision problem involves—or can be reformulated in terms of—some kind of prediction problem. On the basis of this premise, we suggest the following definition:
The core meaning of the cognitive success of a system (including algorithms, heuristics, rules) is defined in terms of successful predictions, assuming a comprehensive meaning of prediction that includes, besides the predictions of events or effects, predictions of possible causes (explanatory abductions) and in particular predictions of the utilities of actions (decision problems).
Characterizing a decision problem in terms of a prediction task might seem narrow. Yet much of what people do is predicated on implicit or explicit forecasts about how the future will unfold. Choosing a job, getting married, having children and investing in their education, purchasing an apartment, voting for a party, saving for old age, choosing a medical treatment—all these decisions and many others are reached on the basis of predictions about what the future holds. Moreover, focusing on predictions by no means implies that important cognitive processes are ignored. Since reliable predictions are based on an inductive inference from sufficiently informed premises, they engage various nonpredictive subprocesses such as search, memory retrieval, and language processing. Importantly, the major purpose of the predictive reformulation of decision tasks is to measure their cognitive success on a commensurable scale. For example, consider the decision problem of buying the “best” car (relative to the buyer's preferences), where the buyer encounters two websites offering competing decision methods, M1 and M2, to potential car buyers. Then the claim that method M1 is more appropriate for a certain group of car buyers (e.g., males between the ages of 20 and 30) amounts to the testable prediction that the degree of future satisfaction of car buyers in this group who used method M1 is significantly higher than that of those who used method M2.
Upon closer inspection, the predictive success of a cognitive system or (more generally) a cognitive method depends on two components that are commonly in competition and whose optimization thus involves a trade‐off. In the psychological literature, this trade‐off is reflected in the distinction between the ecological validity of a prediction method (Brunswik, 1952; Gigerenzer et al., 1999) and its applicability.6
More precisely, a method's cognitive success can be factorized into the product of these two components as follows:
cognitive success = ecological validity × applicability,
where applicability is the percentage of targets for which the method renders a prediction, among all intended targets of prediction, and ecological validity is the sum of scores divided by the number of all predictions rendered, and
score (per prediction) = max - loss,
where max is the maximal score that a perfectly accurate prediction can obtain and loss is a monotonically increasing function of the distance between the predicted and the actually observed value of the event variable. From this it follows that
cognitive success = sum of scores divided by the number of all intended targets of prediction.
Ecological validity and applicability of a cognitive method are in competition. One can increase the ecological validity of a method by having it apply only to those few target domains for which the method's predictions are known to be accurate because, for instance, the method has been fitted to this domain. Likewise, one can increase the applicability of a method by applying it also to target domains for which its error rate remains unknown or is even known to be high, or by permitting the method to make a random guess in cases where the algorithm does not reach a decision (i.e., in this sense is not applicable). Also note that the definition of applicability refers to all intended target domains, not to all possible target domains. Thus, a method's cognitive success cannot be deemed low because the method does not apply to domains that were never intended to be part of the class of target domains. Consider, for illustration, the analogy of a hammer—its “success” is not diminished by the fact that the hammer is not suitable for drilling holes. We also emphasize that a method's class of intended target domains is not an invitation to propose arbitrary reference classes but rather is empirically inferred in terms of the method's purposes across all users. Thus, a method's cognitive success cannot be arbitrarily boosted by winnowing down its intended targets to “easy” ones.
The score that a method earns for each prediction is its maximally achievable score (max) minus its distance to the observed value (loss). The type of loss function7 and max are specified by the type and context of the given task.8 Often max is identified with the greatest possible loss; this entails that min, that is, the minimal score, is zero. If loss is identified with the absolute distance function, max is given as the width of the observable value range. For example, if the task consisted in forecasting the next day's mean temperature with values lying in the range between −20°C and +40°C and the loss function is given as the absolute difference between predicted and actual mean temperature, then max is 60°C. If the task is the prediction of probabilities, such as that it will rain tomorrow, then, according to a famous result of Brier (1950), the appropriate loss function is not the absolute but the squared distance between the predicted probability and the truth value of the predicted event; thus, max = 1 (true) and min = 0 (false). In the example of people intending to buy a car, a natural loss function might be the absolute difference between mean degree of satisfaction (in an unbiased sample) with the car type recommended by method M1 and that recommended by method M2, with degree of satisfaction measured on a scale ranging, say, from min = 0 to max = 10.
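To fix ideas, here is a minimal sketch of the factorization above, using the temperature‐forecasting example. The code is our own illustration, not the authors'; in particular, we normalize each score to the unit interval (dividing by max) so that ecological validity, applicability, and cognitive success all lie in [0, 1], as in the tables reported below.

```python
MAX_SCORE = 60.0  # width of the observable range, e.g., -20 to +40 degrees C

def cognitive_success(predictions, observations, max_score=MAX_SCORE):
    """Return (applicability, ecological validity, cognitive success).

    `predictions` covers all intended targets; None marks targets for
    which the method renders no prediction (i.e., is not applicable).
    """
    rendered = [(p, o) for p, o in zip(predictions, observations) if p is not None]
    applicability = len(rendered) / len(predictions)
    # score per prediction = max - loss, with absolute-distance loss,
    # normalized here to the unit interval
    scores = [(max_score - abs(p - o)) / max_score for p, o in rendered]
    validity = sum(scores) / len(rendered) if rendered else 0.0
    return applicability, validity, validity * applicability

preds = [12.0, None, 20.0, 5.0]  # mean-temperature forecasts; None = no forecast
obs = [10.0, 8.0, 20.0, 11.0]    # actually observed mean temperatures
print(cognitive_success(preds, obs))  # success = validity (~0.96) x applicability (0.75)
```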
4.1 Some possible objections to cognitive success
Let us freely admit that intuitions can play a role in determining the details of the scoring function. However, robust results should be largely invariant to changes of the scoring functions (see the section on uncertain conditionals below). Another objection to the concept of cognitive success is that it downplays the role of explanations relative to predictions. This challenge can serve as a further test case for our account. Salmon (1984) argued that what distinguishes explanations from predictions is that, whereas predictions can be based on noncausal correlations, explanations must spell out the causes of the event to be explained. Although we agree, we emphasize that causality can easily be embedded into the concept of cognitive success. What distinguishes a causal from a noncausal correlation between a variable X and another one Y is that the effect of an intervention on X will be transmitted to Y only if X is a cause of Y (this is a consequence of the causal Markov condition; see Pearl, 2009). Thus, the cognitive success of causal information resides in its capacity to predict the consequences of (human) actions.
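The intervention criterion in the preceding paragraph can be illustrated with a small simulation. This is a hypothetical sketch of our own: in world A, X causes Y; in world B, a common cause Z produces a merely correlational link. Observationally, both worlds show an X–Y correlation; after an external intervention that sets X by randomization, the correlation survives only where X is a cause of Y, which is exactly what makes causal information predictively valuable.

```python
import random

random.seed(1)
N = 100_000

def corr(xs, ys):
    # Plain Pearson correlation, written out to keep the sketch dependency-free.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = (sum((x - mx) ** 2 for x in xs) / len(xs)) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / len(ys)) ** 0.5
    return cov / (sx * sy)

# Observational regime.
xa = [random.gauss(0, 1) for _ in range(N)]
ya = [x + random.gauss(0, 1) for x in xa]        # world A: X -> Y
z = [random.gauss(0, 1) for _ in range(N)]
xb = [v + random.gauss(0, 1) for v in z]         # world B: Z -> X
yb = [v + random.gauss(0, 1) for v in z]         # world B: Z -> Y
print(corr(xa, ya), corr(xb, yb))                # both clearly positive

# Interventional regime: X is set by external randomization (its usual causes are cut).
xa_do = [random.gauss(0, 1) for _ in range(N)]
ya_do = [x + random.gauss(0, 1) for x in xa_do]  # X still causes Y: correlation persists
xb_do = [random.gauss(0, 1) for _ in range(N)]
yb_do = [v + random.gauss(0, 1) for v in z]      # X was no cause of Y: correlation vanishes
print(corr(xa_do, ya_do), corr(xb_do, yb_do))    # roughly 0.7 vs roughly 0.0
```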
Another account identifies good explanations with argument patterns that unify many empirical phenomena (Kitcher, 1981). However, it can be shown that empirical unification correlates with empirical confirmation and this, in turn, correlates with predictive success (Schurz & Lambert, 1994). The only notions of explanation that are not and should not be covered by our account are those that make the quality of an explanation dependent on its coherence with “intuitions of understanding” and that are inexplicable in terms of causal or unificatory concerns.
The two core components of our notion of cognitive success, ecological validity and applicability, are related to a number of further important evaluative dimensions:
A method with high ecological validity has a high truth rate9 in those situations where it is applicable; thus, high ecological validity is connected with low risk of error.
A method with high ecological validity may nevertheless have low cognitive success if it can rarely be recruited due to low applicability.
A method with high applicability renders predictions possible across many predictive contexts. High applicability therefore suggests that the method has a high information output.
On the other hand, a method's applicability is inversely related to its cognitive costs, measured in terms of the information input needed and the effort required to process it. The higher the cognitive cost of a method, the more often it will be inapplicable because it exceeds the upper bound of agents’ cognitive resources (see also Payne, Bettman, & Johnson, 1993).
The threefold tension between risk of error, information output, and cognitive costs creates a fitness landscape10 that can explain many facets of the pros and cons of competing systems of rational reasoning. How these cognitive fitness factors interact in concrete cognitive tasks will be discussed next. In particular, the tension between these factors explains why cognitive science needs not a monism but a pluralism of cognitive methods, and why the evaluation of those methods’ advantages and weaknesses should rely not on intuition but on careful comparison of their respective success. Next, we illustrate this point by applying the notion of cognitive success in the domains of classical material conditionals, uncertain conditionals, Bayesian probabilities, and prediction and choice.
4.2 Cognitive success and deductive reasoning
Let us return to classical logic, our introductory example of what many psychologists considered a universal norm of rational cognition in the 20th century. Deductive inferences are, by definition, completely valid—that is, they have maximum ecological validity (1.0): In all situations in which all premises are true, the derived conclusion will invariably be true. Yet this ideal validity of deductive inferences stands in stark contrast to their very low applicability, as emphasized by Wundt (1912/1973; see above). That is, the prevalence of deductive inferences with nontrivial conclusions is low. As an example, consider inferences of propositional logic involving the classical (material) conditional “If P, then Q” (semantically equivalent to “not‐P or Q”). It can be shown that this inference can have a nontrivial conclusion insofar as it is possible to confirm each premise by observations that do not already contain the conclusion. This will be the case if the following condition is satisfied: The verification of the conditional premise “If P, then Q” is based not on the observation of “not‐P” or of “Q,” but rather on an inductively supported belief that expresses (at least implicitly11) a strict (exceptionless) generality of the form “For all x in a given domain: If P(x), then Q(x)” (see Schurz, 2014, sect. 5.1). Exceptionless regularities (i.e., conditional probabilities of 1.0) are known to be rare in empirical (nonmathematical) domains. What does this mean? It simply means that inferences of propositional logic with nontrivial conclusions are rare. Therefore, their overall cognitive success will be low in these domains, notwithstanding their maximum validity. Only if one could demonstrate that in a specific environment the applicability of deductive reasoning is high could one argue in favor of this system's high cognitive success in this environment. One such environment may be cheater detection, where people can be, under specific circumstances, remarkably successful when measured in terms of modus tollens reasoning (Cosmides & Tooby, 1992). Another domain may be consistency checks in legal reasoning (Arkes et al., 2016).
4.3 Cognitive success and reasoning with uncertain conditionals
Uncertain conditionals are conditionals of the form “If A, then normally B.” They are epistemically acceptable if the associated conditional probability pr(B|A) is “sufficiently” high, that is, higher than a contextually determined threshold α > .5. Systems of probability logic infer further conditionals from sets of uncertain conditionals. There are four well‐known systems of reasoning with uncertain conditionals: O, P, Z, and QC. System O (Hawthorne & Makinson, 2007) is the only system that preserves epistemic acceptability from premises to conclusion for any chosen acceptability threshold. System P is the famous system of probability logic developed by Adams (1975). It guarantees to preserve epistemic acceptability only if the sum of the premises’ conditional uncertainties is smaller than 1.0 minus the acceptability threshold (where uncertainty is defined as 1.0 minus probability; Oaksford & Chater, 2007, p. 111). System Z goes back to Pearl (1990) and makes additional default assumptions that, roughly speaking, maximize the entropy of the distribution under the high‐probability constraints dictated by the premise conditionals (Hill & Paris, 2003). System QC (for “quasi‐classical reasoning”) reasons with uncertain conditionals as if they were exceptionless conditionals of classical logic.
For illustration, assume a small world with only four predicates: “being a bird” (B), “being able to fly” (F), “having wings” (W), and “being male” (M). The known premises are the two uncertain conditionals (a) B ⇒ F (birds can fly) and (b) B ⇒ W (birds have wings), with associated probabilities pr(F|B) = pr(W|B) = .95. System O draws only trivial inferences such as B&(M∨¬M) ⇒ F from Premise a, meaning birds that are either male or not male can fly, with an associated probability of .95. In addition to the previous inference, system P draws the inference B&W ⇒ F from Premises a + b, meaning, birds having wings can fly, with an associated probability of .9. System P does so by applying the law of “cautious monotonicity” and the uncertainty sum rule. In addition to the previous inferences, System Z draws the inferences B&M ⇒ F and B&¬M ⇒ F from Premise a, meaning male birds as well as nonmale birds can fly, with an associated probability of .95. It does so by making the default assumption that the predicates “male” and “being able to fly” are statistically independent (likewise in application to Premise b). Finally, in addition to all previous inferences, System QC draws the “risky” inference of contraposition ¬F ⇒ ¬B, meaning nonflying objects are not birds. This follows from Premise a with an associated probability of .95 (similarly in application to Premise b).
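The probability of .9 for the inference B&W ⇒ F derived by System P can be recovered with System P's uncertainty sum rule: the uncertainty (1 minus probability) of a derived conditional is at most the sum of the premises' uncertainties. The following is our own toy sketch of that bound, not an implementation of any of the four systems.

```python
def system_p_lower_bound(premise_probs):
    # Uncertainty sum rule of System P: the conclusion's guaranteed probability
    # is at least 1 minus the summed uncertainties of the premises.
    total_uncertainty = sum(1.0 - p for p in premise_probs)
    return max(0.0, 1.0 - total_uncertainty)

# Premises from the bird example: pr(F|B) = pr(W|B) = .95.
# System P derives B&W => F with probability at least .9.
print(system_p_lower_bound([0.95, 0.95]))  # 0.9
```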
These four systems differ significantly in their predictive power. They become increasingly powerful and, at the same time, more risky and error prone. That is, the applicability (number of derived conclusions) and error probability (number of mistakes made) increase from O to P to Z to QC. From a consequentialist viewpoint, the question is not which of these systems is the right or true one, but which is superior with regard to cognitive success. Schurz and Thorn (2012) performed a cognitive‐success analysis of the Systems O, P, Z, and QC. In their computer simulation, they repeatedly generated an environment with four binary variables a, b, c, d and a randomly generated probability distribution. The possible cases (predictive targets) consisted of all 464 conditionals with conjunctions of one, two, or three unnegated or negated variables in their antecedent or consequent. The task on which the four systems were compared was the derivation of conditionals from four randomly selected base conditionals with conditional probabilities ≥ .7, together with a prediction of their associated conditional probabilities.12 Thus, there were at most 460 conditional probabilities to be predicted. Four different scoring rules for cognitive success were compared. Table 1 presents the results for the ACG (advantage compared to guessing) score, defined in terms of the absolute difference between the predicted and the actual conditional probability for each of the derived conditionals. Although the ordering of the four systems according to their ecological validity is O > P > Z > QC, their applicability ordering is precisely the inverse, QC > Z > P > O. The resulting cognitive success ordering is Z > QC > P > O.
Table 1. Cognitive success analysis of four systems of reasoning with uncertain conditionals

System | Applicability (% of 460 intended predictions) | Sum of Scores (ACG score)(a) | Ecological Validity (range [0, 1]) | Cognitive Success (range [0, 1])
O | 1.0 | 4.6 | 0.92 | 0.009
P | 1.4 | 5.2 | 0.82 | 0.011
Z | 10.5 | 22.5 | 0.47 | 0.049
QC | 22.9 | 8.5 | 0.08 | 0.018

Note: (a) For normalization purposes, the ACG scores in table 2 of Schurz and Thorn (2012) were multiplied by 3.
In light of these results, Schurz and Thorn (2012) concluded that System Z achieves the optimal balance in the trade‐off between deriving true and informative conclusions and avoiding false or uninformative ones.13
Schurz and Thorn (2012) and Thorn and Schurz (2014) investigated three additional scoring rules: PIR (price is right), sPIR (subtle price is right), and EU (expected utility). The qualitative orderings of the ecological validity, applicability, and cognitive success of the four systems were the same across all four success measures, demonstrating the robustness of the results.14
4.4 Cognitive success and Bayesian probabilities
Bayesian probabilities are internally coherent degrees of subjective belief. Following arguments by Ramsey (1926/1990) and De Finetti (1937/1964), coherence is usually justified as follows: If one interprets degrees of belief as fair betting quotients, one is guaranteed never to accept a system of bets that exacts a logically guaranteed loss, that is, a “Dutch book.”15
The Dutch book argument is thus indeed a consequentialist justification, as it ties the consequences of a person's subjective probabilities back to monetary outcomes. However, what is thus justified is merely the coherence of probabilities, and this means only that they need to satisfy the basic (Kolmogorovian) probability axioms.16 This is indeed a necessary constraint on rational degrees of belief, but by itself it is not sufficient for rational degrees of belief to yield cognitive success. The condition of a coherent fair betting quotient depends solely on the gambler's subjective beliefs and preferences. It does not involve any adaptation to the environment, that is, to the true frequency or statistical probability (frequency limit) of the events bet on. Consider, for example, a subjectivist who repeatedly offers betting odds of 1:1 that she will roll a 6 with an unbiased die. She considers this bet to be fair and is equally willing to accept the opposite bet that she will not roll a 6. She is coherent and will remain coherent even after she has lost her entire fortune. She will be puzzled that while everybody readily accepted her first bet, nobody accepted the opposite bet, even though both are equally fair in her view. Thus, if she ignores the frequentistic chances of the events bet on, she will be unable to explain why she lost everything and others won.
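The bettor's predicament is easy to reproduce. The following simulation is our own sketch of the die example: her betting quotient of 1:1 is internally coherent, yet because the statistical probability of a 6 is 1/6, her expected gain per unit stake is (1/6)(+1) + (5/6)(−1) = −2/3.

```python
import random

random.seed(0)

# The coherent but miscalibrated subjectivist: she offers 1:1 odds that a
# fair die shows a 6, and everyone happily accepts the bet against her.
ROUNDS, stake, bankroll = 10_000, 1.0, 0.0
for _ in range(ROUNDS):
    roll = random.randint(1, 6)
    bankroll += stake if roll == 6 else -stake
print(bankroll / ROUNDS)  # close to -2/3: coherence alone does not protect her
```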
As this, admittedly engineered, example illustrates, the problem of subjective degrees of belief is not their low applicability (an individual's beliefs could discriminate between many states of the world) but their potentially low ecological validity. The Bayesian coherence requirement is too weak to exclude cognitively unsuccessful behavior if one's degrees of belief are not connected with objective truth‐chances (i.e., statistical probabilities; Knight, 1921). There are pertinent methods in Bayesian statistics of establishing this connection (less well‐known than the Dutch book arguments), such as Lewis's “principal” principle (1980/1986) or De Finetti's (1937/1964) equivalent “exchangeability” principle. These principles demand that a person's rational degree of belief (Pr) in an event (E) should have the value r, given that all that the person knows is that the statistical probability (pr) of the corresponding event type E is r, more formally, Pr(E | pr(E) = r) = r (for arbitrary r ∈ [0, 1]). One can prove that the satisfaction of this principal principle is equivalent to the assumption of Bayesian statistics that degrees of belief can be represented as weighted averages of statistical probabilities. Subjective probabilities that satisfy this condition are known to converge toward the true statistical frequencies when the evidence increases infinitely, independently of the assumed prior distributions (Gillies, 2000, p. 71ff; Howson & Urbach, 1993, chapter 14; Schurz, 2013, pp. 165, 236f). It is only if this connection between subjective and objective probabilities is established that Bayesian reasoning can be cognitively successful and decisions based on maximization of subjectively expected utility can maximize one's average utility.
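The convergence claim at the end of this paragraph can also be illustrated with a toy example of our own (a conjugate beta-binomial update, one standard way of representing degrees of belief as weighted averages of statistical probabilities): two bettors who start from very different Beta priors end up with nearly the same posterior mean, close to the true frequency.

```python
import random

random.seed(2)
TRUE_P = 1 / 6  # true statistical probability of the event (e.g., rolling a 6)

# Two very different Beta(a, b) priors over the event's probability.
priors = {"optimist": (8.0, 2.0), "skeptic": (1.0, 9.0)}

data = [1 if random.random() < TRUE_P else 0 for _ in range(20_000)]
successes, n = sum(data), len(data)
for name, (a, b) in priors.items():
    posterior_mean = (a + successes) / (a + b + n)  # conjugate Beta-Bernoulli update
    print(name, round(posterior_mean, 3))  # both approach 1/6 (about 0.167)
```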
4.5 Cognitive success in prediction and choice
Perhaps more than in any other research area in psychology, the tension between apriorist and consequentialist accounts of rational cognition unfolded in the debate about the meaning of bounded rationality in general and the role of heuristics in particular. The heuristics‐and‐biases research program (Kahneman, 2011), possibly the most influential research program in psychology of the last five decades, has consistently invoked the rules of probability theory and statistics as a priori norms for human rationality. Deviations from these norms in people's reasoning were taken as manifestations of irrationality. In Kahneman's (2003) portrayal of the program's research, it “attempted to obtain a map of bounded rationality, by exploring the systematic biases that separate the beliefs that people have and the choices they make from the optimal beliefs and choices assumed in rational‐agent models” (p. 1449). Many of the systematic biases were attributed to the operation of heuristics (e.g., availability, representativeness, and anchoring‐and‐adjustment) that, although “quite useful,” sometimes “lead to severe and systematic errors” (Tversky & Kahneman, 1974, p. 1124). On this view, a heuristic's rationality is evaluated exclusively on the basis of its conformity to the norms and not in terms of its potential cognitive success.
This changed with the arrival of the ecological rationality research program, which has redefined the normative study of heuristics; by extension, it interprets bounded rationality in terms of the match between a heuristic and an environment, the two blades in Simon's (1990, p. 7) scissors metaphor. On this view, this match determines the performance and thus the cognitive success of a heuristic. In order to measure cognitive success, researchers of heuristics’ ecological rationality have conducted a wide range of tournaments between simple heuristics and complex strategies commonly considered to be normative. These computer simulations encompass, for instance, the analysis of heuristic inferences about real‐world quantities (e.g., which of two cities has a larger population size; Gigerenzer & Brighton, 2009; Gigerenzer & Goldstein, 1996; Katsikopoulos, Schooler, & Hertwig, 2010) and more recently the analysis of choices between uncertain lottery options (Hertwig, Woike, Pachur, & Brandstätter, in press) and of choices in strategic games (Spiliopoulos & Hertwig, in press). For illustration, consider the tournament involving strategies choosing between uncertain lottery options (Hertwig et al., in press). The simulations implemented 20 choice environments (defined by different payoff and probability distributions) and randomly generated 6,000 choice problems per environment. The innovation in this simulation was that all strategies (with the exception of the omniscient expected value model) learned about the properties of each problem by sequentially taking one draw at a time from each of the options per problem. The strategies then chose what they inferred to be the best option after each sample (learning stopped after 50 rounds).
Table 2 presents the cognitive success of each of the six choice strategies. The normative benchmark for human beings is either the omniscient expected value theory or, more realistically, the sampling‐based expected value theory. In light of the cognitive success measure, Hertwig et al. (in press) concluded that under uncertainty (when all strategies have incomplete knowledge and need to sample the environment), some simple choice heuristics closely approximate the performance of the sampling‐based expected value theory—even though they may not take entire swaths of information into account. The well‐performing equiprobable heuristic, for instance, ignores all probabilities and merely calculates the mean of all outcomes within each option, then chooses the option with the highest mean. Indeed, the research on ecological rationality has repeatedly demonstrated that simple heuristics, which curtail search for information and reach decisions without complex calculations, can lead to surprisingly good inferences and predictions relative to complex algorithms based on the principles of logic, probability theory, and maximization.
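Before turning to the table, here is a minimal sketch of the contrast just described: the equiprobable heuristic versus a sampling‐based expected‐value benchmark (the natural‐mean rule). The lotteries, sample sizes, and implementation details are our own illustrative assumptions, not those of the tournament in Hertwig et al. (in press).

```python
import random

random.seed(3)

# Two options, each a lottery given as (outcome, probability) pairs.
options = [
    [(32.0, 0.10), (0.0, 0.90)],  # rare large payoff, expected value 3.2
    [(4.0, 0.80), (0.0, 0.20)],   # frequent small payoff, expected value 3.2
]

def sample(option, n):
    outcomes, probs = zip(*option)
    return random.choices(outcomes, weights=probs, k=n)

def equiprobable(samples_per_option):
    # Equiprobable heuristic: ignore how often outcomes occurred; average the
    # distinct outcomes experienced per option and pick the highest mean.
    means = [sum(set(s)) / len(set(s)) for s in samples_per_option]
    return max(range(len(means)), key=means.__getitem__)

def natural_mean(samples_per_option):
    # Sampling-based expected-value benchmark: average all samples
    # (i.e., frequency-weighted), then pick the highest mean.
    means = [sum(s) / len(s) for s in samples_per_option]
    return max(range(len(means)), key=means.__getitem__)

draws = [sample(opt, 20) for opt in options]
print(equiprobable(draws), natural_mean(draws))  # chosen option index per strategy
```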
Table 2. Cognitive success analysis of choice strategies in choice environments requiring learning of the properties of the choice options

Strategy(a) | Applicability (in %)(b) | Cognitive Success (range [0%, 100%])(c): N = 5 | N = 20 | N = 50
Equiprobable | 100 | 93.1 | 94.6 | 93.7
Probable | 100 | 86.4 | 92.3 | 93.5
Lexicographic | 100 | 86.4 | 87.9 | 88.0
Least‐likely | 100 | 54.2 | 61.5 | 64.3
Sampling‐based expected value theory(d) | 100 | 94.0 | 98.3 | 99.3
Omniscient expected value theory | 100 | 100 | 100 | 100

Notes: (a) All strategies are described in detail in Hertwig et al. (in press). (b) In this analysis, all strategies were always applicable because they could either select the options or choose randomly. (c) Average performance across all 20 choice environments and for N = 5, 20, and 50 samples taken per option (two options with two, four, and eight outcomes) from the environment; the cognitive success metric is normalized such that 100% means that a strategy always selected the option with the higher expected value (as did the omniscient expected value model) and 0% means that a strategy always selected the option with the lowest expected value. (d) The sampling‐based expected value theory can also be implemented in terms of a simple heuristic (i.e., the natural‐mean heuristic; see Hertwig et al., in press).
The strategies in Table 2 selected an option randomly in cases where their policy and the information available did not render a choice, that is, were not applicable. For this reason their applicability is always 100% and their ecological validity and cognitive success are identical. In other tournaments measuring cognitive success, some competing methods have low applicability, whereas others are always applicable. This is particularly the case in tournaments including meta‐inductive selection strategies. The account of meta‐induction (Schurz, in press; Schurz & Thorn, 2016) is in an important sense complementary to the research program of ecological rationality: Meta‐induction is a general meta‐cognitive strategy designed to choose, in each situation in which it is applicable, a locally optimal method from a given toolbox of candidate methods. Two important meta‐inductive strategies are take‐the‐best17 and success‐weighting. Take‐the‐best applies in each round of the tournament. It selects the prediction method that is applicable (i.e., renders a prediction) and that has the best success record in the past. Success‐weighting predicts a weighted average of the predictions of those methods that rendered a prediction in the given round of the tournament, with weights reflecting the methods’ past successes. Table 3 presents the results of applying take‐the‐best and success‐weighting to the data of the Monash University footy tipping competition (MUFTC).18 The predictive target was forecasting the 3‐valued results (1, 0, or tie) of matches of the Australian Football League. The tournament included the predictions of 1,071 human participants (Table 3 reports the five human forecasters with the highest success rates, Forecasters 1–5) as well as the predictions of the different meta‐induction strategies, including take‐the‐best and success‐weighting.
Table 3. Cognitive success analysis of the Monash University footy tipping competition (after 1,514 rounds)(a)

Predictor | Applicability (in %) | Sum of Scores | Ecological Validity (range [0, 1]) | Cognitive Success (range [0, 1])
Success‐weighting | 100 | 877 | 0.579 | 0.579
Take‐the‐best | 100 | 873 | 0.577 | 0.577
Forecaster 1 | 39 | 839 | 0.640 | 0.554
Forecaster 2 | 27 | 811 | 0.637 | 0.536
Forecaster 3 | 13 | 789 | 0.666 | 0.521
Forecaster 4 | 12 | 789 | 0.676 | 0.521
Forecaster 5 | 13 | 787 | 0.658 | 0.520

Note: (a) The target was forecasting the results of 1,514 matches of the Australian Football League over eight seasons, from 2005 to 2012. The tournament included the predictions of 1,071 human participants and the predictions of various meta‐induction strategies, including take‐the‐best and success‐weighting.
The five best human forecasters displayed high performance only in certain rounds and refrained from making predictions in other rounds. The meta‐inductive strategies utilized the predictions of the best human forecaster in each round, with the result that their applicability was 100% and their cognitive success surpassed that of the best human forecasters (with a slight advantage of success‐weighting over the simpler take‐the‐best strategy).
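A simplified sketch of the meta‐inductive take‐the‐best strategy follows. This is our own illustration of the selection logic, not the MUFTC implementation: in each round, among the forecasters who render a prediction, follow the one with the best success rate so far.

```python
# Minimal sketch of meta-inductive take-the-best for 3-valued footy tips.
# Each round is (predictions, outcome), where predictions maps a forecaster
# name to a tip ("1", "0", or "tie") or to None if no tip was rendered.
def take_the_best(rounds):
    hits, attempts = {}, {}
    own_score = 0
    for predictions, outcome in rounds:
        applicable = [n for n, tip in predictions.items() if tip is not None]
        if applicable:
            # Follow the applicable forecaster with the best record so far.
            best = max(applicable,
                       key=lambda n: hits.get(n, 0) / max(attempts.get(n, 0), 1))
            own_score += int(predictions[best] == outcome)
        for n in applicable:  # update success records after the round
            attempts[n] = attempts.get(n, 0) + 1
            hits[n] = hits.get(n, 0) + int(predictions[n] == outcome)
    return own_score

rounds = [
    ({"f1": "1", "f2": None}, "1"),
    ({"f1": "0", "f2": "tie"}, "tie"),
    ({"f1": None, "f2": "1"}, "1"),
]
print(take_the_best(rounds))  # the meta-strategy is applicable in every round
```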
4.6 What does cognitive success mean for the is–ought relationship?
The focus of cognitive success on the consequences of rational cognition suggests a new view of the relationship between the normative (“ought”) and the descriptive (“is”) dimensions of theories of reasoning. According to the traditional division of labor, it is the task of armchair philosophy to address normative issues, and that of empirical psychology to answer descriptive questions. From the consequentialist perspective of cognitive success, however, empirical results can become normatively relevant, and normative innovations can suggest new empirical questions (see Corner & Hahn, 2013).19
The relationship between the normative and the descriptive, as conceptualized in different accounts of rationality in reasoning—such as apriorism, descriptivism, or ecological rationality—emerges most clearly from the answers they give to the following question: What should one infer from a conflict between descriptively observed and normatively recommended behavior in the context of a cognitive task?
In general, the answers depend on the theoretical positions adopted. Thus, how scholars respond to the gap between “is” and “ought” is diagnostic with regard to the justification of the rationality norms they endorse. Let us assume that an experiment reveals a divergence between how people reason and how they ought to reason according to some standard rational benchmark such as norms of probability theory, logic, or axioms of rational choice. If a scholar adopts an intuition‐based justification of rational cognition, the normatively recommended behavior is not defined in terms of its cognitive success but by reference to a priori intuitions. Thus, the cognitive behavior observed will be judged to be irrational. Alternatively, a scholar may endorse a strong relativism of different systems of intuition. In this case, however, no strong rationality inferences can be drawn (Cohen, 1981; Shier, 2000, p. 78).
In consequentialist accounts, in contrast, both the empirically observed reasoning and the “reasoning” of the normative systems will be evaluated with regard to cognitive success. For example, if logistic regression were regarded as the normative standard for predicting the value of a criterion based on a set of cues, then the cognitive success of this normative standard, assuming some statistical knowledge base, could be measured against the cognitive success of people's predictive inferences from the same input (see also Gigerenzer & Brighton, 2009). This opens up a new option for responding to conflicts between empirical observations and normative recommendations. The cognitive success of observed reasoning is not necessarily worse than that of the normative system. Indeed, observed reasoning may, in fact, even outperform normative recommendations. As mentioned before, evidence for the latter has been compiled in research on bounded and ecological rationality (Gigerenzer & Brighton, 2009; Gigerenzer, Hertwig, & Pachur, 2011; Hertwig et al., 2013, in press; Todd et al., 2012). If the observed cognitive behavior outperforms the normative system, the consequentialist would need to conclude that the assumed “normative system” is second best and, thus, can no longer be invoked to derive normative recommendations.20
If, however, observed behavior scored lower on cognitive success than the normative system, the consequentialist's conclusion would depend on a second theoretical choice that is open to consequentialists but not to intuition‐based accounts, namely, attitudes toward cognitive adaptationism. This position assumes that human cognition is near‐optimally adapted to its relevant environments. Therefore, a consequentialist proponent of cognitive adaptationism may be inclined to argue that the assumed measure of cognitive success is inappropriate and in need of revision. In contrast, a nonadaptationist consequentialist would conclude, faced with evidence that observed behavior's cognitive success is surpassed by that of the normative system, that human cognition is below par.
Let us explain the relationship between cognitive consequentialism and cognitive adaptationism in more detail. One might think that cognitive consequentialism entails cognitive adaptationism because the former evaluates cognitive systems by their cognitive success and cognitive success entails being well adapted. This reasoning, however, conflates “is” and “ought.” Cognitive consequentialism makes the normative claim that cognitive systems should be evaluated in terms of their cognitive success. This implies the normative claim that, ceteris paribus, cognitive systems should be well adapted to their environment. However, cognitive adaptationism is not a normative requirement but an empirical thesis, stating that because humans are the product of evolutionary selection processes, they will be cognitively well adapted. This may or may not be the case—an issue to which we return below. For the present discussion, however, it is important to note that cognitive adaptationism is not entailed by the normative requirement of cognitive consequentialism. Consequently, a cognitive consequentialist can be more or less inclined to assume cognitive adaptationism. This choice, in turn, will determine the response to an instance in which actual cognition scores lower on cognitive success than the normative system.
4.7 Cognitive consequentialism and the issue of adaptationism
Let us finally discuss cognitive adaptationism as found in Anderson's (1990, 1991a,b) work in more detail because, prima facie, his rational analysis bears significant resemblance to our account of cognitive consequentialism. Anderson's method consists of five iterative steps (Anderson, 1991a, p. 473):
1. Specify the goals of the cognitive system.
2. Develop a model of the environment to which the system is adapted.
3. Make minimal assumptions about computational limitations, such as memory storage and computation time.
4. Derive the optimal behavior given (1)–(3) above.
5. Finally, test empirically whether the predictions of the optimal behavior derived in (4) are confirmed by human cognitive performance; if not, then the task–environment model developed in (1) and (2) has to be revised.
Anderson has applied rational analysis to domains such as memory analysis, categorization theory, causal inference, and problem solving.21 Regarding Step 1, in these domains Anderson identifies the goal of the cognitive system as some kind of predictive inference. Thus, there is agreement between Anderson's analysis of cognitive goals and our definition of cognitive success. Steps 2 and 3 are also consistent with a cognitive success analysis, except that in Step 3 we would argue that “minimal” assumptions should be replaced by realistic assumptions about computational limitations (see Simon, 1990). The first crucial difference from our account appears in Step 4. The consequentialist account does not intend to “derive” the optimal method from the description of the task and environment, because apart from very simple cases this is impossible. In the area of prediction methods, the nonexistence of a universally optimal method is the content of Wolpert's (1996) famous no free lunch theorem (cf. Schurz, 2017). Simon (1991) demonstrated that what does the real work in Anderson's derivations is a set of specific auxiliary assumptions about the cognitive system and its environment. We mostly agree with Simon's critique of Anderson's optimal adaptation hypothesis: All that rational analysis can do is consider all available (but not all possible) competing methods for a given task and investigate their cognitive success. This is what the consequentialist account suggests.
Cognitive success, unlike optimal adaptation, thus behaves similarly to how Simon portrayed natural selection:
The theory of natural selection is not an optimizing theory for two reasons. First, it can, at best, produce only local optima, because it works by hill‐climbing up the nearest slope. It has no mechanism for jumping from peak to peak… . Second, it selects only among the alternatives that are available to it. (1991, p. 29)
This brings us to Anderson's Step 5. This step presupposes the adaptationist thesis that human cognitive behavior is nearly optimally adapted (Anderson, 1991a, table 1, p. 473). As a general claim, such “evolutionary optimism” is difficult to defend for several reasons. First, evolutionary selection sometimes produces suboptimal and even dysfunctional adaptations (Ridley, 1993, p. 343f). Second, although genetic evolution optimizes the biological reproduction rate, it is less clear how this process relates to cognition. Anderson acknowledged the fact that evolutionary selection does not find a global optimum but merely a local one. However, there is a world of difference between a global and a local maximum: It can be as large as the difference between a sand hill and Mount Everest. All of the constraints on cognitive processes dictated by the biological architecture of the human brain (see Jones & Love, 2011) are concealed in this difference.
In contrast to rational analysis, cognitive consequentialism does not imply cognitive adaptationism, though it is compatible with it. Obviously, the human brain is well adapted in many respects, but not in all. Therefore, we suggest that the view of rational cognition that appears most defensible and productive for contemporary cognitive science is that of a cognitive consequentialism that is not bound to strong adaptationist assumptions. In conclusion, the consequentialist account proposes to modify Anderson's Steps 4 and 5 as follows:
4' Derive the consequences of the available competing cognitive methods [given the output of (1)–(3)] and test their cognitive success.
5' Compare the locally optimal method [i.e., the output of (4)] with the actual human behavior.
5'.1 If they agree, recommend the locally optimal method and infer that human cognition is well adapted.
5'.2 If they disagree, two cases are possible:
5'.2.1 If human behavior outperforms the locally optimal method, search for better cognitive methods (to thus eventually explain human behavior): Backtrack to (4) and iterate.
5'.2.2 If human performance is worse than the locally optimal method, search for local constraints on the mind's cognitive mechanisms that can explain the disagreement. Backtrack to (3), add these constraints, and iterate. At the same time, recommend the locally optimal method as a rational improvement of intuitive human cognition that can be learned by cognitive training.
In other words, at this point cognitive consequentialism potentially has educational implications.
5 Conclusion: New questions about rational cognition
The study of human cognition and its rationality seems inseparable from the question of how successful it is. In psychology and economic research, rationality of human cognition has often been equated with coherence, that is, rules for internal consistency, often defined by propositional logic and probability theory (see Arkes et al., 2016). However, for decades, psychologists have disagreed over how well coherence‐based normative systems describe human cognition and which coherence‐based systems (logic, probability theory, or decision theory) should be granted the status of normative benchmarks for cognition (see our introduction). We have discussed the problems that arise when normative systems are justified by reference to a priori intuitions, as is typically the case. As an alternative and a potential response to some of these problems, we propose a consequentialist account of normative systems. The major tenets of this account are as follows:
Traditional normativism fails because a priori intuitions are inadequate as justifications of norms of rational cognition.
Traditional descriptivism fails because norms of rational cognition are inevitably needed as benchmarks for successful reasoning.
Norms of rational cognition are better justified from a consequentialist perspective, that is, in terms of their cognitive success.
The concept of cognitive success of a cognitive method assumes that all decision tasks can be reformulated as prediction tasks.
Cognitive success is defined as the product of ecological validity and the method's applicability. Determining the available methods' cognitive success permits one to compare them on the same scale.
Cognitive consequentialism is related to ecological rationality to the extent that cognitive success depends on the given cognitive task and a specific environment.
We hope and believe that this approach (or a similar consequentialist concept) offers a way to overcome the trite division of labor between the empirical study of the mind and philosophy. This approach raises new and interesting questions. For instance, what does cognitive success imply for research that has drawn strong conclusions about the (ir)rationality of human cognition (e.g., the heuristics‐and‐biases research program; Kahneman, 2011)? Or, assuming that the success of different cognitive methods depends on specific environments and that no method succeeds in all environments, will there be metamethods that are able to select the best method for the environment in question (see “meta‐induction”; Schurz, in press)? Finally, to what extent is cognitive success adaptive in an evolutionary sense, and how does the answer to this question influence the understanding of the relation between is (observed cognitive behavior) and ought (normatively recommended behavior)? We do not have answers to these and other questions, but we hope that we have convinced the reader that it is timely to ask these questions, thus leaving some skirmishes of the “rationality wars” behind us (Samuels, Stich, & Bishop, 2012) and turning the question of what rational cognition is into a less dogmatic and a more empirical one.
Abstract: One of the most discussed issues in psychology—presently and in the past—is how to define and measure the extent to which human cognition is rational. The rationality of human cognition is often evaluated in terms of normative standards based on a priori intuitions. Yet this approach has been challenged by two recent developments in psychology that we review in this article: ecological rationality and descriptivism. Going beyond these contributions, we consider it a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. We take a first step in this direction by proposing that the rationality of both cognitive and normative systems can be measured in terms of their cognitive success. Cognitive success can be defined and gauged in terms of two factors: ecological validity (the system's validity in conditions in which it is applicable) and the system's applicability (the scope of conditions under which it can be applied). As we show, prominent systems of reasoning—deductive reasoning, Bayesian reasoning, uncertain conditionals, and prediction and choice—perform rather differently on these two factors. Furthermore, we demonstrate that conceptualizing rationality according to its cognitive success offers a new perspective on the time‐honored relationship between the descriptive (“is”) and the normative (“ought”) in psychology and philosophy.
1 How psychologists measure rational cognition
For a number of decades, psychologists have typically employed only one experimental method to study whether human cognition is rational (Lopes, 1991). Their approach—devising two or more alternative hypotheses and a crucial experiment with alternative possible outcomes, each excluding one or more of the hypotheses—has been interpreted as enabling a strong inference (Platt, 1964). In research on the rationality of human cognition this means that the experimental set‐up has been designed such that the data, people's cognitive behavior (reasoning, inference, judgment, or choice), support one of two possible results: Either individuals behave in accord with the chosen benchmark of rationality, or their cognitive behavior, measured against the benchmark, is irrational (and sometimes either deviation from the benchmark has been treated as a sign of irrationality as, for instance, in the case of the conjectures that people neglect base rates or pay too much attention to them or that people suffer from both the gambler's fallacy and the hot‐hand fallacy; see Hertwig & Todd, 2000). Crucially, the benchmarks against which these evaluations are made—and human cognition is found to be rational or not—are commonly assumed to be incontrovertible. That is, the benchmarks are understood to be relatively universal, purpose invariant, content free, and domain general. Their claim to legitimacy often rests on “a priori intuitions”—a notion to which we return later. One of these seemingly incontrovertible benchmarks is the canon of classical logic. Take, for illustration, Wason's influential work on human reasoning (e.g., Wason, 1959, 1960). Far from mincing their words, Wason and Johnson–Laird argued that
a fallacious inference, in fact, is in some ways like both an optical illusion and a pathological delusion. … and like most pathological delusions, we have encountered cases in which the subjects seem to reveal a stubborn resistance to enlightenment. (Wason & Johnson‐Laird, 1972, p. 6)
This damning verdict is especially notable because long before Wason, other psychologists concerned with the investigation of reasoning processes strongly opposed the use of logic to define rational thought. An example is Wilhelm Wundt, who equally unequivocally argued that
at first it was thought that the surest way would be to take as a foundation for the psychological analysis of the thought‐processes the laws of logical thinking, as they had been laid down from the time of Aristotle… . These norms … only apply to a small part of the thought‐processes. Any attempt to explain, out of these norms, thought … can only lead to an entanglement of the real facts in a net of logical reflections. (1912/1973, pp. 148–149)
Wundt doubted that classical logic could serve as the bedrock for descriptive theories of reasoning beyond a “small part” of cognition. By extension, he rejected logic's normative claim for the bulk of cognition. But even the greats could not agree. Jean Piaget, for instance, brushed Wundt's view aside. Inhelder and Piaget (1958) proposed that the mental structures required to process experience develop in a stage‐like progression from infancy to adolescence. Once children reach the highest stage, they possess “Euclid's understanding of geometry, Newton's … understanding of space, time, and causality, and Kant's understanding of logic” (Flanagan, 1991, p. 145). For these developmental psychologists, logic was a key descriptive and normative foundation of the mind's highest stage of reasoning. Moreover, cognitive psychologists and scientists, from Bruner (Bruner, Goodnow, & Austin, 1956) to Fodor (1975) to Evans (1982), took theory testing based on deductive logic, thus following Popper (1959/2005), as the key to human learning. When Wason (1969) examined adults’ reasoning and found discrepancies from the rules of logical deduction, he and other contemporary cognitive psychologists did not challenge the normativity of logic but inferred that something in his carefully constructed selection task “predisposes people to regress temporarily to less sophisticated modes of cognitive functioning” (p. 478).
Yet Wundt's (1912/1973) rejection of logic as the foundation of cognition experienced a renaissance in psychology, although on the basis of very different arguments. Specifically, the normativity of logic came under attack in two ways in the late 1980s and 1990s. According to Cosmides (1989), natural selection did not evolve general‐purpose cognitive algorithms but rather cognitive algorithms that succeed in solving recurrent adaptive problems, such as the threat of being cheated in a social exchange. From this perspective, reasoning obeys a Darwinian and not a formal deductive logic. The second challenge arose in terms of a probabilistic approach being taken to purportedly logical reasoning tasks (Oaksford & Chater, 1994). According to this view, the conclusion that humans reason irrationally results from comparing “apparently irrational behavior … with an inappropriate logical standard” (Oaksford & Chater, 2001, p. 349). Specifically, people's reasoning is better understood in the Wason selection task in terms of a process of inductive hypothesis testing (and a Bayesian model of optimal data selection) than in terms of an “outmoded falsificationist philosophy of science” (Oaksford & Chater, 1994, p. 608). Consequently, probability theory rather than logic should be the normative benchmark. It is, of course, not without irony here that human reasoning has also been famously observed to deviate from the norms of probability theory (Barbey & Sloman, 2007; Kahneman, 2011; Kahneman & Tversky, 1972). Yet, like in research on reasoning, both the evidence for people's proneness to errors in statistical reasoning (Peterson & Beach, 1967) and the appropriateness of the invoked probabilistic norms for human rationality (Gigerenzer, 1996) have been hotly debated among psychologists as well.
There has thus been a history of opposing views on whether classical logic should serve as a universal benchmark for human rationality. Similar arguments have been raised with regard to probability, coherence, and other benchmarks of rationality (see Arkes, Gigerenzer, & Hertwig, 2016; Hertwig & Volz, 2013). We believe that now is a good moment for psychologists and philosophers to join forces and work toward a new foundation for the definition of rational cognition. This article represents a first step in this direction, with one author being a philosopher and one a psychologist. We begin by briefly outlining two recent developments in psychology—ecological rationality and descriptivism—that contribute to the ongoing debate about appropriate frameworks of rational cognition.
2 Ecological rationality and descriptivism
One development is the concept of “ecological rationality” (Arkes et al., 2016; Gigerenzer, Todd, & the ABC Research Group, 1999; Hertwig, Hoffrage, & the ABC Research Group, 2013; Hertwig, Pleskac, Pachur, & the Center for Adaptive Rationality, in press; A. Kozyreva & R. Hertwig, unpublished data; Todd, Gigerenzer, & the ABC Research Group, 2012). This view endorses the premise that rationality is evaluated against some benchmark but argues that contrary to a frequent assumption in psychology, there are no universal benchmarks. What are treated as universal benchmarks—for instance, consistency, coherence‐based rules, modus ponens, or Bayes's rule in probability theory—do not suffice to evaluate behavior as rational. Instead, rationality should be measured in terms of the organism's success—accurate predictions or competitive decisions—in the world. Ecological rationality thus aims to shift the researcher's methodological strategy from the a priori imposition of content‐free norms to studying the organism's goals and achievements within the context of specific environmental structures as well as the mind's undeniable cognitive constraints. Researchers would thus ask: Under what environmental structure is a given cognitive strategy (e.g., heuristic, rule, routine) for the task at hand more accurate than competing strategies that need more information and computation, and under what structure is it not? A strategy is ecologically rational to the degree that it is adapted, in the context of the task, to the informational and statistical structure of an environment. This also means that any strategy is no longer good or bad, rational or irrational per se, but rather it is or is not adapted to the specific task and environment. In addition, it means that a strategy is commonly being tested against some other strategies that may or may not be even better adapted to the specific task and environment (e.g., Gigerenzer & Brighton, 2009; Spiliopoulos & Hertwig, in press).
Although Elqayam and Evans (2011) classified the concept of ecological rationality among nonnormativist positions, they criticized it for being in danger of committing the dubious inference from “is” to “ought” (see also Elqayam & Over, 2016, p. 46). The position they advocate, descriptivism (Elqayam & Evans, 2011; Elqayam & Over, 2016), is meant to escape the “is–ought” inference trap. The escape is realized by completely eschewing normative concerns. Elqayam and Over proposed that
the psychology of reasoning and decision making would be better off letting go of normative concerns altogether. Instead of measuring rationality by normative standards, the descriptivist position is that rationality should be measured by the achievement of personal goals. (Elqayam & Over, 2016, p. 7, emphasis added)
To this end, Evans and Over (1996) proposed a distinction between rationality1, measured in terms of achieving one's goals, and rationality2, measured against a priori normative standards, such as classical logic or probability theory. Rationality1 is postulated to be personal and contextual, resulting in instrumental rationality, meaning that an individual behaves in such a way as to achieve his or her personal goals (see also Elqayam, 2012).
In our view, the opposition between descriptivism (rationality1) and normativism (rationality2) that Elqayam and Evans (2011) invoked is misleading because the character of “instrumental rationality” is ambiguous (for other critical objections see Hertwig, Ortmann, & Gigerenzer, 1997). Ordinarily, the assertion that an action is instrumentally rational means that it is rational because it is the appropriate means for some end that, in turn, is assumed to be of value. Thus understood, instrumental rationality does involve a normative dimension insofar as it shifts the normative weight from the end to the means (i.e., to the action; Schurz, 1997, sect. 6.1). There is a second, purely descriptive understanding of instrumental rationality according to which the proposition that an action is instrumentally rational for a given end simply means that the action is appropriate for reaching this end, even if this end is bizarre from a commonsense or intuitive viewpoint. For example, it would sound odd to describe “heavy smoking” as instrumentally rational in regard to the goal of increasing the chances of developing lung cancer or frequent casual sex as instrumentally rational in regard to the goal of contracting a sexually transmitted disease. Yet such descriptive statements would be perfectly fine in this second understanding.
Notwithstanding this criticism, the notions of ecological and instrumental rationality and descriptivism have one thing in common: They object to the reduction of rationality to allegedly universal normative systems, which are, in turn, founded on a priori intuitions that are inaccessible to further justification in terms of their cognitive functionality or success in the world. Next, we turn to the difficulties such intuitions face. To this end, let us dip our toes in some philosophical waters.
3 The problems of justifying rational cognition from a priori norms or intuitions
To appreciate the full force of the foundational issues in question, it helps to briefly consider normative ethics and more specifically the classical distinction between deontological and consequentialist justifications of ethical norms (see Broad, 1967; Frankena, 1963). In deontological frameworks, the justification of norms is rooted in general a priori intuitions about values and duty principles that are assumed to be “good in themselves” (e.g., Kant, 1785/2012). These principles are obligatory, irrespective of the consequences that might follow from our actions. In consequentialist frameworks, in contrast, how correct our moral conduct is will be determined solely by a cost–benefit analysis of the action’s consequences. One example of such a framework is (act) utilitarianism, according to which an action is morally justified if the action’s total good consequences outweigh its total bad consequences (e.g., Mill, 1863/1998). Let us employ the distinction between deontological and consequentialist justification in the context of rational cognition (see Goldman, 1986, p. 97). As in deontological theories of ethics, in apriorist accounts of rational cognition (note that the term deontological is reserved for the domain of ethics), norms are justified by reference to a priori intuitions. Such foundational intuitions could be, for instance, necessity, consistency, or coherence. Generally speaking, a norm or an intuition concerning the rationality of a given cognitive strategy can be described as a priori if either it is considered evident without further justification, or its justification is based on other intuitions that are independent of the consequences of this strategy in a given environment. In contrast, consequentialist accounts of rational cognition justify their benchmarks in terms of what one could call “cognitive success.” This means that these benchmarks acquire “normative legitimacy” through the success of their consequences and not through agreement with some norm such as coherence that is imposed a priori (e.g., transitivity, property alpha, procedural invariance; see table 1 in Arkes et al., 2016).
3.1 Equilibrium justifications and the problem of circularity
In our view, it is highly problematic to enlist a priori intuitions as the foundation for the justification of rational norms. Let us explain our concern. After five centuries of failed attempts in the history of rationalist philosophy to justify principles a priori, including Kant's (1781/1998), there is wide consensus in contemporary epistemology: It is impossible to justify cognitive principles from nothing, which was Kant's understanding of “a priori.” Thus, contemporary philosophers in the rationalist tradition have put the coherence of intuitions at the basis of rationalist justifications that are considered a priori in the sense explained.1
The method of justifying a priori intuitions by the coherence with other intuitions has been called, perhaps somewhat euphemistically and sidestepping the term intuition, the “method of reflective equilibrium” (Cohen, 1981; Goodman, 1955/1983; Rawls, 1971):
The key idea underlying this view of justification is that we “test” various parts of our system of beliefs against the other beliefs we hold, looking for ways in which some of these beliefs support others, seeking coherence among the widest set of beliefs, and revising and refining them at all levels when challenges to some arise from others. For example, a moral principle or moral judgment about a particular case (or, alternatively, a rule of inductive or deductive inference or a particular inference) would be justified if it cohered with the rest of our beliefs about right action (or correct inferences) on due reflection and after appropriate revisions throughout our system of beliefs. By extension of this account, a person who holds a principle or judgment in reflective equilibrium with other relevant beliefs can be said to be justified in believing that principle or judgment. (Daniels, 2018, sect. 1)
From a consequentialist viewpoint, however, there is a vigorous objection to such “equilibrium justifications.” They are circular. In reply to this objection, several philosophers have argued that even circular justifications may have epistemic value (e.g., Goldman, 1999, p. 85; Psillos, 1999, p. 82). However, there are strong counterarguments showing that such hopes are in vain. Before turning to one, let us clarify that we do not deny that certain justification structures that have been called circular in the literature can have epistemic value (see Hahn, 2011); yet these are of a different sort from the circles involved in equilibrium justifications.2
3.2 Circular justifications and the problem of contradictory intuitions
One key counterargument to the view that circular justifications have epistemic value demonstrates that contradictory rules can be pseudojustified by the same circular argument structure. For example, the circular inductive justification of induction goes as follows: Inductions were successful in the past, whence, by induction, they will be successful in the future. If one accepts this justification, then—to avoid inconsistency—one must equally accept a counterinductive justification of counterinduction3 that runs as follows: Counterinductions were not successful in the past, whence by counterinduction they will be successful in the future (see Douven, 2011, sect. 3; Salmon, 1957; Schurz, 2018). Eventually, circular justification may also be given for fundamentalist doctrines, such as the “rule of blind trust in God's voice,” which a person may hold in reflective equilibrium with the intuition that “God's voice in me tells me that I should blindly trust his voice.”
The fact that equilibrium justifications can easily support contradictory intuitions demonstrates that circular justifications are highly problematic. Because of their circularity, these contradictory intuitions cannot be meaningfully correlated with the world but are rather inescapably subjective in nature. A striking example of an intuition‐based account of rationality in psychology and cognitive science is Cohen's (1981) article “Can Human Irrationality Be Experimentally Demonstrated?” According to Google Scholar, this article has been cited a total of 1,414 times (June 23, 2018). The philosopher Cohen vehemently argued against the bleak implications for human rationality implied especially by the research in psychology on probabilistic reasoning (Kahneman & Tversky's heuristics‐and‐biases program; Kahneman, 2011) and deductive reasoning. For Cohen, rules of logical and probabilistic reasoning such as modus ponens, modus tollens, and Bayes's theorem are based on intuitions about correct reasoning. He put it as follows: “The presence of fallacies in reasoning is evaluated by referring to normative criteria which ultimately derive their own credentials from a systematization of the intuitions that agree with them” (1981, p. 317). From this follows, Cohen argued, that if people's reasoning deviates from such rules, then this merely means that they have different intuitions about correct reasoning than logicians or probability theorists do, and therefore experimenters “risk imputing fallacies where none exist” (1981, p. 330).
The subjective nature of intuition‐based justifications raises the problem of how to arbitrate between competing normative systems. Some have diagnosed this arbitration problem as essentially unsolvable because cognitive norms, goes the argument, will necessarily be based on intuitions, without external standards of cognitive success (Elqayam & Evans, 2011). Consequently, an intuition‐based justification of rationality is doomed to result in a strong form of cognitive relativism (“anything goes”)—a position whose consequences the philosopher Stich (1990) worked out.
3.3 Why a consequentialist account of rational cognition is indispensable
What follows from this discussion? First, we do not deny that intuitions are needed in some areas, for example, in ethics, where one inevitably must define what counts as intrinsically valuable. However, the appropriateness of cognitive systems should be evaluated not by intuitions but, so we argue, by demonstrations that these systems have successful consequences in the real world. Cognitive success is thus a concept that brings a consequentialist perspective to the justification of norms for rational cognition. Recall that consequentialism (as used in ethics) means that the moral rightness of an act depends only on the consequences of that act. By analogy, an act of a cognitive system is rational insofar as its consequences bear success. For instance, the validity of the rule of modus ponens is established not “by intuition” but by the semantic proof of its strictly truth‐preserving nature: If “p” and “p implies q” are true, then “q” will be true as well, no matter the environment you are in. This justification of modus ponens is consequentialist in nature. Cohen (1981, p. 319) objected that the “if–then” of classical logic deviates from the if–then of natural language; therefore, according to Cohen, intuitions need to be invoked to determine the “right” meaning of the conditional. To this argument, the consequentialist reply is that to assume that there is an “objectively right” meaning is a “rationalistic illusion”—there are only more or less cognitively successful meanings, and these can change across contexts (see also Hertwig, Benz, & Krauss, 2008; Hertwig & Gigerenzer, 1999). It is well known that the if–then of natural language has a number of different semantic interpretations (cf. Bennett, 2003). The question of which is cognitively most appropriate should be answered not by reference to intuition but by replacing the ambiguous if–then of natural language with semantically well‐defined conditionals (e.g., strict, uncertain, indicative, counterfactual) and investigating their cognitive properties. In later sections we investigate the cognitive success of different systems of strict and uncertain conditional reasoning, with surprising results. Investigations of this sort are impossible as long as these systems are merely evaluated and justified on the basis of intuition, particularly since growing disagreement among intuitions has emerged in the area of conditional reasoning (see Pelletier & Elio, 1997).
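The truth‐preservation claim can be checked mechanically. The following minimal Python sketch (our illustration, not part of the article) enumerates all truth assignments and confirms that no assignment makes both premises of modus ponens true and the conclusion false:

from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Search for a counterexample: premises "p" and "p implies q" true, conclusion "q" false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if p and implies(p, q) and not q]
print(counterexamples)  # [] -- none exists, so modus ponens is strictly truth-preserving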
In sum, taking intuitions as sacrosanct would hinder empirical research and rational criticism. We suggest that the better justification of norms of rational cognition is consequentialist in nature. Within a consequentialist account, the severe problems of arbitration, cognitive relativism, and the indeterminate correspondence of intuitions and the world are either removed or less grave—at least so we claim. The reason lies in the promise that all normative systems of reasoning can be measured on a commensurable metric that we call cognitive success. What is it and how can it be measured?
4 What is cognitive success?
Next, we propose a consequentialist account of rational cognition. Our account is in line with Quine's naturalized epistemology (1960) but goes beyond it in its explication and applications of the notion of cognitive success, as well as in its new understanding of the interplay between its descriptive and normative components. What distinguishes the present proposal from all naive sorts of pragmatism is that cognitive systems are evaluated in terms of cognitive rather than practical success indices (such as moneymaking). What is measured by cognitive success is the “cognitive part” of rationality. Cognitive rationality is a precondition for practical rationality, but unlike practical rationality, it abstracts from the question of what ends are normatively right or intrinsically good. In contrast, practical rationality, in the philosophical understanding of this concept, attempts to answer this question. For example, knowing the optimal temperature for roasting meat is “cognitively rational,” but the assessment of the practical rationality of roasting meat depends on one's ethical attitude toward a vegetarian versus nonvegetarian diet.
A consequentialist approach to the definition of rational cognition faces two main challenges. First, how can the value of cognitive success be justified without again presupposing normative intuitions, thus inheriting all the problems outlined above? According to philosophical arguments harking back to Hume (1739/40) and Moore (1903), it is impossible to derive norms solely from the “is,” that is, from empirical facts (Schurz, 1997). Consequently, every instrumental justification of particular norms must assume, besides factual information, more general norms. For example, inferring that calisthenics is good from the fact that it improves fitness assumes that fitness is a general norm. Does this argument then not thwart any attempt to ground the notion of cognitive success in anything but, again, normative intuitions?
Although this objection—every instrumental justification of particular norms must assume more general norms—is logically correct and has useful applications in ethics (Schurz, 2014), we argue that it does not apply to psychology and cognitive science for the following reason: Cognitive success is instrumental for all—or at least most4—purposes. Every real‐world decision problem involves, as a part of it, a ubiquitous cognitive task, namely, predicting which of the available actions will have the maximum expected payoff, in light of a given reward function.5 Greater success in this cognitive task will, by and large, lead to greater success in one's actions, independently of the goals pursued (Schurz, 2014). Is the premise that cognitive success is instrumental for almost all purposes really sufficient for the normative justification of cognitive success? Logically speaking, no, because this premise is descriptive and (as explained above) no “ought” can follow from an “is” by rules of logic alone. However, the missing normative premise that fills the logical gap is relatively harmless: We assume that it is by‐and‐large good to help people attain their personal goals. This is indeed a fundamental and widely shared intuition, though not a cognitive but a moral one.
Moreover, the insight that cognitive success is instrumental for almost all practical purposes helps solve the problem of the apparent relativity of instrumental rationality to one's assumed purposes, which for many authors constitutes a fundamental obstacle to the objectivity of this notion (e.g., Stich, 1990, p. 131). We suggest that the purpose‐invariant core of all forms of instrumental rationality is precisely their cognitive rationality (Kornblith, 2002, p. 158). Thus, there are no separate forms of instrumental rationality for cooks, clerks, and pilots, or for right‐wing and left‐wing politicians. What is common to all these applications of instrumental rationality is their cognitive success. This means that cognitive success is not to be mistaken for moral rightness.
This brings us to the second challenge to a consequentialist approach to defining rational cognition: the meaning of cognitive success. The details will depend on the cognitive task at hand. Yet there must be a core meaning of “cognitive success” that is common to all competing systems of rational reasoning; otherwise, it would be impossible to compare them using the same currency. Above, we argued that every real‐world decision problem involves—or can be reformulated in terms of—some kind of prediction problem. On the basis of this premise, we suggest the following definition:
The core meaning of the cognitive success of a system (including algorithms, heuristics, rules) is defined in terms of successful predictions, assuming a comprehensive meaning of prediction that includes, besides the predictions of events or effects, predictions of possible causes (explanatory abductions) and in particular predictions of the utilities of actions (decision problems).
Characterizing a decision problem in terms of a prediction task might seem narrow. Yet much of what people do is predicated on implicit or explicit forecasts about how the future will unfold. Choosing a job, getting married, having children and investing in their education, purchasing an apartment, voting for a party, saving for old age, choosing a medical treatment—all these decisions and many others are reached on the basis of predictions about what the future holds. Moreover, focusing on predictions by no means implies that important cognitive processes are ignored. Since reliable predictions are based on an inductive inference from sufficiently informed premises, they engage various nonpredictive subprocesses such as search, memory retrieval, and language processing. Importantly, the major purpose of the predictive reformulation of decision tasks is to measure their cognitive success on a commensurable scale. For example, consider the decision problem of buying the “best” car (relative to the buyer's preferences), where the buyer encounters two websites offering two competing decision methods, M1 and M2, to potential car buyers. Then the claim that method M1 is more appropriate for a certain group of car buyers (e.g., males between the ages of 20 and 30) amounts to the testable prediction that the degree of future satisfaction of car buyers in this group, having used method M1, is significantly higher than that of those who used method M2.
Upon closer inspection, the predictive success of a cognitive system or (more generally) a cognitive method depends on two components that are commonly in competition and whose optimization thus involves a trade‐off. In the psychological literature, this trade‐off is reflected in the distinction between the ecological validity of a prediction method (Brunswik, 1952; Gigerenzer et al., 1999) and its applicability.6
More precisely, a method's cognitive success can be factorized into the product of these two components as follows:
cognitive success = ecological validity × applicability,
where applicability is the percentage of targets for which the method renders a prediction, among all intended targets of prediction, and ecological validity is the sum of scores divided by the number of all predictions rendered, and
score (per prediction) = max - loss,
where max is the maximal score that a perfectly accurate prediction can obtain and loss is a monotonically increasing function of the distance between the predicted and the actually observed value of the event variable. From this it follows that
cognitive success = sum of scores divided by the number of all intended targets of prediction.
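Spelled out (a restatement of the definitions above, writing #rendered for the number of predictions rendered and #intended for the number of intended targets), the factorization cancels:

\[
\text{cognitive success}
= \underbrace{\frac{\sum \text{scores}}{\#\text{rendered}}}_{\text{ecological validity}}
\times
\underbrace{\frac{\#\text{rendered}}{\#\text{intended}}}_{\text{applicability}}
= \frac{\sum \text{scores}}{\#\text{intended}}.
\]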
Ecological validity and applicability of a cognitive method are in competition. One can increase the ecological validity of a method by having it apply only to those few target domains for which the method's predictions are known to be accurate because, for instance, the method has been fitted to this domain. Likewise, one can increase the applicability of a method by applying it also to target domains for which its error rate remains unknown or is even known to be high, or by permitting the method to make a random guess in cases where the algorithm does not reach a decision (i.e., in this sense is not applicable). Also note that the definition of the concept of applicability is relative to all intended target domains, not to all possible target domains. Thus, a method's cognitive success cannot be deemed low because it does not apply to domains that were never intended to be part of the class of target domains. Consider, for illustration, the analogy of a hammer—its “success” is not diminished by the fact that a hammer is not suitable for drilling holes. We also emphasize that a method's class of intended target domains is not an invitation to propose arbitrary reference classes; rather, it is empirically inferred in terms of the method's purposes across all users. Thus, a method's cognitive success cannot be arbitrarily boosted by winnowing down its intended targets to “easy ones.”
The score that a method earns for each prediction is its maximally achievable score (max) minus its distance to the observed value (loss). The type of loss function7 and max are specified by type and context of the given task.8 Often max is identified with the greatest possible loss; this entails that min, that is, the minimal score, is zero. If loss is identified with the absolute distance function, max is given as the width of the observable value range. For example, if the task consisted in forecasting the next day's mean temperature with values lying in the range between −20°C and +40°C and the loss function is given as the absolute difference between predicted and actual mean temperature, then max is 60°C. If the task is the prediction of probabilities such as that it will rain tomorrow, then, according to a famous result of Brier (1950), the appropriate loss function is not the absolute but the squared distance between the predicted probability and the truth value of the predicted event; thus, max = 1 (true) and min = 0 (false). In the example of people intending to buy a car, a natural loss function might be the absolute difference between mean degree of satisfaction (in an unbiased sample) with the car type recommended by method M1 and that recommended by method M2, with degree of satisfaction measured on a scale ranging, say, from min = 0 to max = 10.
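To make these definitions concrete, here is a minimal Python sketch (the data are hypothetical and ours, not the article's) for the temperature task with absolute loss, normalizing scores by max so that ecological validity lies in [0, 1]:

# Temperature forecasting: observable values lie in [-20, +40], so max = 60 (absolute loss).
MAX_SCORE = 60.0

# Ten intended targets; the method abstains on two of them (None = no prediction rendered).
predicted = [12.0, 15.5, None, 9.0, 20.0, 18.5, None, 7.0, 11.0, 14.0]
observed  = [13.0, 14.0, 10.0, 9.5, 22.0, 18.0, 12.0, 8.0, 10.5, 15.0]

rendered = [(p, o) for p, o in zip(predicted, observed) if p is not None]

applicability = len(rendered) / len(predicted)                    # 8/10 = 0.8
scores = [MAX_SCORE - abs(p - o) for p, o in rendered]            # score = max - loss
ecological_validity = sum(scores) / (len(rendered) * MAX_SCORE)   # normalized to [0, 1]
cognitive_success = ecological_validity * applicability

print(applicability, round(ecological_validity, 3), round(cognitive_success, 3))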
4.1 Some possible objections to cognitive success
Let us freely admit that intuitions can play a role in determining the details of the scoring function. However, robust results should be largely invariant to changes of the scoring functions (see the section on uncertain conditionals below). Another objection to the concept of cognitive success is that it downplays the role of explanations relative to predictions. This challenge can serve as a further test case for our account. Salmon (1984) argued that what distinguishes explanations from predictions is that, whereas predictions can be based on noncausal correlations, explanations must spell out the causes of the event to be explained. Although we agree, we emphasize that causality can easily be embedded into the concept of cognitive success. What distinguishes a causal from a noncausal correlation between a variable X and another one Y is that the effect of an intervention on X will be transmitted to Y only if X is a cause of Y (this is a consequence of the causal Markov condition; see Pearl, 2009). Thus, the cognitive success of causal information resides in its capacity to predict the consequences of (human) actions.
Another account identifies good explanations with argument patterns that unify many empirical phenomena (Kitcher, 1981). However, it can be shown that empirical unification correlates with empirical confirmation and this, in turn, correlates with predictive success (Schurz & Lambert, 1994). The only notions of explanation that are not and should not be covered by our account are those that make the quality of an explanation dependent on its coherence with “intuitions of understanding” and that are inexplicable in terms of causal or unificatory concerns.
The two core components of our notion of cognitive success, ecological validity and applicability, are related to a number of further important evaluative dimensions:
A method with high ecological validity has a high truth rate9 in those situations where it is applicable; thus, high ecological validity is connected with low risk of error.
A method with high ecological validity may nevertheless have low cognitive success if it can rarely be recruited due to low applicability.
A method with high applicability renders predictions possible across many predictive contexts. High applicability therefore suggests that the method has a high information output.
On the other hand, a method's applicability is inversely related to its cognitive costs, measured in terms of the information input needed and the effort required to process it. The higher the cognitive cost of a method, the more often it will be inapplicable because it exceeds the upper bound of agents’ cognitive resources (see also Payne, Bettman, & Johnson, 1993).
The threefold tension between risk of error, information output, and cognitive costs creates a fitness landscape10 that can explain many facets of the pros and cons of competing systems of rational reasoning. How these cognitive fitness factors interact in concrete cognitive tasks will be discussed next. In particular, the tension between these factors explains why cognitive science needs not a monism but a pluralism of cognitive methods, and why the evaluation of those methods’ advantages and weaknesses should rely not on intuition but on careful comparison of their respective success. Next, we illustrate this point by applying the notion of cognitive success in the domains of classical material conditionals, uncertain conditionals, Bayesian probabilities, and prediction and choice.
4.2 Cognitive success and deductive reasoning
Let us return to classical logic, our introductory example of what many psychologists considered a universal norm of rational cognition in the 20th century. Deductive inferences are, by definition, completely valid—that is, they have maximum ecological validity (1.0): In all situations in which all premises are true, the derived conclusion will invariably be true. Yet this ideal validity of deductive inferences stands in stark contrast to their very low applicability, as emphasized by Wundt (1912/1973; see above). That is, the prevalence of deductive inferences with nontrivial conclusions is low. As an example, consider inferences of propositional logic involving the classical (material) conditional “If P, then Q” (semantically equivalent to “not‐P or Q”). It can be shown that such an inference can have a nontrivial conclusion insofar as it is possible to confirm each premise by observations that do not already contain the conclusion. This will be the case if the following condition is satisfied: The verification of the conditional premise “If P, then Q” is based not on the observation of “not‐P” or of “Q,” but rather on an inductively supported belief that expresses (at least implicitly11) a strict (exceptionless) generality of the form “For all x in a given domain: If P(x), then Q(x)” (see Schurz, 2014, sect. 5.1). Exceptionless regularities (i.e., conditional probabilities of 1.0) are known to be rare in empirical (nonmathematical) domains. What does this mean? It simply means that inferences of propositional logic with nontrivial conclusions are rare. Therefore, their overall cognitive success will be low in these domains, notwithstanding their maximum validity. Only if one could demonstrate that in a specific environment the applicability of deductive reasoning is high could one argue in favor of this system's high cognitive success in this environment. One such environment may be cheater detection, where people can be, under specific circumstances, remarkably successful when measured in terms of modus tollens reasoning (Cosmides & Tooby, 1992). Another domain may be consistency checks in legal reasoning (Arkes et al., 2016).
4.3 Cognitive success and reasoning with uncertain conditionals
Uncertain conditionals are conditionals of the form “If A, then normally B.” They are epistemically acceptable if the associated conditional probability pr(B|A) is “sufficiently” high, that is, higher than a contextually determined threshold α > .5. Systems of probability logic infer further conditionals from sets of uncertain conditionals. There are four well‐known systems of reasoning with uncertain conditionals: O, P, Z, and QC. System O (Hawthorne & Makinson, 2007) is the only system that preserves epistemic acceptability from premises to conclusion for any chosen acceptability threshold. System P is the famous system of probability logic developed by Adams (1975). It guarantees to preserve epistemic acceptability only if the sum of the premises’ conditional uncertainties is smaller than 1.0 minus the acceptability threshold (where uncertainty is defined as 1.0 minus probability; Oaksford & Chater, 2007, p. 111). System Z goes back to Pearl (1990) and makes additional default assumptions that, roughly speaking, maximize the entropy of the distribution under the high‐probability constraints dictated by the premise conditionals (Hill & Paris, 2003). System QC (for “quasi‐classical reasoning”) reasons with uncertain conditionals as if they were exceptionless conditionals of classical logic.
For illustration, assume a small world with only four predicates: “being a bird” (B), “being able to fly” (F), “having wings” (W), and “being male” (M). The known premises are the two uncertain conditionals (a) B ⇒ F (birds can fly) and (b) B ⇒ W (birds have wings), with associated probabilities pr(F|B) = pr(W|B) = .95. System O draws only trivial inferences such as B&(M∨¬M) ⇒ F from Premise a, meaning birds that are either male or not male can fly, with an associated probability of .95. In addition to the previous inference, system P draws the inference B&W ⇒ F from Premises a + b, meaning, birds having wings can fly, with an associated probability of .9. System P does so by applying the law of “cautious monotonicity” and the uncertainty sum rule. In addition to the previous inferences, System Z draws the inferences B&M ⇒ F and B&¬M ⇒ F from Premise a, meaning male birds as well as nonmale birds can fly, with an associated probability of .95. It does so by making the default assumption that the predicates “male” and “being able to fly” are statistically independent (likewise in application to Premise b). Finally, in addition to all previous inferences, System QC draws the “risky” inference of contraposition ¬F ⇒ ¬B, meaning nonflying objects are not birds. This follows from Premise a with an associated probability of .95 (similarly in application to Premise b).
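For the System P inference B&W ⇒ F, the stated probability of .9 follows from the uncertainty sum rule quoted above; spelled out:

\[
1 - \Pr(F \mid B \wedge W) \;\le\; \underbrace{\bigl(1 - \Pr(F \mid B)\bigr)}_{.05} + \underbrace{\bigl(1 - \Pr(W \mid B)\bigr)}_{.05} = .10,
\quad \text{hence} \quad \Pr(F \mid B \wedge W) \;\ge\; .90.
\]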
These four systems differ significantly in their predictive power. They become increasingly powerful and, at the same time, more risky and error prone. That is, the applicability (number of derived conclusions) and error probability (number of mistakes made) increase from O to P to Z to QC. From a consequentialist viewpoint, the question is not which of these systems is the right or true one, but which is superior with regard to cognitive success. Schurz and Thorn (2012) performed a cognitive‐success analysis of the Systems O, P, Z, and QC. In their computer simulation an environment with four binary variables a, b, c, d and a randomly generated probability distribution was repeatedly simulated. The possible cases (predictive targets) consisted of all 464 conditionals with conjunctions of one, two, or three unnegated or negated variables in their antecedent or consequent. The task on which the four systems were compared was the derivation of conditionals from four randomly selected base conditionals with conditional probabilities ≥ .7, together with a prediction of their associated conditional probabilities.12
Thus, there were at most 460 conditional probabilities to be predicted (the 464 conditionals minus the 4 base conditionals). Four different scoring rules for cognitive success were compared. Table 1 presents the results for the ACG (advantage compared to guessing) score, defined as the absolute difference between the predicted and the actual conditional probability for each of the derived conditionals. Although the ordering of the four systems according to their ecological validity is O > P > Z > QC, their applicability ordering is precisely the inverse, QC > Z > P > O. The resulting cognitive success ordering is Z > QC > P > O.
Table 1. Cognitive success analysis of four systems of reasoning with uncertain conditionals
System | Applicability (% of 460 intended predictions) | Sum of scores (ACG score)a | Ecological validity (range [0, 1]) | Cognitive success (range [0, 1])
O | 1.0 | 4.6 | 0.92 | 0.009
P | 1.4 | 5.2 | 0.82 | 0.011
Z | 10.5 | 22.5 | 0.47 | 0.049
QC | 22.9 | 8.5 | 0.08 | 0.018
Note
a For normalization purposes, the ACG scores in table 2 of Schurz and Thorn (2012) were multiplied by 3.
In light of these results, Schurz and Thorn (2012) concluded that System Z achieves the optimal balance in the trade‐off between deriving true and informative conclusions and avoiding false or uninformative ones.13
Schurz and Thorn (2012) and Thorn and Schurz (2014) investigated three additional scoring rules: PIR (price is right), sPIR (subtle price is right), and EU (expected utility). The qualitative orderings of the ecological validity, applicability, and cognitive success of the four systems were the same across all four success measures, demonstrating the robustness of the results.14
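As a quick consistency check (our illustration), the last column of Table 1 can be reproduced from the first and third, using the factorization cognitive success = ecological validity × applicability:

# Rounded values from Table 1: applicability (% of 460) and ecological validity.
table = {"O": (1.0, 0.92), "P": (1.4, 0.82), "Z": (10.5, 0.47), "QC": (22.9, 0.08)}

for system, (applicability_pct, validity) in table.items():
    success = validity * applicability_pct / 100.0   # applicability as a fraction
    print(f"{system}: {success:.3f}")
# Output: O: 0.009, P: 0.011, Z: 0.049, QC: 0.018 -- matching Table 1's last column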
4.4 Success and Bayesian probabilities
Bayesian probabilities are internally coherent degrees of subjective belief. Following arguments by Ramsey (1926/1990) and De Finetti (1937/1964), coherence is usually justified as follows: If one interprets degrees of belief as fair betting quotients, then coherence guarantees that one will never accept a system of bets that exacts a logically guaranteed loss, that is, a “Dutch book.”15
The Dutch book argument is indeed a consequentialist justification, as it ties a person's subjective probabilities back to monetary consequences. However, what is thus justified is merely the coherence of probabilities, meaning only that they satisfy the basic (Kolmogorovian) probability axioms.16 This is a necessary constraint on rational degrees of belief, but by itself it is not sufficient for rational degrees of belief to yield cognitive success. The condition of a coherent fair betting quotient depends solely on the gambler's subjective beliefs and preferences. It does not involve any adaptation to the environment, that is, to the true frequency or statistical probability (frequency limit) of the events bet on. Consider, for example, a subjectivist who repeatedly offers betting odds of 1:1 that she will roll a 6 with an unbiased die. She considers this bet to be fair and is equally willing to accept the opposite bet that she will not roll a 6. She is coherent and will remain coherent even after she has lost her entire fortune. She will be puzzled that while everybody readily accepted her first bet, nobody accepted the opposite bet, even though both are equally fair in her view. If she ignores the frequentistic chances of the events bet on, she will be unable to explain why she lost everything and others won.
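A minimal simulation (our illustration, with an arbitrary stake and number of rounds) makes the bettor's predicament tangible: her betting quotients remain coherent throughout, yet the true chances guarantee her an average loss of 2/3 of the stake per round:

```python
import random

random.seed(42)

def even_odds_bet_on_six(n_bets: int, stake: float = 1.0) -> float:
    """Net payoff of repeatedly betting, at 1:1 odds, that a fair die shows 6."""
    wealth = 0.0
    for _ in range(n_bets):
        roll = random.randint(1, 6)
        wealth += stake if roll == 6 else -stake
    return wealth

# Her betting quotients satisfy the probability axioms (coherence), yet the
# bet's objective expected value is (1/6)*1 + (5/6)*(-1) = -2/3 per round.
print(even_odds_bet_on_six(10_000))   # roughly -6,667 on average
```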
As this admittedly contrived example illustrates, the problem with subjective degrees of belief is not their low applicability (an individual's beliefs can discriminate between many states of the world) but their potentially low ecological validity. The Bayesian coherence requirement is too weak to exclude cognitively unsuccessful behavior if one's degrees of belief are not connected with objective truth‐chances (i.e., statistical probabilities; Knight, 1921). There are pertinent methods in Bayesian statistics for establishing this connection (less well known than the Dutch book arguments), such as Lewis's “principal” principle (1980/1986) or De Finetti's (1937/1964) equivalent “exchangeability” principle. These principles demand that a person's rational degree of belief (Pr) in an event (E) should have the value r, given that all the person knows is that the statistical probability (pr) of the corresponding event type E is r; more formally, Pr(E | pr(E) = r) = r (for arbitrary r ∈ [0, 1]). One can prove that satisfaction of this principal principle is equivalent to the assumption of Bayesian statistics that degrees of belief can be represented as weighted averages of statistical probabilities. Subjective probabilities that satisfy this condition are known to converge toward the true statistical frequencies as evidence accumulates without bound, independently of the assumed prior distributions (Gillies, 2000, p. 71ff; Howson & Urbach, 1993, chapter 14; Schurz, 2013, pp. 165, 236f). Only if this connection between subjective and objective probabilities is established can Bayesian reasoning be cognitively successful and decisions based on maximization of subjectively expected utility maximize one's average utility.
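The prior-independence of this convergence can be illustrated with a toy updater (our sketch; the coin bias and the two Beta priors are arbitrary choices): agents starting from very different priors end up with nearly identical posterior estimates close to the true frequency.

```python
import random

random.seed(0)

TRUE_P = 0.3                      # objective statistical probability
flips = [random.random() < TRUE_P for _ in range(5_000)]
heads, n = sum(flips), len(flips)

# Beta(a, b) prior; the posterior mean after n flips is (a + heads) / (a + b + n).
for a, b in [(1, 1), (50, 5)]:    # uniform prior vs. prior strongly favoring heads
    posterior_mean = (a + heads) / (a + b + n)
    print(f"prior Beta({a},{b}): posterior mean = {posterior_mean:.3f}")
# Both posterior means land near TRUE_P = 0.3, regardless of the prior.
```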
4.5 Cognitive success in prediction and choice
Perhaps more than in any other research area in psychology, the tension between apriorist and consequentialist accounts of rational cognition has unfolded in the debate about the meaning of bounded rationality in general and the role of heuristics in particular. The heuristics‐and‐biases research program (Kahneman, 2011), possibly the most influential research program in psychology of the last five decades, has consistently invoked the rules of probability theory and statistics as a priori norms for human rationality. Deviations from these norms in people's reasoning were taken as manifestations of irrationality. In Kahneman's (2003) portrayal of the program's research, it “attempted to obtain a map of bounded rationality, by exploring the systematic biases that separate the beliefs that people have and the choices they make from the optimal beliefs and choices assumed in rational‐agent models” (p. 1449). Many of the systematic biases were attributed to the operation of heuristics (e.g., availability, representativeness, and anchoring‐and‐adjustment) that, although “quite useful,” sometimes “lead to severe and systematic errors” (Tversky & Kahneman, 1974, p. 1124). On this view, a heuristic's rationality is evaluated exclusively on the basis of its conformity to the norms and not in terms of its potential cognitive success.
This changed with the arrival of the ecological rationality research program, which has redefined the normative study of heuristics; by extension, it interprets bounded rationality in terms of the match between a heuristic and an environment, the two blades in Simon's (1990, p. 7) scissors metaphor. On this view, this match determines the performance and thus the cognitive success of a heuristic. In order to measure cognitive success, researchers of heuristics' ecological rationality have conducted a wide range of tournaments between simple heuristics and complex strategies commonly considered to be normative. These computer simulations encompass, for instance, the analysis of heuristic inferences about real‐world quantities (e.g., which of two cities has a larger population size; Gigerenzer & Brighton, 2009; Gigerenzer & Goldstein, 1996; Katsikopoulos, Schooler, & Hertwig, 2010) and, more recently, the analysis of choices between uncertain lottery options (Hertwig, Woike, Pachur, & Brandstätter, in press) and of choices in strategic games (Spiliopoulos & Hertwig, in press). For illustration, consider the tournament in which choice strategies chose between uncertain lottery options (Hertwig et al., in press). The simulations implemented 20 choice environments (defined by different payoff and probability distributions) and randomly generated 6,000 choice problems per environment. The innovation in this simulation was that all strategies (with the exception of the omniscient expected value model) learned about the properties of each problem by sequentially taking one draw at a time from each of the options per problem. The strategies then chose what they inferred to be the best option after each sample (learning stopped after 50 rounds).
Table 2 presents the cognitive success of each of the six choice strategies. The normative benchmark for human beings is either the omniscient expected value theory or, more realistically, the sampling‐based expected value theory. In light of the cognitive success measure, Hertwig et al. (in press) concluded that under uncertainty (when all strategies have incomplete knowledge and need to sample the environment), some simple choice heuristics closely approximate the performance of the sampling‐based expected value theory, even though they ignore entire swaths of information. The well‐performing equiprobable heuristic, for instance, ignores all probabilities, merely calculates the mean of all outcomes within each option, and then chooses the option with the highest mean. Indeed, research on ecological rationality has repeatedly demonstrated that simple heuristics, which curtail the search for information and reach decisions without complex calculations, can lead to surprisingly good inferences and predictions relative to complex algorithms based on the principles of logic, probability theory, and maximization.
Table 2. Cognitive success analysis of choice strategies in choice environment requiring learning of the properties of the choice options
| Strategya | Applicability (in %)b | Cognitive Successc, N = 5 | Cognitive Successc, N = 20 | Cognitive Successc, N = 50 |
|---|---|---|---|---|
| Equiprobable | 100 | 93.1 | 94.6 | 93.7 |
| Probable | 100 | 86.4 | 92.3 | 93.5 |
| Lexicographic | 100 | 86.4 | 87.9 | 88.0 |
| Least‐likely | 100 | 54.2 | 61.5 | 64.3 |
| Sampling‐based expected value theoryd | 100 | 94.0 | 98.3 | 99.3 |
| Omniscient expected value theory | 100 | 100 | 100 | 100 |
Notes
a All strategies are described in detail in Hertwig et al. (in press).
b In this analysis, all strategies were always applicable because they could either select an option or choose randomly.
c Average performance across all 20 choice environments and for N = 5, 20, and 50 samples taken per option (two options with two, four, and eight outcomes) from the environment; the cognitive success metric is normalized such that 100% means that a strategy always selected the option with the higher expected value (as did the omniscient expected value model) and 0% means that a strategy always selected the option with the lowest expected value.
d The sampling‐based expected value theory can also be implemented in terms of a simple heuristic (i.e., natural‐mean heuristic; see Hertwig et al., in press).
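To make the two best performers in Table 2 concrete, the following is a simplified reconstruction (ours, not the tournament code; see Hertwig et al., in press, for the exact implementations) of the equiprobable heuristic and the sampling‐based expected value strategy, both learning about two options purely by sampling:

```python
import random

random.seed(1)

def sample(option, n):
    """Draw n outcomes from an option given as [(outcome, probability), ...]."""
    outcomes, probs = zip(*option)
    return random.choices(outcomes, weights=probs, k=n)

def equiprobable_choice(options, n_samples):
    """Equiprobable heuristic: ignore probabilities and average the distinct
    outcomes experienced per option; pick the option with the highest mean."""
    def score(option):
        seen = set(sample(option, n_samples))
        return sum(seen) / len(seen)
    return max(range(len(options)), key=lambda i: score(options[i]))

def natural_mean_choice(options, n_samples):
    """Sampling-based expected value: mean of all draws (natural-mean heuristic)."""
    def score(option):
        draws = sample(option, n_samples)
        return sum(draws) / len(draws)
    return max(range(len(options)), key=lambda i: score(options[i]))

# A risky option (32 with probability .1, else 0) versus a safe option (3 for sure).
options = [[(32, 0.1), (0, 0.9)], [(3, 1.0)]]
print(equiprobable_choice(options, n_samples=20))   # index of the chosen option
print(natural_mean_choice(options, n_samples=20))
```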
The strategies in Table 2 selected an option randomly in cases where their policy and the available information did not render a choice, that is, where they were not applicable. For this reason, their applicability is always 100% and their ecological validity and cognitive success are identical. In other tournaments measuring cognitive success, some competing methods have low applicability, whereas others are always applicable. This is particularly the case in tournaments including meta‐inductive selection strategies. The account of meta‐induction (Schurz, in press; Schurz & Thorn, 2016) is in an important sense complementary to the research program of ecological rationality: Meta‐induction is a general meta‐cognitive strategy designed to choose, in each situation in which it is applicable, a locally optimal method from a given toolbox of candidate methods. Two important meta‐inductive strategies are take‐the‐best17 and success‐weighting. Take‐the‐best applies in each round of the tournament: It selects the prediction method that is applicable (i.e., renders a prediction) and that has the best success record in the past. Success‐weighting predicts a weighted average of the predictions of those methods that rendered a prediction in the given round of the tournament, with weights reflecting the methods' past successes. Table 3 presents the results of applying take‐the‐best and success‐weighting to the results of the Monash University footy tipping competition (MUFTC).18 The predictive target was forecasting the three‐valued results (1, 0, or tie) of matches of the Australian Football League. The tournament included the predictions of 1,071 human participants (Table 3 reports the five human forecasters with the highest success rates, Forecasters 1–5) as well as the predictions of different meta‐induction strategies, including take‐the‐best and success‐weighting.
Table 3. Cognitive success analysis of the Monash University footy tipping competition (after 1,514 rounds)a
| Predictor | Applicability (in %) | Sum of Scores | Ecological Validity (range [0, 1]) | Cognitive Success (range [0, 1]) |
|---|---|---|---|---|
| Success‐weighting | 100 | 877 | 0.579 | 0.579 |
| Take‐the‐best | 100 | 873 | 0.577 | 0.577 |
| Forecaster 1 | 39 | 839 | 0.640 | 0.554 |
| Forecaster 2 | 27 | 811 | 0.637 | 0.536 |
| Forecaster 3 | 13 | 789 | 0.666 | 0.521 |
| Forecaster 4 | 12 | 789 | 0.676 | 0.521 |
| Forecaster 5 | 13 | 787 | 0.658 | 0.520 |
Notes
a Target was forecasting the results of 1,514 matches of the Australian Football League over eight seasons from 2005 to 2012. The tournament included the predictions of 1,071 human participants and the predictions of various meta‐induction strategies including take‐the‐best and success‐weighting.
The five best human forecasters displayed high performance only in certain rounds and refrained from making predictions in other rounds. The meta‐inductive strategies utilized the predictions of the best human forecaster in each round, with the result that their applicability was 100% and their cognitive success surpassed that of the best human forecasters (with a slight advantage of success‐weighting over the simpler take‐the‐best strategy).
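The following is a minimal sketch (our simplified reconstruction, with hypothetical forecaster names and success rates) of how the two meta‐inductive strategies produce a prediction in a single round in which some forecasters abstain:

```python
def take_the_best(forecasts, successes):
    """forecasts: {name: prediction, or None if the forecaster abstained};
    successes: {name: past success rate}. Follow the best applicable forecaster."""
    applicable = [name for name, f in forecasts.items() if f is not None]
    if not applicable:
        return None
    best = max(applicable, key=lambda name: successes[name])
    return forecasts[best]

def success_weighting(forecasts, successes):
    """Weighted average of the applicable forecasters' predictions,
    with weights given by their past success rates."""
    applicable = [(f, successes[n]) for n, f in forecasts.items() if f is not None]
    if not applicable:
        return None
    total = sum(w for _, w in applicable)
    return sum(f * w for f, w in applicable) / total

round_forecasts = {"F1": 1.0, "F2": None, "F3": 0.0}   # F2 abstains this round
past_success    = {"F1": 0.64, "F2": 0.64, "F3": 0.67}
print(take_the_best(round_forecasts, past_success))      # follows F3 -> 0.0
print(success_weighting(round_forecasts, past_success))  # ~0.49
```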
4.6 What does cognitive success mean for the is–ought relationship?
The focus of cognitive success on the consequences of rational cognition suggests a new view of the relationship between the normative (“ought”) and the descriptive (“is”) dimensions of theories of reasoning. According to the traditional division of labor, it is the task of armchair philosophy to address normative issues and that of empirical psychology to answer descriptive questions. From the consequentialist perspective of cognitive success, however, empirical results can become normatively relevant, and normative innovations can suggest new empirical questions (see Corner & Hahn, 2013).19
The relationship between the normative and the descriptive, as conceptualized in different accounts of rationality in reasoning—such as apriorism, descriptivism, or ecological rationality—emerges most clearly from the answers they give to the following question: What should one infer from a conflict between descriptively observed and normatively recommended behavior in the context of a cognitive task?
In general, the answers depend on the theoretical position adopted. How scholars respond to the gap between “is” and “ought” is thus diagnostic of the justification of the rationality norms they endorse. Let us assume that an experiment reveals a divergence between how people reason and how they ought to reason according to some standard rational benchmark, such as the norms of probability theory, logic, or the axioms of rational choice. If a scholar adopts an intuition‐based justification of rational cognition, the normatively recommended behavior is defined not in terms of its cognitive success but by reference to a priori intuitions. The observed cognitive behavior will therefore be judged to be irrational. Alternatively, a scholar may endorse a strong relativism of different systems of intuition. In that case, however, no strong rationality inferences can be drawn (Cohen, 1981; Shier, 2000, p. 78).
In consequentialist accounts, in contrast, both the empirically observed reasoning and the “reasoning” of the normative systems will be evaluated with regard to cognitive success. For example, if logistic regression were regarded as the normative standard for predicting the value of a criterion based on a set of cues, then the cognitive success of this normative standard, assuming some statistical knowledge base, could be measured against the cognitive success of people's predictive inferences from the same input (see also Gigerenzer & Brighton, 2009). This opens up a new option for responding to conflicts between empirical observations and normative recommendations. The cognitive success of observed reasoning is not necessarily worse than that of the normative system. Indeed, observed reasoning may, in fact, even outperform normative recommendations. As mentioned before, evidence for the latter has been compiled in research on bounded and ecological rationality (Gigerenzer & Brighton, 2009; Gigerenzer, Hertwig, & Pachur, 2011; Hertwig et al., 2013, in press; Todd et al., 2012). If the observed cognitive behavior outperforms the normative system, the consequentialist would need to conclude that the assumed “normative system” is second best and, thus, can no longer be invoked to derive normative recommendations.20
If, however, observed behavior scored lower on cognitive success than the normative system, the consequentialist's conclusion would depend on a second theoretical choice that is open to consequentialists but not to intuition‐based accounts, namely, attitudes toward cognitive adaptationism. This position assumes that human cognition is near‐optimally adapted to its relevant environments. Therefore, a consequentialist proponent of cognitive adaptationism may be inclined to argue that the assumed measure of cognitive success is inappropriate and in need of revision. In contrast, a nonadaptationist consequentialist would conclude, faced with evidence that observed behavior's cognitive success is surpassed by that of the normative system, that human cognition is below par.
Let us explain the relationship between cognitive consequentialism and cognitive adaptationism in more detail. One might think that cognitive consequentialism entails cognitive adaptationism because the former evaluates cognitive systems by their cognitive success and cognitive success entails being well adapted. This reasoning, however, conflates “is” and “ought.” Cognitive consequentialism makes the normative claim that cognitive systems should be evaluated in terms of their cognitive success. This implies the normative claim that, ceteris paribus, cognitive systems should be well adapted to their environment. Cognitive adaptationism, by contrast, is not a normative requirement but an empirical thesis, stating that because humans are the product of evolutionary selection processes, they will be cognitively well adapted. This may or may not be the case, an issue to which we return below. For the present discussion, however, it is important to note that cognitive adaptationism is not entailed by the normative requirement of cognitive consequentialism. Consequently, a cognitive consequentialist can be more or less inclined to assume cognitive adaptationism. This choice, in turn, determines the response to an instance in which actual cognition scores lower on cognitive success than the normative system.
4.7 Cognitive consequentialism and the issue of adaptationism
Let us finally discuss cognitive adaptationism as found in Anderson's (1990, 1991a,b) work in more detail because, prima facie, his rational analysis bears significant resemblance to our account of cognitive consequentialism. Anderson's method consists of five iterative steps (Anderson, 1991a, p. 473):
1. Specify the goals of the cognitive system.
2. Develop a model of the environment to which the system is adapted.
3. Make minimal assumptions about computational limitations, such as memory storage and computation time.
4. Derive the optimal behavior given (1)–(3) above.
5. Finally, test empirically whether the predictions of the optimal behavior derived in (4) are confirmed by human cognitive performance; if not, revise the task–environment model developed in (1) and (2).
Anderson has applied rational analysis to domains such as memory analysis, categorization theory, causal inference, and problem solving.21 Regarding Step 1, in these domains Anderson identifies the goal of the cognitive system as some kind of predictive inference. Thus, there is agreement between Anderson's analysis of cognitive goals and our definition of cognitive success. Steps 2 and 3 are also consistent with a cognitive‐success analysis, except that in Step 3 we would argue that “minimal” assumptions should be replaced by realistic assumptions about computational limitations (see Simon, 1990). The first crucial difference from our account appears in Step 4. The consequentialist account does not attempt to “derive” the optimal method from the description of the task and environment, because apart from very simple cases this is impossible. In the area of prediction methods, the nonexistence of a universally optimal method is the content of Wolpert's (1996) famous no free lunch theorem (cf. Schurz, 2017). Simon (1991) demonstrated that what does the real work in Anderson's derivations are specific auxiliary assumptions about the cognitive system and its environment. We largely agree with Simon's critique of Anderson's optimal adaptation hypothesis: All that rational analysis can do is consider all available, though not all possible, competing methods for a given task and investigate their cognitive success. This is what the consequentialist account suggests.
Cognitive success, unlike optimal adaptation, thus behaves similarly to how Simon portrayed natural selection:
The theory of natural selection is not an optimizing theory for two reasons. First, it can, at best, produce only local optima, because it works by hill‐climbing up the nearest slope. It has no mechanism for jumping from peak to peak… . Second, it selects only among the alternatives that are available to it. (1991, p. 29)
This brings us to Anderson's Step 5. This step presupposes the adaptationist thesis that human cognitive behavior is nearly optimally adapted (Anderson, 1991a, table 1, p. 473). As a general claim, such “evolutionary optimism” is difficult to defend for several reasons. First, evolutionary selection sometimes produces suboptimal and even dysfunctional adaptations (Ridley, 1993, p. 343f). Second, although genetic evolution optimizes the biological reproduction rate, it is less clear how this process relates to cognition. Anderson acknowledged that evolutionary selection does not find a global optimum but merely a local one. However, there is a world of difference between a global and a local maximum: It can be as large as the difference between a sand hill and Mount Everest. All the constraints on cognitive processes dictated by the biological architecture of the human brain (see Jones & Love, 2011) are concealed in this difference.
In contrast to rational analysis, cognitive consequentialism does not imply cognitive adaptationism, though it is compatible with it. Obviously, the human brain is well adapted in many respects, but not in all. Therefore, we suggest that the view of rational cognition that appears most defensible and productive for contemporary cognitive science is that of a cognitive consequentialism that is not bound to strong adaptationist assumptions. In conclusion, the consequentialist account proposes to modify Anderson's Steps 4 and 5 as follows:
4'. Derive the consequences of the available competing cognitive methods [given the output of (1)–(3)] and test their cognitive success.
5'. Compare the locally optimal method [i.e., the output of (4')] with actual human behavior.
5'.1 If they agree, recommend the locally optimal method and infer that human cognition is well adapted.
5'.2 If they disagree, two cases are possible:
5'.2.1 If human behavior outperforms the locally optimal method, search for better cognitive methods (so as to eventually explain human behavior): Backtrack to (4') and iterate.
5'.2.2 If human performance is worse than the locally optimal method, search for local constraints on the mind's cognitive mechanisms that can explain the disagreement. Backtrack to (3), add these constraints, and iterate. At the same time, recommend the locally optimal method as a rational improvement of intuitive human cognition that can be learned through cognitive training.
In other words, at this point cognitive consequentialism potentially has educational implications.
5 Conclusion: New questions about rational cognition
The study of human cognition and its rationality seems inseparable from the question of how successful it is. In psychological and economic research, the rationality of human cognition has often been equated with coherence, that is, with rules of internal consistency, often defined by propositional logic and probability theory (see Arkes et al., 2016). However, for decades, psychologists have disagreed over how well coherence‐based normative systems describe human cognition and over which coherence‐based systems (logic, probability theory, or decision theory) should be granted the status of normative benchmarks for cognition (see our introduction). We have discussed the problems that arise when normative systems are justified by reference to a priori intuitions, as is typically the case. As an alternative and a potential response to some of these problems, we propose a consequentialist account of normative systems. The major tenets of this account are as follows:
Traditional normativism fails because a priori intuitions are inadequate as justifications of norms of rational cognition.
Traditional descriptivism fails because norms of rational cognition are inevitably needed as benchmarks for successful reasoning.
Norms of rational cognition are better justified from a consequentialist perspective, that is, in terms of their cognitive success.
The concept of cognitive success of a cognitive method assumes that all decision tasks can be reformulated as prediction tasks.
Cognitive success is defined as the product of a method's ecological validity and its applicability. Determining the cognitive success of the available methods permits one to compare them on the same scale.
Cognitive consequentialism is related to ecological rationality to the extent that cognitive success depends on the given cognitive task and a specific environment.
We hope and believe that this approach (or a similar consequentialist concept) offers a way to overcome the trite division of labor between the empirical study of the mind and philosophy. This approach raises new and interesting questions. For instance, what does cognitive success imply for research that has drawn strong conclusions about the (ir)rationality of human cognition (e.g., the heuristics‐and‐biases research program; Kahneman, 2011)? Or, assuming that the success of different cognitive methods depends on specific environments and that no method succeeds in all environments, will there be metamethods that are able to select the best method for the environment in question (see “meta‐induction”; Schurz, in press)? Finally, to what extent is cognitive success adaptive in an evolutionary sense, and how does the answer to this question influence the understanding of the relation between is (observed cognitive behavior) and ought (normatively recommended behavior)? We do not have answers to these and other questions, but we hope that we have convinced the reader that it is timely to ask these questions, thus leaving some skirmishes of the “rationality wars” behind us (Samuels, Stich, & Bishop, 2012) and turning the question of what rational cognition is into a less dogmatic and a more empirical one.