Psychopathic Personality Traits in the Military: An Examination of the Levenson Self-Report Psychopathy Scales in a Novel Sample. Joye C. Anestis et al. Assessment, https://doi.org/10.1177/1073191117719511
Abstract: The Levenson Self-Report Psychopathy Scale is a short, self-report measure initially developed to assess psychopathic traits in noninstitutionalized samples. The present study aimed to explore factor structure and convergent and discriminant validity of the Levenson Self-Report Psychopathy Scale in a large U.S. military sample (90.7% Army National Guard). Factor analytic data, regression, and correlational analyses point to the superiority of Brinkley, Diamond, Magaletta, and Heigel’s three-factor model in this sample. Implications for theory and the study of psychopathic personality traits in a military sample are discussed.
Keywords: psychopathy, assessment, military, self-report, Levenson
---
[...] an important next step for this line of research is to examine how psychopathic personality traits help and/or hurt service members in discharging their duty. On the one hand, it could be argued that psychopathic personality traits may be adaptive in the military. For example, National Guard and Reserve members are more likely than other service members to develop problems during and after deployment (Hotopf et al., 2006; Iversen et al., 2009; Milliken, Auchterlonie, & Hoge, 2007), yet recent research has noted that the interpersonal-affective deficits seen in psychopathy are protective against the development of PTSD symptoms in a sample of combat-exposed Army National Guard members (J. C. Anestis, Harrop, Anestis, & Green, 2017). Additionally, entering military service often requires a period of physical separation from home; this transition might be easier for individuals with psychopathic traits who have less intense connections to others. Military culture emphasizes traits such as authoritarianism, leadership, and secrecy (Hall, 2011; Strom et al., 2012), areas of potential strength for someone possessing psychopathic personality traits. Military service may even be particularly important for individuals with psychopathic traits who engaged in criminal activity prior to enlistment. Prior research points to a negative relationship between military service and criminal activity, and this negative relationship has been shown to be stronger for those who engaged in criminal activity prior to enlistment than those who did not (e.g., Maruna & Roy, 2007). Thus, military service may serve as a "turning point" for these at-risk adolescents (Teachman & Tedrow, 2016).
The hypothesized adaptive function of psychopathic personality traits in the military may also be related to the literature on resilience, as personality factors related to psychopathy have also been found to be related to postdeployment psychological resiliency (e.g., agreeableness, conscientiousness, emotional stability; Lee, Sudom, & Zamorski, 2013). At the same time, psychopathic personality traits may be detrimental to successful military service. Military culture is collectivist in nature and places emphasis on values such as loyalty, teamwork, obedience, and discipline (Strom et al., 2012). Individuals with psychopathic traits may struggle in this context and be at higher risk for discipline problems and discharge (e.g., Fiedler, Oltmanns, & Turkheimer, 2004). Future research should explore the likely multifaceted function of these personality traits as they relate to military service, particularly the likelihood that the relationship is curvilinear (e.g., certain psychopathic personality traits may be adaptive or related to resilience up to a certain level at which point they become maladaptive). Furthermore, from the perspective of the Two-Process (Patrick & Bernat, 2009) or Dual Pathway (Fowles & Dindo, 2009) models of psychopathy, the highly structured and intense indoctrination of military training may moderate expression of the impulsive deficits and externalizing tendencies of psychopathy, and commitment to a group and a code of honor may mitigate expression of the interpersonal-affective deficits, allowing members of the military expressing psychopathy-related traits to function better than forensic/offender samples demonstrating comparable mean trait expression.
Wednesday, May 16, 2018
Lottery losers behave significantly more dishonestly than lottery winners; dishonesty monotonically increases with the size of loss incurred in the lottery; winning a lottery does not have the same effect on dishonesty as winning a competition
Losing a Real-Life Lottery and Dishonest Behavior. Erez Sinivera, Gideon Yaniv. Journal of Behavioral and Experimental Economics, https://doi.org/10.1016/j.socec.2018.05.005
Highlights
• We investigate the effect of winning and losing a real-life lottery on dishonesty
• Lottery losers behave significantly more dishonestly than lottery winners
• Dishonesty monotonically increases with the size of loss incurred in the lottery
• Winning a lottery does not have the same effect on dishonesty as winning a competition
Abstract: We report the results of an experiment designed to examine the effect of winning and losing a real-life scratch-card lottery on subsequent dishonest behavior. People who were observed purchasing scratch cards at selling kiosks were invited, upon finishing scratching their cards and discovering whether (and how much) they had won or lost, to participate in a simple task with monetary payoffs and an opportunity to increase their pay by acting dishonestly. The results reveal that lottery losers behave significantly more dishonestly than lottery winners and that honesty monotonically increases with the net profit derived from the lottery (amount won minus lottery price). It thus follows that winning a lottery does not have the same effect on moral disengagement as winning a competition, which has been shown in the literature to engender dishonest behavior.
Key words: Scratch-Card Lottery; Lottery Winners; Lottery Losers; Dishonest Behavior
Tuesday, May 15, 2018
Are Sex Differences in Mating Strategies Overrated? Sociosexual Orientation as a Dominant Predictor in Online Dating Strategies
Are Sex Differences in Mating Strategies Overrated? Sociosexual Orientation as a Dominant Predictor in Online Dating Strategies. Lara Hallam, Charlotte J. S. De Backer, Maryanne L. Fisher, Michel Walrave. Evolutionary Psychological Science, https://link.springer.com/article/10.1007/s40806-018-0150-z
Abstract: Past research has extensively focused on sex differences in online dating strategies but has largely neglected sex-related individual difference variables such as sociosexuality. Sociosexuality (i.e., a measure of the number of restrictions people place on sexual relationships) gained attention in the 1990s among social and evolutionary psychologists, but has not been fully embraced by social scientists investigating interpersonal relationships and individual differences. Our aim is to investigate whether previously documented sex differences in mating strategies can be partially explained by sociosexuality, as a proximate manifestation of sex, by replicating a study about motives to use online dating applications, using an online survey. A first MANCOVA analysis (N = 254 online daters) not controlling for sociosexuality showed a significant main effect for age and sex. Adding sociosexuality to this analysis, a significant main effect of sociosexuality appeared indicating that individuals with a preference for unrestricted sexual relationships are more motivated to use online dating for reasons related to casual sex, whereas individuals who prefer restricted sexual relationships are more motivated to use online dating to find romance. Interestingly, the original main effect for sex and the significant interactions were eliminated. We argue that in social scientific research, scholars should pay more attention to sociosexuality when doing research about mating strategies.
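The covariate logic in this abstract (a sex main effect that is eliminated once sociosexuality enters the model) can be illustrated with a toy regression. Everything below is simulated and hypothetical: the variable names, effect sizes, and single-outcome OLS setup are illustrative assumptions, not the study's MANCOVA or its data; it is only a minimal sketch of how a mediating covariate can absorb a group effect.

```python
import numpy as np

# Toy, simulated data (not the study's): sociosexuality (SOI) differs by
# sex, and the casual-sex motive is driven by SOI alone. We then check the
# sex coefficient with and without SOI in the model.
rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n).astype(float)      # 0 = one sex, 1 = the other
soi = 1.0 * sex + rng.normal(0, 1, n)          # SOI shifted by sex
motive = 0.8 * soi + rng.normal(0, 1, n)       # motive depends only on SOI

def ols(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_sex_only = ols([sex], motive)        # [intercept, sex]
b_with_soi = ols([sex, soi], motive)   # [intercept, sex, soi]
# The raw sex coefficient is sizable, but it shrinks toward zero once SOI
# is controlled for, mirroring the eliminated main effect reported above.
```

In this construction the "sex difference" in motives is entirely carried by the covariate, which is the pattern the authors argue for.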
"Cousin Marriage Is Not Choice: Muslim Marriage and Underdevelopment"
Edlund, Lena. 2018. "Cousin Marriage Is Not Choice: Muslim Marriage and Underdevelopment." AEA Papers and Proceedings, 108: 353-57. DOI: 10.1257/pandp.20181084
Abstract: According to classical Muslim marriage law, a woman needs her guardian's (viz. father's) consent to marry. However, the resulting marriage payment, the mahr, is hers. This split bill may lie behind the high rates of consanguineous marriage in the Muslim world, where country estimates range from 20 to 60 percent. Cousin marriage can stem from a form of barter in which fathers contribute daughters to an extended family bridal pool against sons' right to draw from the same pool. In the resulting system, women are robbed of their mahr and sons marry by guarding their sisters' "honor" heeding clan elders.
Unexamined assumptions and unintended consequences of routine screening for depression
Unexamined assumptions and unintended consequences of routine screening for depression. Lisa Cosgrove et al. Journal of Psychosomatic Research, Volume 109, June 2018, Pages 9-11. https://doi.org/10.1016/j.jpsychores.2018.03.007
1. Assumption 1: The condition has a detectable early asymptomatic stage, but is progressive and, without early treatment, there will be worse health outcomes
2. Assumption 2: In the absence of screening, patients will not be identified and treated
3. Assumption 3: Depression treatments are effective for patients who screen positive but have not reported symptoms
4. Unintended consequence 1: overdiagnosis and overtreatment
5. Unintended consequence 2: the nocebo effect
6. Unintended consequence 3: misuse of resources
7. Conclusion
The therapeutic imperative in medicine means that we are good at rushing to do things that might “save lives” but not good at not doing, or undoing [30] (p348).
Sensible health care policy should be congruent with evidence. As Mangin astutely noted, our goodhearted desire to “do something” often undermines our ability to interrogate our assumptions and accept empirical evidence. Before implementing any screening program there must be high-quality evidence from randomized controlled trials (RCTs) that the program will result in sufficiently large improvements in health to justify both the harms incurred and the use of scarce healthcare resources.
Helping people who struggle with depression is a critically important public health issue. But screening for depression, over and above clinical observation, active listening and questioning, will lead to over-diagnosis and over-treatment, unnecessarily create illness identities in some people, and exacerbate health disparities by reducing our capacity to care for those with more severe mental health problems—the ones, often from disadvantaged groups, who need the care the most.
Research shows that “evidence-based” therapies are weak treatments. Their benefits are trivial. Most patients do not get well. Even the trivial benefits do not last.
Where Is the Evidence for “Evidence-Based” Therapy? Jonathan Shedler. Psychiatric Clinics of North America, Volume 41, Issue 2, June 2018, Pages 319-329. https://doi.org/10.1016/j.psc.2018.02.001
Buzzword. noun. An important-sounding usually technical word or phrase often of little meaning used chiefly to impress.
“Evidence-based therapy” has become a marketing buzzword. The term “evidence based” comes from medicine. It gained attention in the 1990s and was initially a call for critical thinking. Proponents of evidence-based medicine recognized that “We’ve always done it this way” is poor justification for medical decisions. Medical decisions should integrate individual clinical expertise, patients’ values and preferences, and relevant scientific research.1
But the term evidence based has come to mean something very different for psychotherapy. It has been appropriated to promote a specific ideology and agenda. It is now used as a code word for manualized therapy—most often brief, one-size-fits-all forms of cognitive behavior therapy (CBT). “Manualized” means the therapy is conducted by following an instruction manual. The treatments are often standardized or scripted in ways that leave little room for addressing the needs of individual patients.
Behind the “evidence-based” therapy movement lies a master narrative that increasingly dominates the mental health landscape. The master narrative goes something like this: “In the dark ages, therapists practiced unproven, unscientific therapy. Evidence-based therapies are scientifically proven and superior.” The narrative has become a justification for all-out attacks on traditional talk therapy—that is, therapy aimed at fostering self-examination and self-understanding in the context of an ongoing, meaningful therapy relationship.
Here is a small sample of what proponents of “evidence-based” therapy say in public: “The empirically supported psychotherapies are still not widely practiced. As a result, many patients do not have access to adequate treatment” (emphasis added).2 Note the linguistic sleight-of-hand: If the therapy is not “evidence based” (read, manualized), it is inadequate. Other proponents of “evidence-based” therapies go further in denigrating relationship-based, insight-oriented therapy: “The disconnect between what clinicians do and what science has discovered is an unconscionable embarrassment.”3 The news media promulgate the master narrative. The Washington Post ran an article titled “Is your therapist a little behind the times?” which likened traditional talk therapy to pre-scientific medicine when “healers commonly used ineffective and often injurious practices such as blistering, purging and bleeding.” Newsweek sounded a similar note with an article titled, “Ignoring the evidence: Why do Psychologists reject science?”
Note how the language leads to a form of McCarthyism. Because proponents of brief, manualized therapies have appropriated the term “evidence-based,” it has become nearly impossible to have an intelligent discussion about what constitutes good therapy. Anyone who questions “evidence-based” therapy risks being branded anti-evidence and anti-science.
One might assume, in light of the strong claims for “evidence-based” therapies and the public denigration of other therapies, that there must be extremely strong scientific evidence for their benefits. There is not. There is a yawning chasm between what we are told research shows and what research actually shows. Empirical research actually shows that “evidence-based” therapies are ineffective for most patients most of the time. First, I discuss what empirical research really shows. I then take a closer look at troubling practices in “evidence-based” therapy research.
PART I: WHAT RESEARCH REALLY SHOWS
Research shows that “evidence-based” therapies are weak treatments. Their benefits are trivial. Most patients do not get well. Even the trivial benefits do not last.
The neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning
Diffusion markers of dendritic density and arborization in gray matter predict differences in intelligence. Erhan Genç, Christoph Fraenz, Caroline Schlüter, Patrick Friedrich, Rüdiger Hossiep, Manuel C. Voelkle, Josef M. Ling, Onur Güntürkün & Rex E. Jung. Nature Communications, volume 9, Article number: 1905 (2018), doi:10.1038/s41467-018-04268-8
Abstract: Previous research has demonstrated that individuals with higher intelligence are more likely to have larger gray matter volume in brain areas predominantly located in parieto-frontal regions. These findings were usually interpreted to mean that individuals with more cortical brain volume possess more neurons and thus exhibit more computational capacity during reasoning. In addition, neuroimaging studies have shown that intelligent individuals, despite their larger brains, tend to exhibit lower rates of brain activity during reasoning. However, the microstructural architecture underlying both observations remains unclear. By combining advanced multi-shell diffusion tensor imaging with a culture-fair matrix-reasoning test, we found that higher intelligence in healthy individuals is related to lower values of dendritic density and arborization. These results suggest that the neuronal circuitry associated with higher intelligence is organized in a sparse and efficient manner, fostering more directed information processing and less cortical activity during reasoning.
Patients with troublesome alcohol history had a significantly lower prevalence of cardiovascular disease events, even after adjusting for demographic and traditional risk factors, despite higher tobacco use & male sex predominance
Cardiovascular Events in Alcoholic Syndrome With Alcohol Withdrawal History: Results From the National Inpatient Sample. Parasuram Krishnamoorthy, Aditi Kalla, Vincent M. Figueredo. The American Journal of the Medical Sciences, Volume 355, Issue 5, May 2018, Pages 425-427. https://doi.org/10.1016/j.amjms.2018.01.005
Abstract
Background: Epidemiologic studies suggest reduced cardiovascular disease (CVD) events with moderate alcohol consumption. However, heavy and binge drinking may be associated with higher CVD risk. Utilizing the Nationwide Inpatient Sample, we studied the association between a troublesome alcohol history (TAH), defined as having diagnoses of both chronic alcohol syndrome and an acute withdrawal history, and CVD events.
Methods: Patients >18 years with diagnoses of both chronic alcohol syndrome and acute withdrawal using the International Classification of Diseases-Ninth Edition-Clinical Modification (ICD-9-CM) codes 303.9 and 291.81, were identified in the Nationwide Inpatient Sample 2009-2010 database. Demographics, including age and sex, as well as CVD event rates were collected.
Results: Patients with TAH were more likely to be male, to have a smoking history, and to have hypertension, and less likely to have diabetes, hyperlipidemia, or obesity. After multimodal adjusted regression analysis, odds of coronary artery disease, acute coronary syndrome, in-hospital death and heart failure were significantly lower in patients with TAH when compared to the general discharge patient population.
Conclusions: Utilizing a large inpatient database, patients with TAH had a significantly lower prevalence of CVD events, even after adjusting for demographic and traditional risk factors, despite higher tobacco use and male sex predominance, when compared to the general patient population.
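For readers unfamiliar with the effect measure: the adjusted models above build on the crude odds ratio from a 2x2 exposure-by-outcome table. A minimal sketch with made-up counts (not the NIS data; the numbers are purely illustrative):

```python
# Crude odds ratio for a 2x2 table: exposed cases a, exposed non-cases b,
# unexposed cases c, unexposed non-cases d. OR = (a/b) / (c/d).
def odds_ratio(a, b, c, d):
    return (a / b) / (c / d)

# Made-up counts in which the exposed (TAH) group has lower odds of a CVD
# event than the unexposed group: 50/950 exposed vs. 100/900 unexposed.
or_tah = odds_ratio(50, 950, 100, 900)   # 9/19, about 0.47, i.e., lower odds
```

An OR below 1, as here, is the direction of effect the abstract reports after adjustment for demographics and traditional risk factors.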
Is Accurate, Positive, or Inflated Self-perception Most Advantageous for Psychological Adjustment? Better Inflated
Humberg, Sarah, Michael Dufner, Felix D. Schönbrodt, Katharina Geukes, Roos Hutteman, Albrecht Kuefner, Maarten van Zalk, Jaap J. Denissen, Steffen Nestler, and Mitja Back. 2018. "Is Accurate, Positive, or Inflated Self-perception Most Advantageous for Psychological Adjustment? A Competitive Test of Key Hypotheses." Preprint, PsyArXiv, April 15. doi:10.17605/OSF.IO/9W3BH. Final version: Journal of Personality and Social Psychology, 116(5), 835-859. http://dx.doi.org/10.1037/pspp0000204
Abstract: Empirical research on the (mal-)adaptiveness of favorable self-perceptions, self-enhancement, and self-knowledge has typically applied a classical null-hypothesis testing approach and provided mixed and even contradictory findings. Using data from five studies (laboratory and field, total N = 2,823), we employed an information-theoretic approach combined with Response Surface Analysis to provide the first competitive test of six popular hypotheses: that more favorable self-perceptions are adaptive versus maladaptive (Hypotheses 1 and 2: Positivity of self-view hypotheses), that higher levels of self-enhancement (i.e., a higher discrepancy of self-viewed and objectively assessed ability) are adaptive versus maladaptive (Hypotheses 3 and 4: Self-enhancement hypotheses), that accurate self-perceptions are adaptive (Hypothesis 5: Self-knowledge hypothesis), and that a slight degree of self-enhancement is adaptive (Hypothesis 6: Optimal margin hypothesis). We considered self-perceptions and objective ability measures in two content domains (reasoning ability, vocabulary knowledge) and investigated six indicators of intra- and interpersonal psychological adjustment. Results showed that most adjustment indicators were best predicted by the positivity of self-perceptions, there were some specific self-enhancement effects, and evidence generally spoke against the self-knowledge and optimal margin hypotheses. Our results highlight the need for comprehensive simultaneous tests of competing hypotheses. Implications for the understanding of underlying processes are discussed.
---
Altogether, the SK Hypothesis (Self-Knowledge Hypothesis) was unable to compete against the other hypotheses for any of the regarded outcome categories: Each analysis suggested that it was unlikely that SK effects underlie the empirical data. That is, persons with accurate knowledge of their intelligence did not seem to be better adjusted than persons with less accurate self-perceptions (Allport, 1937; Higgins, 1996; Jahoda, 1958). Similarly, our findings did not support the conjecture that persons who see their intelligence slightly more positively than it really is are better adjusted (OM Hypothesis; Baumeister, 1989).
Conclusions
In the present article, we theoretically disentangled all central hypotheses on the adaptiveness of self-perceptions, highlighted the need for a simultaneous empirical evaluation of these hypotheses, presented a methodological framework to this aim, and employed it to five substantive datasets. With some exceptions, the rule “the higher self-perceived intelligence, the better adjusted” seemed to hold for most outcomes we considered. By contrast, we found that individual differences in neither the accuracy of self-perceptions nor an optimal margin of self-viewed versus real ability predicted intra- or interpersonal adjustment. Similarly, intellectual self-enhancement was largely found to be unrelated to the considered adjustment indicators, with two exceptions (i.e., SE concerning reasoning ability seemed detrimental for peer-perceived communal attributes; SE concerning vocabulary knowledge seemed beneficial for some self-perceived adjustment indicators). We hope that future research will make use of the approach outlined here to replicate and extend our results, thereby shedding more light on the intra- and interpersonal consequences of self-perceptions.
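The paper's "competitive test" pits constrained models of adjustment against each other on the same data using an information criterion. A minimal sketch of that logic, on simulated data (the data-generating process, predictor recastings, and parameter values here are invented for illustration, not the authors' Response Surface Analysis specification):

```python
import math
import random

random.seed(0)
n = 500
# Simulated data under a "positivity" world: adjustment tracks the self-view only.
s = [random.gauss(0, 1) for _ in range(n)]            # self-viewed ability
o = [random.gauss(0, 1) for _ in range(n)]            # objectively assessed ability
adj = [0.5 * si + random.gauss(0, 1) for si in s]     # adjustment outcome

def aic_simple(x, y):
    """Gaussian AIC (up to a constant) of the one-predictor model y ~ x."""
    m = len(y)
    mx, my = sum(x) / m, sum(y) / m
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    rss = syy - sxy ** 2 / sxx                        # residual SS of the best-fit line
    return m * math.log(rss / m) + 2 * 3              # 3 params: intercept, slope, sigma

# The competing hypotheses recast as single predictors of adjustment:
predictors = {
    "positivity": s,                                               # H1/H2: self-view itself
    "self-enhancement": [si - oi for si, oi in zip(s, o)],         # H3/H4: discrepancy
    "self-knowledge": [-(si - oi) ** 2 for si, oi in zip(s, o)],   # H5: (in)accuracy penalty
}
scores = {name: aic_simple(x, adj) for name, x in predictors.items()}
best = min(scores, key=scores.get)                    # lowest AIC wins the competition
```

Because all three models see the same outcome, the AIC ranking directly answers which hypothesis the data favor; here the simulation is rigged toward positivity, and the criterion recovers that.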
Testosterone may influence social behavior by increasing the frequency of words related to aggression, sexuality, and status, and it may alter the quality of interactions with an intimate partner by amplifying emotions via swearing
Preliminary evidence that androgen signaling is correlated with men's everyday language. Jennifer S. Mascaro et al. American Journal of Human Biology, https://doi.org/10.1002/ajhb.23136
Objectives: Testosterone (T) has an integral, albeit complex, relationship with social behavior, especially in the domains of aggression and competition. However, examining this relationship in humans is challenging given the often covert and subtle nature of human aggression and status‐seeking. The present study aimed to investigate whether T levels and genetic polymorphisms in the AR gene are associated with social behavior assessed via natural language use.
Methods: We used unobtrusive, behavioral, real‐world ambulatory assessments of men in partnered heterosexual relationships to examine the relationship between plasma T levels, variation in the androgen receptor (AR) gene, and spontaneous, everyday language in three interpersonal contexts: with romantic partners, with co‐workers, and with their children.
Results: Men's T levels were positively correlated with their use of achievement words with their children, and the number of AR CAG trinucleotide repeats was inversely correlated with their use of anger and reward words with their children. T levels were positively correlated with sexual language and with use of swear words in the presence of their partner, but not in the presence of co‐workers or children.
Conclusions: Together, these results suggest that T may influence social behavior by increasing the frequency of words related to aggression, sexuality, and status, and that it may alter the quality of interactions with an intimate partner by amplifying emotions via swearing.
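The study's ambulatory method reduces to counting category words per transcript and relating those rates to hormone measures. A toy sketch of the counting step (the word list and transcripts are invented stand-ins, not the study's LIWC-style dictionaries):

```python
# Invented stand-in for a closed-vocabulary word category (e.g., swearing).
SWEAR = {"damn", "hell", "crap"}

def category_rate(transcript, category):
    """Fraction of a transcript's words that fall in the category."""
    words = transcript.lower().split()
    return sum(w.strip(".,!?") in category for w in words) / len(words)

# One rate per speaker per context; these rates would then be correlated
# with plasma T and AR CAG repeat length across men.
partner_transcripts = [
    "Damn that was a long day.",
    "Dinner was nice and quiet.",
]
rates = [category_rate(t, SWEAR) for t in partner_transcripts]
```

Keeping the rate as a fraction of total words, rather than a raw count, controls for how much each man talked in a given context.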
The religiosity-moral self-image link was most strongly explained by personality traits and individual differences in prosociality/empathy, rather than a desirability bias; the link is minimally accounted for by impression management
Religion and moral self-image: The contributions of prosocial behavior, socially desirable responding, and personality. Sarah J. Ward, Laura A. King. Personality and Individual Differences, Volume 131, 1 September 2018, Pages 222–231. https://doi.org/10.1016/j.paid.2018.04.028
Highlights
• The religiosity-moral self-image link was most strongly explained by prosocial traits.
• This association was only minimally accounted for by impression management.
• Even when under a fake lie detector, religious people still reported high moral self-image.
Abstract: Often, the high moral self-image held by religious people is viewed with skepticism. Three studies examined the contributions of socially desirable responding (SDR), personality traits, prosocial behavior, and individual differences in prosocial tendencies to the association between religiosity and moral self-image. In Studies 1 and 2 (N's = 346, 507), personality traits (agreeableness, conscientiousness) and individual differences in empathy/prosociality were the strongest explanatory variables for religiosity's association with moral self-image measures; SDR and prosocial behavior contributed more weakly to this association. In Study 3 (N = 180), the effect of a bogus pipeline manipulation on moral self-image was moderated by religiosity. Among the highly religious, moral self-image remained high even in the bogus pipeline condition. These studies show that the association between religiosity and moral self-image is most strongly explained by personality traits and individual differences in prosociality/empathy, rather than a desirability response bias.
Keywords: Religion; Morality; Moral self-image; Prosociality
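Study 3's claim is a moderation effect: the bogus pipeline (fake lie detector) lowers reported moral self-image among less religious participants but not among the highly religious. A toy illustration of how such moderation is computed from cell means (all numbers below are invented, not the study's data):

```python
import statistics

# (religiosity, condition) -> moral self-image scores; invented values that
# mimic the reported pattern.
data = {
    ("low", "control"): [6.0, 5.8, 6.2],
    ("low", "bogus_pipeline"): [5.0, 4.8, 5.2],
    ("high", "control"): [6.5, 6.4, 6.6],
    ("high", "bogus_pipeline"): [6.5, 6.3, 6.6],
}

def condition_effect(religiosity):
    """Drop in reported moral self-image under the fake lie detector."""
    ctrl = statistics.mean(data[(religiosity, "control")])
    bp = statistics.mean(data[(religiosity, "bogus_pipeline")])
    return ctrl - bp

# Moderation = the condition effect differs across levels of the moderator.
interaction = condition_effect("low") - condition_effect("high")
```

A positive interaction here reproduces the paper's pattern: the manipulation moves self-reports only when religiosity is low, which is why the authors conclude the religiosity link is not mere impression management.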
Monday, May 14, 2018
Sex differences in human brain pain pathways are present from birth: More sensitivity in girls
The distribution of pain activity across the human neonatal brain is sex dependent. Madeleine Verriotis et al. NeuroImage, https://doi.org/10.1016/j.neuroimage.2018.05.030
Highlights
• Noxious stimulation causes widespread pain related potentials in the neonatal brain.
• This widespread pain response is more likely to occur in female babies.
• Brain responses to touch do not differ between male and female babies.
• Sex differences in human brain pain pathways are present from birth.
Abstract: In adults, there are differences between male and female structural and functional brain connectivity, specifically for those regions involved in pain processing. This may partly explain the observed sex differences in pain sensitivity, tolerance, and inhibitory control, and in the development of chronic pain. However, it is not known if these differences exist from birth. Cortical activity in response to a painful stimulus can be observed in the human neonatal brain, but this nociceptive activity continues to develop in the postnatal period and is qualitatively different from that of adults, partly due to the considerable cortical maturation during this time. This research aimed to investigate the effects of sex and prematurity on the magnitude and spatial distribution pattern of the long-latency nociceptive event-related potential (nERP) using electroencephalography (EEG). We measured the cortical response time-locked to a clinically required heel lance in 81 neonates born between 29 and 42 weeks gestational age (median postnatal age 4 days). The results show that heel lance results in a spatially widespread nERP response in the majority of newborns. Importantly, a widespread pattern is significantly more likely to occur in females, irrespective of gestational age at birth. This effect is not observed for short latency somatosensory waveforms in the same infants, indicating that it is selective for the nociceptive component of the response. These results suggest the early onset of a greater anatomical and functional connectivity reported in the adult female brain, and indicate the presence of pain-related sex differences from birth.
Keywords: Pain; EEG; Nociception; Sex; Neonatal; Brain
Male Sexlessness is Rising, But Not for the Reasons Incels Claim
Male Sexlessness is Rising, But Not for the Reasons Incels Claim. Lyman Stone. Institute of Family Studies, May 2018. https://ifstudies.org/blog/male-sexlessness-is-rising-but-not-for-the-reasons-incels-claim
A recent terrorist attack in Toronto, which left 10 people dead, has brought global attention to the “incel” movement, which stands for “involuntarily celibate.” The term refers to a growing number of people, particularly young men, who feel shut out of any possibility for romance, and have formed a community based around mourning their celibacy, supporting each other, and, in some cases, stoking a culture of impotent bitterness and rage at the wider world. In a few cases, this rage has spilled over in the form of terrorist attacks by “incels.” While the incels’ misogyny deserves to be called out and condemned, their ideas are unlikely to just go away. As such, the question must be posed: is the incel account of modern sexual life correct or not?
Incel communities tend to believe a few key facts about modern mating practices. First, they tend to believe women have become very sexually promiscuous over time, and indeed that virtually all women are highly promiscuous. The nickname incels use for an attractive, sexually available woman is “Stacy.” Second, they believe a small number of males dominate the market for romance, and that their dominance is growing. They call these alpha-males “Chads.” Finally, they tend to argue that the market for sex is winner-take-all, with a few “Chads” conquering all the “Stacies.” The allegedly handsome and masculine Chads are helped along by social media, Tinder, and an allegedly vacuous and appearance-focused dating scene, such that modern society gives Chads excessive amounts of sex while leaving a growing number of males with no sexual partner at all. These left out men are the incels.
This view is basically wrong. But it turns out to be wrong in an interesting and informative way.
How Much Sex Are People Having?
First of all, we may wonder about the actual trends in sexual behavior. Using data from the General Social Survey (GSS), it’s possible to estimate about how often people of different groups have sex. For this article, I will focus on individuals aged 22-35 who have never been married, and particularly males within that group.
Most groups of people age 22-35 have broadly similar amounts of sex; probably something like 60-100 sexual encounters per year. Never-married people have the least sex, about 60-80 encounters per year, while ever-married people have more sex, about 70-110 encounters per year, on average. Historically, never-married men have reported higher sexual frequency than never-married women. However, in the 2014 and 2016 GSS samples, that changed: never-married men now report slightly lower sexual frequency than never-married women. This is mostly because men are reporting less sex, not that women are reporting more sex. Female sexual frequency is essentially unchanged since 2000. In other words, a key piece of the incel story about rising female promiscuity just isn’t there.
But sexual frequency may be dominated by “Chads” and “Stacies.” What we really want to know is what share of these men and women have not had any sex. The graph below shows what share of these young men and women had not had sex at all in the last 12 months, by their sex and marital status.
[Full text and charts at the link above.]
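The statistic the article tracks is simple to compute from GSS-style microdata: within each sex-by-marital-status group of 22-35-year-olds, the share reporting zero sexual encounters in the past 12 months. A sketch with made-up records (not GSS data or the GSS variable names):

```python
# (sex, marital_status, encounters_last_12mo) -- invented records.
records = [
    ("male", "never_married", 0),
    ("male", "never_married", 52),
    ("male", "never_married", 0),
    ("female", "never_married", 24),
    ("female", "never_married", 0),
    ("male", "ever_married", 104),
]

def sexless_share(records, sex, marital):
    """Share of the group reporting no sex in the last 12 months."""
    group = [r for r in records if r[0] == sex and r[1] == marital]
    return sum(r[2] == 0 for r in group) / len(group)

male_share = sexless_share(records, "male", "never_married")
female_share = sexless_share(records, "female", "never_married")
```

Comparing this share across survey years, rather than mean frequency, is what distinguishes "a few Chads getting all the sex" from "more men getting none."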
Putting the “Sex” into “Sexuality”: Understanding Online Pornography using an Evolutionary Framework
Putting the “Sex” into “Sexuality”: Understanding Online Pornography using an Evolutionary Framework. Catherine Salmon, Maryanne L. Fisher. EvoS Journal, 2018, NEEPS XI, pp. 1-15. http://evostudies.org/evos-journal/about-the-journal/
ABSTRACT: One encounters an obvious problem when using an evolutionary framework to understand online pornography. On the one hand, theories of sex specificity in mating strategies and evolved human nature lead to the prediction that there are commonalities and universals in the content people would seek in online pornography. That is, due to the fact that men have faced a distinct set of issues over the duration of human evolution, research suggests general tendencies in mate preferences, and presumably in the types of pornography that men therefore consume. Likewise, women have dealt with sex-specific challenges during human evolutionary history, resulting in patterns of mate preferences that are reflected in the types of online pornography they consume. Consequently, although the sexes likely differ in the content they prefer, there also should be a rather limited range of material that addresses male and female evolved heritages. Looking online, however, we can immediately ascertain that this limited focus is not the case, and hence, the dilemma. There is a wide range of pornographic material available online, to the extent that we are left with no option but to agree with Rule 34: "If it exists, there is porn of it." This problem demands a solution; how can there be evolved tendencies and yet such diversity in the content of online pornography? We review who the consumers of online pornography are, how frequently they consume it, and the type of content that is most commonly consumed. Our goal is to address the issue of common sexual interests and the diversity of online pornography. We discuss not just sex-specific content but also the variety of interests that are seen within online pornography and erotic literature.
KEYWORDS: Mate Preferences, Pornography, Internet, Sex Differences, Sexual Selection
A model of the dynamics of household vegetarian and vegan rates in the U.K.: A persistent vegetarian campaign has a significantly positive effect on the rate of vegan consumption
A model of the dynamics of household vegetarian and vegan rates in the U.K. James Waters. Appetite, https://doi.org/10.1016/j.appet.2018.05.017
Abstract: Although there are many studies of determinants of vegetarianism and veganism, there have been no previous studies of how their rates in a population jointly change over time. In this paper, we present a flexible model of vegetarian and vegan dietary choices, and derive the joint dynamics of rates of consumption. We fit our model to a pseudo-panel with 23 years of U.K. household data, and find that while vegetarian rates are largely determined by current household characteristics, vegan rates are additionally influenced by their own lagged value. We solve for equilibrium rates of vegetarianism and veganism, show that rates of consumption return to their equilibrium levels following a temporary event which changes those rates, and estimate the effects of campaigns to promote non-meat diets. We find that a persistent vegetarian campaign has a significantly positive effect on the rate of vegan consumption, in answer to an active debate among vegan campaigners.
Keywords: Vegetarianism; Veganism; Food choice; Dietary change; Social influence; Animal advocacy
---
Strange... See this (Rolf Degen): 84 percent of all vegetarians return to meat https://plus.google.com/101046916407340625977/posts/JPsRvnMtbYo
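The paper's core dynamic claim, that the vegan rate depends on its own lagged value and therefore has an equilibrium it returns to after a temporary shock, can be sketched with a one-lag recursion (the parameter values below are illustrative, not the authors' estimates):

```python
# Hypothetical dynamics: v_t = a + b * v_{t-1}, stable when |b| < 1.
a, b = 0.2, 0.6

# Fixed point: v* = a + b*v*  =>  v* = a / (1 - b).
equilibrium = a / (1 - b)

v = equilibrium
v += 0.3                  # a one-off campaign temporarily raises the rate
path = []
for _ in range(30):
    v = a + b * v         # the shock decays geometrically, as b**t
    path.append(v)
```

Because the deviation shrinks by a factor of `b` each period, a temporary campaign fades out, which is why the paper distinguishes it from a persistent campaign that shifts `a` and hence the equilibrium itself.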
The Goldilocks Placebo Effect: Placebo Effects Are Stronger When People Select a Treatment from an Optimal Number of Choices
The Goldilocks Placebo Effect: Placebo Effects Are Stronger When People Select a Treatment from an Optimal Number of Choices. Rebecca J. Hafner, Mathew P. White and Simon J. Handley. The American Journal of Psychology, Vol. 131, No. 2 (Summer 2018), pp. 175-184. http://www.jstor.org/stable/10.5406/amerjpsyc.131.2.0175
Abstract: People are often more satisfied with a choice (e.g., chocolates, pens) when the number of options in the choice set is “just right” (e.g., 10–12), neither too few (e.g., 2–4) nor too many (e.g., 30–40). We investigated this “Goldilocks effect” in the context of a placebo treatment. Participants reporting nonspecific complaints (e.g., headaches) chose one of Bach's 38 Flower Essences from a choice set of 2 (low choice), 12 (optimal choice), or 38 (full choice) options to use for a 2-week period. Replicating earlier findings in the novel context of a health-related choice, participants were initially more satisfied with the essence they selected when presented with 12 versus either 2 or 38 options. More importantly, self-reported symptoms were significantly lower 2 weeks later in the optimal (12) versus nonoptimal choice conditions (2 and 38). Because there is no known active ingredient in Bach's Flower Essences, we refer to this as the Goldilocks placebo effect. Supporting a counterfactual thinking account of the Goldilocks effect, and despite significantly fewer symptoms after 2 weeks, those in the optimal choice set condition were no longer significantly more satisfied with their choice at the end of testing. Implications for medical practice, especially patient choice, are discussed.
How Many Atheists Are There? Indirect estimate is 26%
How Many Atheists Are There? Will M. Gervais, Maxine B. Najle. Social Psychological and Personality Science, https://doi.org/10.1177/1948550617707015
Abstract: One crucible for theories of religion is their ability to predict and explain the patterns of belief and disbelief. Yet, religious nonbelief is often heavily stigmatized, potentially leading many atheists to refrain from outing themselves even in anonymous polls. We used the unmatched count technique and Bayesian estimation to indirectly estimate atheist prevalence in two nationally representative samples of 2,000 U.S. adults apiece. Widely cited telephone polls (e.g., Gallup, Pew) suggest U.S. atheist prevalence of only 3–11%. In contrast, our most credible indirect estimate is 26% (albeit with considerable estimate and method uncertainty). Our data and model predict that atheist prevalence exceeds 11% with greater than .99 probability and exceeds 20% with roughly .8 probability. Prevalence estimates of 11% were even less credible than estimates of 40%, and all intermediate estimates were more credible. Some popular theoretical approaches to religious cognition may require heavy revision to accommodate actual levels of religious disbelief.
Keywords: religion, atheism, social desirability, stigma, Bayesian estimation
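The unmatched count technique mentioned in the abstract can be sketched in a few lines. One group counts how many of several innocuous statements apply to them; a second group gets the same list plus the sensitive item, so respondents never reveal which items were true. The difference in mean counts estimates the sensitive item's prevalence. All counts below are invented for illustration; the paper layers Bayesian estimation on top of this basic logic.

```python
# Unmatched count technique (UCT): prevalence of a sensitive trait is
# estimated as mean(count with sensitive item) - mean(count without it).

def uct_prevalence(control_counts, treatment_counts):
    """Difference-in-means UCT estimator of sensitive-item prevalence."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

# Hypothetical counts from 10 respondents per group:
control = [1, 2, 2, 3, 1, 2, 2, 1, 3, 2]    # 4 innocuous items only
treatment = [2, 2, 3, 3, 1, 2, 3, 2, 3, 2]  # same 4 plus the sensitive item

print(round(uct_prevalence(control, treatment), 2))  # → 0.4
```

Because only aggregate counts are reported, no individual respondent's answer to the sensitive item is ever identifiable, which is what makes the method robust to the stigma effects the paper describes.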
Sunday, May 13, 2018
Elite chess players live longer than the general population and have a similar survival advantage to elite competitors in physical sports
Longevity of outstanding sporting achievers: Mind versus muscle. An Tran-Duy, David C. Smerdon, Philip M. Clarke. PLOS, https://doi.org/10.1371/journal.pone.0196938
Abstract
Background: While there is strong evidence showing the survival advantage of elite athletes, much less is known about those engaged in mind sports such as chess. This study aimed to examine the overall as well as regional survival of International Chess Grandmasters (GMs) with reference to the general population, and compare relative survival (RS) of GMs with that of Olympic medallists (OMs).
Methods: Information on 1,208 GMs and 15,157 OMs from 28 countries was extracted from publicly available data sources. The Kaplan-Meier method was used to estimate the survival rates of the GMs. A Cox proportional hazards model was used to adjust the survival for region, year at risk, age at risk and sex, and to estimate the life expectancy of the GMs. The RS rate was computed by matching each GM or OM by year at risk, age at risk and sex to the life table of the country the individual represented.
Results: The survival rates of GMs at 30 and 60 years since GM title achievement were 87% and 15%, respectively. The life expectancy of GMs at the age of 30 years (which is near the average age when they attained a GM title) was 53.6 ([95% CI]: 47.7–58.5) years, which is significantly greater than the overall weighted mean life expectancy of 45.9 years for the general population. Compared to Eastern Europe, GMs in North America (HR [95% CI]: 0.51 [0.29–0.88]) and Western Europe (HR [95% CI]: 0.53 [0.34–0.83]) had a longer lifespan. The RS analysis showed that both GMs and OMs had a significant survival advantage over the general population, and there was no statistically significant difference in the RS of GMs (RS [95% CI]: 1.14 [1.08–1.20]) compared to OMs (RS [95% CI]: 1.09 [1.07–1.11]) at 30 years.
Conclusion: Elite chess players live longer than the general population and have a similar survival advantage to elite competitors in physical sports.
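The Kaplan-Meier method used in this study can be illustrated with a minimal sketch: survival at time t is the running product, over event times, of (1 - deaths / number still at risk), with censored observations (players still alive at last follow-up) leaving the risk set without triggering an event. The follow-up times below are made up and are not the paper's data.

```python
# Minimal Kaplan-Meier estimator over (time, observed) pairs;
# observed=False marks a right-censored observation.

def kaplan_meier(data):
    """Return [(event_time, survival_probability)] from (time, observed) pairs."""
    data = sorted(data)
    n_at_risk = len(data)
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += data[i][1]                   # True counts as 1 death
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= at_t                          # censored also leave risk set
    return curve

# Hypothetical years of follow-up since title achievement for 8 players:
sample = [(5, True), (8, False), (12, True), (12, True),
          (20, False), (25, True), (30, False), (30, False)]
for t, s in kaplan_meier(sample):
    print(t, round(s, 3))   # steps at 5, 12, and 25 years
```

The relative-survival comparison in the paper then divides such an observed curve by the expected survival from matched national life tables, which is how the 1.14 and 1.09 figures above arise.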
Gender differences in Everyday Risk Taking: An Observational Study of Pedestrians in Newcastle upon Tyne
Gender differences in Everyday Risk Taking: An Observational Study of Pedestrians in Newcastle upon Tyne. Eryn O'Dowd, Thomas V Pollet. Letters on Evolutionary Behavioral Science, Vol 9, No 1 (2018). http://lebs.hbesj.org/index.php/lebs/article/view/lebs.2018.65
Abstract: Evolutionary psychologists have demonstrated that there are evolved differences in risk taking between men and women. Potentially, these also play out in everyday behaviours, such as in traffic. We hypothesised that (perceived) gender would influence using a pedestrian crossing. In addition, we also explored if the presence of a contextual factor, presence of daylight, could modify risk taking behaviour. 558 pedestrians were directly observed and their use of a crossing near a Metro station in a large city in the North East of England was coded. Using logistic regression, we found evidence that women were more inclined than men to use the crossing. We found no evidence for a contextual effect of daylight or an interaction between daylight and gender on use of the crossing. We discuss the limitations and implications of this finding with reference to literature on risk taking.
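In the simplest case, with no covariates, the logistic regression of crossing use on gender reduces to an odds ratio from a 2x2 table, with a 95% confidence interval from the standard error of the log odds ratio. The counts below are invented and are not the paper's data.

```python
# Odds ratio with 95% CI from a 2x2 table, the no-covariate analogue
# of a logistic regression with a single binary predictor.
from math import log, sqrt, exp

def odds_ratio(a, b, c, d):
    """OR and 95% CI for a 2x2 table:
    a = women using crossing, b = women not using,
    c = men using crossing,   d = men not using."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(160, 120, 110, 168)  # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI excluding 1 would correspond to the significant gender effect the study reports; the paper's full model additionally enters daylight and a gender-by-daylight interaction.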
From 2012: Status quo maintenance has several mechanisms; loss aversion, regret avoidance, repeated exposure, rationalization, and an assumption of goodness due to mere existence and longevity create a preference for existing states
From 2012: Bias in Favor of the Status Quo. Scott Eidelman, Christian S. Crandall. Social and Personality Psychology Compass, https://doi.org/10.1111/j.1751-9004.2012.00427.x
Abstract: People favor the existing and longstanding states of the world. Rational explanations for status quo maintenance are complemented by a number of non‐rational mechanisms; loss aversion, regret avoidance, repeated exposure, and rationalization create a preference for existing states. We show that the status quo also benefits from a simple assumption of goodness due to mere existence and longevity; people treat existence as a prima facie case for goodness, aesthetic and ethical. Longevity increases this preference. These biases operate heuristically, forming barriers to cognitive and social change.
Check also From 2010: The longer something is thought to exist, the better it is evaluated
From 2010: Longer is better. Scott Eidelman, Jennifer Pattershall, Christian S. Crandall. Journal of Experimental Social Psychology, Volume 46, Issue 6, November 2010, Pages 993-998. https://www.bipartisanalliance.com/2018/05/from-2010-longer-something-is-thought.html
From 2010: The longer something is thought to exist, the better it is evaluated
From 2010: Longer is better. Scott Eidelman, Jennifer Pattershall, Christian S. Crandall. Journal of Experimental Social Psychology, Volume 46, Issue 6, November 2010, Pages 993-998. https://doi.org/10.1016/j.jesp.2010.07.008
Abstract: The longer something is thought to exist, the better it is evaluated. In Study 1, participants preferred an existing university requirement over an alternative; this pattern was more pronounced when the existing requirement was said to be in place for a longer period of time. In Study 2, participants rated acupuncture more favorably as a function of how old the practice was described. Aesthetic judgments of art (Study 3) and nature (Study 4) were also positively affected by time in existence, as were gustatory evaluations of an edible consumer good (Study 5). Features of the research designs argue against mere exposure, loss aversion, and rational inference as explanations for these findings. Instead, time in existence seems to operate as a heuristic; longer means better.
Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. We observed that, instead, they boost self-enhancement
Gebauer, Jochen, Nehrlich, A.D., Stahlberg, D., Sedikides, Constantine, Hackenschmidt, D, Schick, D, Stegmaie, C A, Windfelder, C. C, Bruk, A and Mander, J V (2018) Mind-body practices and the self: yoga and meditation do not quiet the ego, but instead boost self-enhancement. Psychological Science, 1-22. (In Press). https://eprints.soton.ac.uk/id/eprint/420273
Abstract: Mind-body practices enjoy immense public and scientific interest. Yoga and meditation are highly popular. Purportedly, they foster well-being by “quieting the ego” or, more specifically, curtailing self-enhancement. However, this ego-quieting effect contradicts an apparent psychological universal, the self-centrality principle. According to this principle, practicing any skill renders it self-central, and self-centrality breeds self-enhancement. We examined those opposing predictions in the first tests of mind-body practices’ self-enhancement effects. Experiment 1 followed 93 yoga students over 15 weeks, assessing self-centrality and self-enhancement after yoga practice (yoga condition, n = 246) and without practice (control condition, n = 231). Experiment 2 followed 162 meditators over 4 weeks (meditation condition: n = 246; control condition: n = 245). Self-enhancement was higher in the yoga (Experiment 1) and meditation (Experiment 2) conditions, and those effects were mediated by greater self-centrality. Additionally, greater self-enhancement mediated mind-body practices’ well-being benefits. Evidently, neither yoga nor meditation quiet the ego; instead, they boost self-enhancement.
---
Supplemental
S1. We assessed agentic narcissism with the Narcissistic Personality Inventory (Raskin & Terry, 1988), the most widely used measure of agentic narcissism (Gebauer, Sedikides, Verplanken, & Holland, 2012). We administered a 4-item short-form, analogous to our assessment of communal narcissism (see Experiment 1’s Method section in the main text). We selected items with a good item-total correlation, adequate content-breadth, and high face-validity. The four items were: “I like having authority over people,” “I am more capable than other people,” “I think I am a special person,” and “I like to be the center of attention” (1=does not apply at all, 7=applies completely) (.63≤ɑs≤.77, ɑ average=.71). We intermixed items assessing agentic and communal narcissism.
[...]
S2. In Experiment 1, we assessed self-centrality and self-enhancement with the following items.
Self-centrality: “Executing correctly the asanas (yoga positions) that we were taught is...,” “Focusing mindfully on the exercises across the whole yoga class is...,” “Holding the asanas (yoga positions) as long as we were taught is...,” and “Integrating the content taught in the yoga class into my everyday life is...” (1=not at all central to me, 11=central to me).
Better-than-average: “In comparison to the average participant of my yoga class, my ability to execute correctly the asanas (yoga positions) that we were taught is...,” “In comparison to the average participant of my yoga class, my ability to focus mindfully on the exercises across the whole yoga class is...,” “In comparison to the average participant of my yoga class, my ability to hold the asanas (yoga positions) as long as we were taught is...,” and “In comparison to the average participant of my yoga class, my ability to integrate the content taught in the yoga class into my everyday life is...” The rating-scale ranged from 1 (well below average) via 6 (average) to 11 (well above average).
Communal narcissism: “I have a very positive influence on others,” “I will be well known for the good deeds I will have done,” “I am the most caring person in my social surrounding,” and “I am going to bring peace and justice to the world” (1=does not apply at all, 7=applies completely).
Self-esteem: “At the moment, I have high self-esteem” (1=does not apply at all, 7=applies completely).
[...]
S7. In Experiment 2, we assessed self-centrality, self-enhancement, and well-being with the following items.
Self-centrality: The items started with the stem “How central is it for you...” and continued as follows: “...to be a loving person?,” “...to be free from hatred?,” “...to be a kindhearted person?,” “...to be free from greed?,” “...to be a caring person?,” “...to be free from bias?,” “...to be an understanding person?,” “...to be free from envy?,” “...to be a helpful person?,” “...to be free from egotism?” (1=not at all central to me, 81=very central to me).
Better-than-average: The items started with the stem “In comparison to the average participant of this study,...” and continued as follows: “...I am a loving person,” “...I am free from hatred,” “...I am a kindhearted person,” “...I am free from greed,” “...I am a caring person,” “...I am free from bias,” “...I am an understanding person,” “...I am free from envy,” “...I am a helpful person,” “...I am free from egotism” (1=very much below average, 81=very much above average).
Communal narcissism: We used the full 16-item Communal Narcissism Inventory, which can be found in Gebauer et al. (2012).
Self-esteem: We used the full 10-item Self-Esteem Scale, which can be found in Rosenberg (1965).
Hedonic well-being: We used the following nine items to assess hedonic well-being’s affective component. “I am happy,” “I am anxious” (reverse-coded), “I feel satisfied,” “I am depressed” (reverse-coded), “I feel positive,” “I am frustrated” (reverse-coded), “I am cheerful,” “I am upset” (reverse-coded), and “I feel blue” (reverse-coded). We used the full 5-item Satisfaction with Life Scale to assess hedonic well-being’s cognitive component (1=absolutely wrong, 81=absolutely right), and the items of that scale can be found in Diener, Emmons, Larsen, and Griffin (1985).
Eudemonic well-being: “I judge myself by what I think is important, not by the values of what others think is important,” “The demands of everyday life often get me down,” “For me, life has been a continuous process of learning, changing, and growth,” “Maintaining close relationships has been difficult and frustrating for me,” “Some people wander aimlessly through life, but I am not one of them,” “In many ways, I feel disappointed about my achievements in life,” “I tend to be influenced by people with strong opinions,” “In general, I feel I am in charge of the situation in which I live,” “I gave up trying to make big improvements or changes in my life a long time ago,” “People would describe me as a giving person, willing to share my time with others,” “I sometimes feel as if I’ve done all there is to do in life,” “I like most aspects of my personality” (1=absolutely wrong, 81=absolutely right).
Experiment 2 contained two additional dependent variables. We included them for a different project, and they are irrelevant to the present article (i.e., they did not tap into self-centrality, self-enhancement, or well-being). One measure was Neff’s (2003) Self-Compassion Scale in its 6-item short-form (Dyllick-Brenzinger, 2010). The other measure contained 10 vignettes. Each briefly described an ambiguous behavior that can be interpreted as a display of weakness or strength. For example, one vignette read: “If I am the first to apologize after a fight with my relationship partner, I display...” (1=weakness, 81=strength). Experiment 2 was the first study to administer this newly devised measure.
[...]
S10. Parallel to Experiment 1 (see S5), we tested the alternative explanation that the findings are driven by meditation beginners, who may not yet have acquired the experience and skill necessary for meditation to unfold its ego-quieting effect. Hence, we examined the cross-level interactions between meditation (vs. control) and expertise (i.e., years of practice) on self-centrality and on self-enhancement (g-factor). Expertise moderated neither the meditation effect on self-centrality, B=-.05, 95% CI [-.16, .05], SE=.05, t=-1.00, nor the meditation effect on self-enhancement, B=.001, 95% CI [-.09, .09], SE=.05, t=0.03. Once again, the results clearly favor the SCP-universal hypothesis over its alternative explanation.
Saturday, May 12, 2018
Adult Human Hippocampus: No New Neurons in Sight
Adult Human Hippocampus: No New Neurons in Sight. Jon I Arellano, Brian Harding, Jean-Leon Thomas. Cerebral Cortex, bhy106, https://doi.org/10.1093/cercor/bhy106
Abstract: In this issue of Cerebral Cortex, Cipriani et al. are following up on the recent report of Sorrells et al. to add novel immunohistological observations indicating that, unlike rodents, adult and aging humans do not acquire new neurons in the hippocampus. The common finding emerging from these 2 different, but almost simultaneous studies is highly significant because the dentate gyrus of the hippocampus was, until recently, considered as the only structure in the human brain that may continue neurogenesis throughout the full life span.
Keywords: adult neurogenesis, dentate gyrus, human hippocampus
---
During the lifetime of most vertebrate animals, there is continuous neuronal addition and/or turnover, but this seemingly useful capacity decreases drastically during evolution (e.g., Jacobson 1970). The classical neuroanatomists generally believed that, after the developmental period, which ends after puberty or sometimes during adolescence, the human neuronal assembly becomes stabilized (e.g., Ramon y Cajal 1913-1914). However, a paper published 2 decades ago in Nature Medicine (Eriksson et al. 1998) reported the detection, in brain neural cells, of bromodeoxyuridine (BrdU) initially administered to cancer patients for diagnostic purposes. This finding convinced a great number of scientists and lay people that the human hippocampus was no different from that of other mammalian species, as it seemed to also generate new neurons during the entire life span. This possibility has been considered by many as a promise for endogenous cell replacement therapies for aging and neurological diseases as well as for CNS injury repair.
Although some studies have urged caution in the interpretation of BrdU labeling, showing a toxic effect and its incorporation into non-dividing cells damaged by drugs or exposed to hypoxia/ischemia (e.g., Kuan et al. 2004; Breunig et al. 2007; Spector and Johanson 2007; Duque and Rakic 2015), the report by Eriksson et al. sparked the field and was followed by a number of studies that tried to ratify those results using immunohistochemical methods to identify markers of neurogenesis in postmortem human tissue. Those markers targeted progenitors (GFAP, Nestin, vimentin, Sox2), proliferating cells (Ki67, MCM2, PCNA), and immature neurons (DCX, PSA-NCAM, Tuj1). However, those studies have produced heterogeneous, inconclusive, and sometimes contradictory results. One important obstacle is the difficulty of obtaining well-preserved human tissue with a short postmortem delay, which is necessary for clear, reliable immunostaining. Another caveat is that many of those reports studied only one marker of neurogenesis, producing inconclusive results. For example, using only Ki67 or MCM2 to identify proliferating cells, without further characterization, is not a reliable way to assess neurogenesis, as the labeled cells might be producing oligodendrocytes or microglial cells (Reif et al. 2006; Knoth et al. 2010). The use of PCNA has added considerable confusion to the field, as it is an inconsistent and unreliable marker of proliferation (Reif et al. 2006; Sanai et al. 2007). Morphological analysis of the labeled cells is also essential because, for example, progenitor markers are shared with reactive astrocytes. Moreover, DCX and PSA-NCAM expression has been reported in small cells with scant cytoplasm (Knoth et al. 2010; Jin et al. 2004), a morphology that is not expected in immature, migratory neurons.
Spalding et al. (2013) used an alternative technique to assess cell renewal: measuring the C14 content of neurons in the hippocampus. The elevated C14 levels in hippocampal neurons were interpreted as the consequence of a high and sustained level of neurogenesis in the dentate gyrus throughout life, despite the difficulty of reconciling these data with other studies in humans (Knoth et al. 2010) and rodents (Ben Abdallah et al. 2010) showing that hippocampal neurogenesis undergoes an early exponential decline before reaching low, stable adult levels.
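The shape of the trajectory reported by Knoth et al. and Ben Abdallah et al. can be made concrete with a minimal sketch. The following is purely illustrative and is not taken from any of the cited papers; the function form (exponential decline to a low plateau) matches the qualitative description above, but all parameter values are hypothetical.

```python
import math

def neurogenesis_level(age_years, n0=100.0, n_adult=1.0, tau=1.5):
    """Modeled new-neuron production at a given age (arbitrary units).

    n0      -- hypothetical level at birth
    n_adult -- hypothetical low, stable adult plateau
    tau     -- hypothetical time constant of the early decline, in years
    """
    return n_adult + (n0 - n_adult) * math.exp(-age_years / tau)

# Under these illustrative parameters, the modeled level falls steeply in
# early childhood and is already close to the adult plateau by age 7,
# mirroring the qualitative pattern of an early exponential decline.
levels = {age: neurogenesis_level(age) for age in (0, 1, 7, 20, 60)}
```

Such a model is only a caricature, but it captures why a steep childhood decline is hard to reconcile with an interpretation of sustained, lifelong neurogenesis.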
At the time of writing this commentary, a new report has been published supporting the model that neurogenesis in the human hippocampus persists throughout adulthood (Boldrini et al. 2018). The authors of this study have, however, based their conclusion on disputable interpretations of immunolabeled cell types: some of the DCX- and PSA-NCAM-positive entities shown belong to the category of small, rounded cells described above, far from the typical elongated morphology of newly generated neurons. In addition, the cells identified as neuronal progenitors are Nestin- and GFAP-positive cells that lack the characteristic polarized, radial-like morphology of progenitors and instead look very much like astrocytes.
It is clear that, irrespective of the caveats, there is widespread enthusiasm about the prospect of adult neurogenesis in humans and its therapeutic possibilities. As far as we know, only one report, published in 2016 (Dennis et al. 2016), has found hippocampal neurogenesis in adult humans to be negligible. Its authors found only an insignificant number of proliferating progenitors, which corresponded to microglial cells, and scarce DCX-expressing cells in the adult human hippocampus.
It is therefore quite a coincidence that, almost simultaneously, 2 independent papers from different parts of the world have used a similar approach and methodology, leading to converging results and similar conclusions: hippocampal neurogenesis in humans decays exponentially during childhood and is absent or negligible in the adult. Those 2 papers are Sorrells et al. (2018), from the lab of Alvarez-Buylla in the USA, published in March in Nature, and the study by Cipriani and coworkers, from Adle-Biassette's lab in France, published in this issue of Cerebral Cortex (2018; 27: 000–000).
Cipriani et al. used a large battery of antibodies to identify progenitors, cell proliferation, and differentiating neurons, as well as their glial and vascular environment, in the human hippocampus from early gestation to aging adults. As expected, they found abundant proliferating progenitors and newly generated neurons in the hippocampus during gestation, but they also observed a sharp decline of all the neurogenic markers after birth. It is worth noting that the analysis of hippocampal tissues at gestational and perinatal stages clearly demonstrated the presence of numerous proliferating progenitors and immature neurons. Those numbers, however, decreased rapidly in early infancy, and by the age of 7 the authors detected only a few progenitors, without significant proliferation, and no cells colabeled for DCX and Tuj1. In adults, a single cell co-expressing Nestin and Ki67 was found across 19 samples, and the only DCX-expressing cells detected displayed a non-neuronal morphology (small nucleus, scant cytoplasm).
Interestingly, Cipriani et al. show almost identical results to those of Sorrells, Paredes, and coworkers. Both papers provide a high-quality analysis of developmental and adult neurogenesis in the human hippocampus, and Sorrells et al. performed an exhaustive analysis, including transcriptome data, that is without a doubt the most comprehensive to date. They combined well-preserved human postmortem material with surgically resected hippocampi, to control for possible postmortem effects, and used an extensive battery of antibodies complemented by electron microscopy analyses. As a key condition for the reliability of their study, they followed stringent criteria to identify differentiating neurons, based on cellular morphology and the colocalization of DCX and PSA-NCAM. According to both Cipriani et al. and Sorrells, Paredes et al., DCX+ differentiating neurons were absent from adult hippocampus samples. Both, however, found small DCX+ cells with scant cytoplasm in adult samples, which Sorrells, Paredes et al. showed to express oligodendroglial and microglial markers, extending previous data on DCX expression in glial cells (Verwer et al. 2007; Zhang et al. 2014). Additionally, they analyzed the formation of the subgranular zone (SGZ) in humans and concluded that there is in fact no SGZ compartment in the human dentate gyrus comparable to that of rodents. The SGZ is an important player, as it is the specific niche where postnatal progenitors coalesce and proliferate in all other mammals exhibiting adult neurogenesis. Thus, the lack of an SGZ may explain the lack of adult neurogenesis in the human dentate gyrus.
As pointed out by Cipriani and coworkers in this issue, the rationale behind their study is to shed light on the neurogenic potential of the human hippocampus, as “little information about human adult neurogenesis and neural stem/progenitor cells exists to justify the investment of resources in developing new treatments in humans, and most of the available evidence is inconclusive or contradictory.” Their study, together with Sorrells' and Dennis', certainly brings solid and consistent arguments to inform the field about a real possibility: the human species is once again different, and no significant neurogenesis occurs in the adult human hippocampus. This finding is in tune with the lack of subventricular neurogenesis and of migration of new neurons to the olfactory bulb in the adult human, which has already been consistently reported (Sanai et al. 2011; Wang et al. 2011; Bergmann et al. 2012).
Finally, the absence of significant neurogenesis in normal adult humans poses a logical question: why would the human brain lose what seems to be a useful ability, namely to add, renew, and regenerate neurons? As well formulated in the last sentence of the abstract of Sorrells' paper: “The early decline in hippocampal neurogenesis raises questions about how the function of the dentate gyrus differs between humans and other species in which adult hippocampal neurogenesis is preserved.” However, as pointed out by Rakic (1985), there may be an advantage in keeping one's old neurons without adding new ones when the aim is to acquire and preserve complex knowledge over many decades of life. Stability in the neuronal population over a lifetime may not be a bad thing after all.
Abstract: In this issue of Cerebral Cortex, Cipriani et al. follow up on the recent report of Sorrells et al. to add novel immunohistological observations indicating that, unlike rodents, adult and aging humans do not acquire new neurons in the hippocampus. The common finding emerging from these 2 different, but almost simultaneous, studies is highly significant because the dentate gyrus of the hippocampus was, until recently, considered the only structure in the human brain that might continue neurogenesis throughout the full life span.
Keywords: adult neurogenesis, dentate gyrus, human hippocampus
---
75 years ago… New York Times debunks (vs. foments) a health scare
75 years ago… New York Times debunks (vs. foments) a health scare. By Steven Milloy
Junkscience, May 2018, https://junkscience.com/2018/05/75-years-ago-new-york-times-debunks-vs-foments-a-health-scare/
Friday, May 11, 2018
The sunk-cost fallacy—pursuing an inferior alternative merely because we have previously invested significant, but nonrecoverable, resources in it—is a striking violation of rational decision making, and it can appear even when the costs are borne by someone other than the decision maker
The Interpersonal Sunk-Cost Effect. Christopher Y. Olivola. Psychological Science, https://doi.org/10.1177/0956797617752641
Abstract: The sunk-cost fallacy—pursuing an inferior alternative merely because we have previously invested significant, but nonrecoverable, resources in it—represents a striking violation of rational decision making. Whereas theoretical accounts and empirical examinations of the sunk-cost effect have generally been based on the assumption that it is a purely intrapersonal phenomenon (i.e., solely driven by one’s own past investments), the present research demonstrates that it is also an interpersonal effect (i.e., people will alter their choices in response to other people’s past investments). Across eight experiments (N = 6,076) covering diverse scenarios, I documented sunk-cost effects when the costs are borne by someone other than the decision maker. Moreover, the interpersonal sunk-cost effect is not moderated by social closeness or whether other people observe their sunk costs being “honored.” These findings uncover a previously undocumented bias, reveal that the sunk-cost effect is a much broader phenomenon than previously thought, and pose interesting challenges for existing accounts of this fascinating human tendency.
Keywords: decision making, heuristics, preferences, social influence, open data, open materials, preregistered
Tools do not erase but rather extend our intrinsic physical and cognitive skills; this extension is task specific because we found no evidence for superusers, benefitting from the use of a tool irrespective of the task
Osiurak, F., Navarro, J., Reynaud, E., & Thomas, G. (2018). Tools don’t—and won’t—make the man: A cognitive look at the future. Journal of Experimental Psychology: General, 147(5), 782-788. http://dx.doi.org/10.1037/xge0000432
Abstract: The question of whether tools erase cognitive and physical interindividual differences has been surprisingly overlooked in the literature. Yet if technology is profusely available in a near or far future, will we be equal in our capacity to use it? We sought to address this unexplored, fundamental issue, asking 200 participants to perform 3 physical (e.g., fine manipulation) and 3 cognitive tasks (e.g., calculation) in both non–tool-use and tool-use conditions. Here we show that tools do not erase but rather extend our intrinsic physical and cognitive skills. Moreover, this phenomenon of extension is task specific because we found no evidence for superusers, benefitting from the use of a tool irrespective of the task concerned. These results challenge the possibility that technical solutions could always be found to make people equal. Rather, technical innovation might be systematically limited by the user’s initial degree of knowledge or skills for a given task.
Thursday, May 10, 2018
The Napoleon Complex: When Shorter Men Take More
The Napoleon Complex: When Shorter Men Take More. Jill E. P. Knapen, Nancy M. Blaker, Mark Van Vugt. Psychological Science, https://doi.org/10.1177/0956797618772822
Abstract: Inspired by an evolutionary psychological perspective on the Napoleon complex, we hypothesized that shorter males are more likely to show indirect aggression in resource competitions with taller males. Three studies provide support for our interpretation of the Napoleon complex. Our pilot study shows that men (but not women) keep more resources for themselves when they feel small. When paired with a taller male opponent (Study 1), shorter men keep more resources to themselves in a game in which they have all the power (dictator game) versus a game in which the opponent also has some power (ultimatum game). Furthermore, shorter men are not more likely to show direct, physical aggression toward a taller opponent (Study 2). As predicted by the Napoleon complex, we conclude that (relatively) shorter men show greater behavioral flexibility in securing resources when presented with cues that they are physically less competitive. Theoretical and practical implications are discussed.
Keywords: Napoleon complex, human height, status, behavioral flexibility, indirect aggression, open data
China's Social Credit System: An Evolving Practice of Control
Creemers, Rogier, China's Social Credit System: An Evolving Practice of Control (May 9, 2018). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3175792
Abstract: The Social Credit System (SCS) is perhaps the most prominent manifestation of the Chinese government's intention to reinforce legal, regulatory and policy processes through the application of information technology. Yet its organizational specifics have not yet received academic scrutiny. This paper will identify the objectives, perspectives and mechanisms through which the Chinese government has sought to realise its vision of "social credit". Reviewing the system's historical evolution, institutional structure, central and local implementation, and relationship with the private sector, this paper concludes that it is perhaps more accurate to conceive of the SCS as an ecosystem of initiatives broadly sharing a similar underlying logic, than a fully unified and integrated machine for social control. It also finds that, intentions with regards to big data and artificial intelligence notwithstanding, the SCS remains a relatively crude tool. This may change in the future, and this paper suggests the dimensions to be studied in order to assess this evolution.
School Progress Among Children of Same-Sex Couples
School Progress Among Children of Same-Sex Couples. Caleb S. Watkins. Demography, https://link.springer.com/article/10.1007/s13524-018-0678-3
Abstract: This study uses logit regressions on a pooled sample of children from the 2012, 2013, and 2014 American Community Survey to perform a nationally representative analysis of school progress for a large sample of 4,430 children who reside with same-sex couples. Odds ratios from regressions that compare children between different-sex married couples and same-sex couples fail to show significant differences in normal school progress between households across a variety of sample compositions. Likewise, marginal effects from regressions that compare children with similar family dynamics between different-sex married couples and same-sex couples fail to predict significantly higher probabilities of grade retention for children of same-sex couples. Significantly lower grade retention rates are sometimes predicted for children of same-sex couples than for different-sex married couples, but these differences are sensitive to sample exclusions and do not indicate causal benefits to same-sex parenting.
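The odds-ratio comparison the abstract relies on can be illustrated with a minimal sketch. The counts below are hypothetical and are not the paper's data; the point is only that an odds ratio close to 1 corresponds to the "no significant difference" finding reported.

```python
def odds_ratio(retained_a, total_a, retained_b, total_b):
    """Odds ratio of grade retention in group A relative to group B.

    Odds for a group = retained / not retained; the odds ratio divides
    group A's odds by group B's.
    """
    odds_a = retained_a / (total_a - retained_a)
    odds_b = retained_b / (total_b - retained_b)
    return odds_a / odds_b

# Hypothetical counts with similar retention rates in both groups:
# the resulting odds ratio is close to 1, i.e., no detectable difference
# in normal school progress between household types.
or_est = odds_ratio(50, 1000, 220, 4430)
```

In the logit regressions the paper describes, such odds ratios are obtained by exponentiating the fitted coefficients rather than from raw counts, but the interpretation is the same.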
Despite being more egalitarian, men with more education are more likely to have careers that give them privileged status in their marriages and may have “more to lose” in their career by changing their name. Men with less education than their wives are less likely to change their surname
Emily Fitzgibbons Shafer et al, Flipping the (Surname) Script: Men's Nontraditional Surname Choice at Marriage, Journal of Family Issues (2018). DOI: 10.1177/0192513X18770218
Abstract: Using unique, nationally representative data that asks individuals about their surname choice in marriage, we explore heterosexual men’s nontraditional surname choice. We focus on how education—both absolute and relative to wives’—correlates with nontraditional surname choice. Following class-based masculinities theory, we find that men with more education are less likely to choose a nontraditional surname. Despite being more egalitarian in attitudes, men with more education are more likely to have careers that give them privileged status in their marriages and may have “more to lose” in their career by changing their name. In addition, men with less education than their wives are less likely to change their surnames. We argue that this is consistent with compensatory gender display theory. Men having less education in marriage may translate into having less earning power, which is gender nonnormative as men are culturally expected to be primary breadwinners in marriage.
Participants applying for a qualified job emphasized their competence while downplaying their warmth; those role-playing as crime witnesses attenuated their warmth relative to their competence; those in the role of suspects of a severe crime chose to downplay their competence.
Lindholm, T., & Yzerbyt, V. (2018). When Being Nice or Being Smart Could Bring You Down: Compensatory Dynamics in Strategic Self-presentation. International Review of Social Psychology, 31(1), p. 16. DOI: http://doi.org/10.5334/irsp.136
Abstract: Research shows that the two fundamental dimensions of social perception, warmth and competence, are often negatively related in our perceptions of others, the so-called compensation effect. The current experiments investigate people’s use of such compensation when self-presenting strategically to reach a desired goal. In Experiment 1, participants applying for a qualified job emphasized their competence while downplaying their warmth. In Experiments 2 and 3, participants role-playing as crime witnesses similarly attenuated their warmth relative to their competence. In contrast, in Experiment 3, participants in the role of suspects of a severe crime chose to downplay their competence. Results suggest that self-presenters are sensitive to warmth-competence dynamics in social perception as a means to promote the optimal self-image given their specific goals.
Keywords: Strategic self-presentation, warmth, competence, social compensation
Age of Fathers, Mutation, and Reproduction
Age of Fathers, Mutation, and Reproduction. Martin Fieder and Susanne Huber. In The Oxford Handbook of Evolution, Biology, and Society (section: Evolution and Human Reproduction), edited by Rosemary L. Hopcroft. DOI: 10.1093/oxfordhb/9780190299323.013.29
Our DNA consists of roughly 3.2 billion base pairs (i.e., 3.2 billion pairs of adenine–thymine and guanine–cytosine covering the genomic information of humans, most of (p. 486) whose functions we do not yet understand) that, together with epigenetic signature, make us different from each other. Currently, we have only a relatively limited understanding of the phenotypical outcomes of our genetic makeup (Jobling, Hurles, & Tyler-Smith, 2013). Clearly, human genetics is extraordinarily complex. Nevertheless, there is no doubt that these variations in the DNA make some of us better adapted than others to certain environments. Those better adapted individuals (in the respective environments) eventually end up with more descendants. Due to the reproductive benefits for those better adapted individuals, the genetic information associated with this beneficial phenotype will spread in a population. Adaptation, however, always refers to the current environment. If the environmental conditions change, then a successful adaptation to the original environment may have no or even negative consequences on fertility. Such a maladaptive condition decreases the reproductive success of its carrier or, in the worst-case scenario, causes that lineage to die out.
Most mutations are thought to be neutral—that is, exerting no or hardly detectable effects on the phenotype—and therefore have no immediate adaptive value. Other mutations are harmful, especially if they occur in protein-encoding DNA sequences, leading to an altered protein. A small number of mutations, however, may ultimately lead to a phenotype better adapted than others to its current environment. Such a phenotype will be favored by selection. The actual rate of harmful, neutral, or positive mutations, however, remains difficult to estimate (Keightley, 2012), particularly the rate of mutations that are positively selected for. In two Drosophila populations, Schneider, Charlesworth, Eyre-Walker, and Keightley (2011) estimated the rate of positively selected mutations in amino acid coding sequences (i.e., non-synonymous mutations) to be between 1% and 2% of all occurring mutations.
Where do most of the mutations come from? The very recently discovered answer in humans is impressive—from the age of the father (Kong et al., 2012). According to Kong et al., the father’s age explains nearly all newly occurring (i.e., de novo) mutations in a child. Correspondingly, detrimental parental age effects have been demonstrated for a variety of Mendelian and mental disorders and even for educational attainment (for a review, see D’Onofrio et al., 2014). The reason is that in contrast to women, in whom all cell divisions in the egg are completed before birth, men continue producing sperm throughout their reproductive lives. Consequently, the number of cell divisions and chromosome replications that a sperm cell has gone through increases with the age at which the sperm is produced. This increases the risk that “errors” occur in terms of mutations (Crow, 2000).
Because the mutations induced by male age occur randomly in the human genome, the probability that they directly affect reproductive functioning is relatively low: a detrimental mutation occurring somewhere in our genome does not necessarily affect reproductive functioning. In such cases, an individual could still reproduce normally even while carrying a potentially harmful mutation, passing it on to the next generation, where such mutations may accumulate over generations. It is thus conceivable that a mechanism exists that helps avoid excessive mutation loads in future generations. We suggest that mate selection may provide such a mechanism to (p. 487) prevent an excessively high mutation load. This view is supported by our recent findings based on a US sample (the Wisconsin Longitudinal Study), in which we demonstrated that children of older fathers are less attractive (Huber & Fieder, 2014). Moreover, offspring of older fathers face a higher risk of remaining unmarried and therefore childless (Fieder & Huber, 2015). Marriage was obligatory in this sample, making it a good indicator of mating success. Comparable findings based on large human data sets have confirmed our results (Hayward, Lummaa, & Bazykin, 2015; Arslan et al., 2016). Similar effects of paternal age have also been reported in animal species ranging from bulb mites (Prokop, Stuglik, Żabińska, & Radwan, 2007) to house sparrows (Schroeder, Nakagawa, Rees, Mannarelli, & Burke, 2015). We therefore suggest that this phenomenon reflects a more fundamental biological principle: An individual’s mutation load could affect mate selection, thus helping to reduce the mutation load of the progeny.
This view is also in line with the mutation–selection balance theory, proposing that a balance of forces between constantly arising, mildly harmful mutations and selection causes variation in genetic quality and phenotypic condition (Miller, 2000; Keller, 2008). This makes it unlikely that the accumulation of new deleterious mutations leads to a detectable fitness decline in current human populations (Keightley, 2012). The mutation–selection balance is assumed to be particularly important in traits influenced by many genetic loci (multigenic, such as human reproduction), providing a large target size for mutations (Keller, 2008).
Although most of the mutations induced by the age of the father are considered neutral or may be harmful, a small proportion of them are advantageous and provide fitness benefits. This raises an interesting question: Are we able to detect potentially promising mutations in a mate that may be adaptive in the long term? Detecting mutations that in the future may lead to an adaptive phenotype is unlikely. We therefore assume that this is probably a random process. Nevertheless, one can speculate that individuals choose extraordinary traits in potential mates—that is, traits that may be associated with newly induced mutations. The numerous examples include the peacock’s tail (Zahavi & Zahavi, 1999), bower birds (Uy & Borgia, 2000), as well as height (Stulp, Barrett, Tropf, & Mills, 2015) and social status in men (Fieder & Huber, 2007; Nettle & Pollet, 2008; Barthold, Myrskylä, & Jones, 2012; Hopcroft, 2015). If such traits carry adaptive benefits outweighing potentially negative impacts, then selection would favor both the carrier of those mutations and the carrier’s mating partners. Accordingly, mutations induced by a father’s age can also be viewed as a “driving force” of evolution. The reason is that without mutations, evolution would not have taken place at all, and without mutations introduced into the population by male age, evolution would at least have been much slower. The positive mutations induced by age might thus be considered an “engine of evolution,” leading to new phenotypes that could potentially be selected for.
Together with the usually higher status of older men, this positive effect might partially explain women’s preference for somewhat older men (Buss, 1989). Basically, this preference reflects a trade-off between benefits associated with higher status and possible detrimental mutations caused by higher paternal age that may be passed to (p. 488) the offspring. However, because some mutations may be adaptive, overall the benefits may outweigh the costs, at least if the age difference between spouses is not too large. Accordingly, women usually prefer men who are only moderately older than themselves (Buss, 1989; Buunk, Dijkstra, Fetchenhauer, & Kenrick, 2002; Schwarz & Hassebrauck, 2012).
Future studies may aim to measure the impact of mutations directly and not just indirectly via the age of fathers, examining, for instance, if there is any evidence for a potential link between father’s age, mutation rate, marriage fertility, and social status. According to D’Onofrio et al. (2014), higher paternal age is associated with lower educational attainment in the offspring. This finding suggests a possible association between de novo mutation rate and educational attainment, leading to the question whether social status goes beyond being solely culturally determined to also contain an inherited component. At least for educational attainment, this has recently been shown (Rietveld et al., 2013).
An empirical, 21st century evaluation of phrenology: The most rigorous evaluation to date says it is bogus
An empirical, 21st century evaluation of phrenology. O. Parker Jones, F. Alfaro-Almagro, S. Jbabdi. Cortex, https://doi.org/10.1016/j.cortex.2018.04.011
Abstract: Phrenology was a nineteenth century endeavour to link personality traits with scalp morphology, which has been both influential and fiercely criticised, not least because of the assumption that scalp morphology can be informative of underlying brain function. Here we test the idea empirically rather than dismissing it out of hand. Whereas nineteenth century phrenologists had access to coarse measurement tools (digital technology referring then to fingers), we were able to re-examine phrenology using 21st century methods and thousands of subjects drawn from the largest neuroimaging study to date. High-quality structural MRI was used to quantify local scalp curvature. The resulting curvature statistics were compared against lifestyle measures acquired from the same cohort of subjects, being careful to match a subset of lifestyle measures to phrenological ideas of brain organisation, in an effort to evoke the character of Victorian times. The results represent the most rigorous evaluation of phrenological claims to date.
Keywords: phrenology; MRI
Participants had to make the real-life decision to administer an electroshock to a single mouse or allow five other mice to receive the shock. Responses to hypothetical dilemmas are not predictive of real-life dilemma behavior
Of Mice, Men, and Trolleys: Hypothetical Judgment Versus Real-Life Behavior in Trolley-Style Moral Dilemmas. Dries H. Bostyn, Sybren Sevenhant, Arne Roets. Psychological Science, https://doi.org/10.1177/0956797617752640
Abstract: Scholars have been using hypothetical dilemmas to investigate moral decision making for decades. However, whether people’s responses to these dilemmas truly reflect the decisions they would make in real life is unclear. In the current study, participants had to make the real-life decision to administer an electroshock (that they did not know was bogus) to a single mouse or allow five other mice to receive the shock. Our results indicate that responses to hypothetical dilemmas are not predictive of real-life dilemma behavior, but they are predictive of affective and cognitive aspects of the real-life decision. Furthermore, participants were twice as likely to refrain from shocking the single mouse when confronted with a hypothetical versus the real version of the dilemma. We argue that hypothetical-dilemma research, while valuable for understanding moral cognition, has little predictive value for actual behavior and that future studies should investigate actual moral behavior along with the hypothetical scenarios dominating the field.
Keywords: morality, utilitarianism, trolley, consequentialism, open data, open materials
Wednesday, May 9, 2018
Human adults often show a preference for scarce over abundant goods. A study examined 4- and 6-year-old children as well as chimpanzees; only the 6-year-olds displayed such a preference, especially in the presence of competitors
The preference for scarcity: A developmental and comparative perspective. Maria John, Alicia P. Melis, Daniel Read, Federico Rossano, Michael Tomasello. Psychology & Marketing, https://doi.org/10.1002/mar.21109
Abstract: Human adults often show a preference for scarce over abundant goods. In this paper, we investigate whether this preference was shared by 4‐ and 6‐year‐old children as well as chimpanzees, humans’ nearest primate relative. Neither chimpanzees nor 4‐year‐olds displayed a scarcity preference, but 6‐year‐olds did, especially in the presence of competitors. We conclude that scarcity preference is a human‐unique preference that develops as humans increase their cognitive skills and social experiences with peers and competitors. We explore different potential psychological explanations for scarcity preference and conclude scarcity preference is based on children's fear of missing out an opportunity, especially when dealing with uncertainty or goods of unknown value in the presence of competitors. Furthermore, the results are in line with studies showing that supply‐based scarcity increases the desirability of hedonic goods, suggesting that even as early as 6 years of age humans may use scarce goods to feel unique or special.
Liberals wanted to feel more empathy and experienced more empathy than conservatives did. Liberals were also more willing to help others than conservatives were, in the United States and Germany, but not in Israel
Are Liberals and Conservatives Equally Motivated to Feel Empathy Toward Others? Yossi Hasson et al. Personality and Social Psychology Bulletin, https://doi.org/10.1177/0146167218769867
Abstract: Do liberals and conservatives differ in their empathy toward others? This question has been difficult to resolve due to methodological constraints and common use of ideologically biased targets. To more adequately address this question, we examined how much empathy liberals and conservatives want to feel, how much empathy they actually feel, and how willing they are to help others. We used targets that are equivalent in the degree to which liberals and conservatives identify with, by setting either liberals, conservatives, or ideologically neutral members as social targets. To support the generalizability of our findings, we conducted the study in the United States, Israel, and Germany. We found that, on average and across samples, liberals wanted to feel more empathy and experienced more empathy than conservatives did. Liberals were also more willing to help others than conservatives were, in the United States and Germany, but not in Israel. In addition, across samples, both liberals and conservatives wanted to feel less empathy toward outgroup members than toward ingroup members or members of a nonpolitical group.
Keywords: political ideology, empathy, motivation, emotion regulation
Is birth attendance a uniquely human feature? New evidence suggests that Bonobo females protect and support the parturient
Is birth attendance a uniquely human feature? New evidence suggests that Bonobo females protect and support the parturient. Elisa Demuru, Pier Francesco Ferrari, Elisabetta Palagi. Evolution and Human Behavior, https://doi.org/10.1016/j.evolhumbehav.2018.05.003
Abstract: Birth attendance has been proposed as a distinguishing feature of humans (Homo sapiens) and it has been linked to the difficulty of the delivery process in our species. Here, we provide the first quantitative study based on video-recordings of the social dynamics around three births in captive bonobos (Pan paniscus), human closest living relative along with the chimpanzee. We show that the general features defining traditional birth attendance in humans can also be identified in bonobos. As in humans, birth in bonobos was a social event, where female attendants provided protection and support to the parturient until the infant was born. Moreover, bystander females helped the parturient during the expulsive phase by performing manual gestures aimed at holding the infant. Our results on bonobos question the traditional view that the “obligatory” need for assistance was the main driving force leading to sociality around birth in our species. Indeed, birth in bonobos is not hindered by physical constraints and the mother is self-sufficient in accomplishing the delivery. Although further studies are needed both in captivity and in the wild, we suggest that the similarities observed between birth attendance in bonobos and humans might be related to the high level of female gregariousness in these species. In our view, the capacity of unrelated females to form strong social bonds and cooperate could have represented the evolutionary pre-requisite for the emergence of human midwifery.
Keywords: Pan paniscus; Delivery; Protection; Support; Female gregariousness; Human birth attendance
Taking ownership of implicit bias has mixed outcomes—at times amplifying the expression of explicit prejudice
The Mixed Outcomes of Taking Ownership for Implicit Racial Biases. Erin Cooley, Ryan F. Lei, Taylor Ellerkamp. Personality and Social Psychology Bulletin, https://doi.org/10.1177/0146167218769646
Abstract: One potential strategy for prejudice reduction is encouraging people to acknowledge, and take ownership for, their implicit biases. Across two studies, we explore how taking ownership for implicit racial bias affects the subsequent expression of overt bias. Participants first completed an implicit measure of their attitudes toward Black people. Then we either led participants to think of their implicit bias as their own or as stemming from external factors. Results revealed that taking ownership for high implicit racial bias had diverging effects on subsequent warmth toward Black people (Study 1) and donations to a Black nonprofit (Study 2) based on people’s internal motivations to respond without prejudice (Internal Motivation Scale [IMS]). Critically, among those low in IMS, owning high implicit bias backfired, leading to greater overt prejudice and smaller donations. We conclude that taking ownership of implicit bias has mixed outcomes—at times amplifying the expression of explicit prejudice.
Keywords: social cognition, implicit cognition, intergroup processes, attitudes
MHC-Dependent Mate Selection within 872 Spousal Pairs of European Ancestry from the Health and Retirement Study
MHC-Dependent Mate Selection within 872 Spousal Pairs of European Ancestry from the Health and Retirement Study. Zhen Qiao, Joseph E. Powell and David M. Evans. Genes 2018, 9(1), 53; doi:10.3390/genes9010053
Abstract: Disassortative mating refers to the phenomenon in which individuals with dissimilar genotypes and/or phenotypes mate with one another more frequently than would be expected by chance. Although the existence of disassortative mating is well established in plant and animal species, the only documented example of negative assortment in humans involves dissimilarity at the major histocompatibility complex (MHC) locus. Previous studies investigating mating patterns at the MHC have been hampered by limited sample size and contradictory findings. Inspired by the sparse and conflicting evidence, we investigated the role that the MHC region played in human mate selection using genome-wide association data from 872 European American spouses from the Health and Retirement Study (HRS). First, we treated the MHC region as a whole, and investigated genomic similarity between spouses using three levels of genomic variation: single-nucleotide polymorphisms (SNPs), classical human leukocyte antigen (HLA) alleles (both four-digit and two-digit classifications), and amino acid polymorphisms. The extent of MHC dissimilarity between spouses was assessed using a permutation approach. Second, we investigated fine scale mating patterns by testing for deviations from random mating at individual SNPs, HLA genes, and amino acids in HLA molecules. Third, we assessed how extreme the spousal relatedness at the MHC region was compared to the rest of the genome, to distinguish the MHC-specific effects from genome-wide effects. We show that neither the MHC region, nor any single SNPs, classic HLA alleles, or amino acid polymorphisms within the MHC region, were significantly dissimilar between spouses relative to non-spouse pairs. However, dissimilarity in the MHC region was extreme relative to the rest of genome for both spousal and non-spouse pairs. Despite the long-standing controversy, our analyses did not support a significant role of MHC dissimilarity in human mate choice.
Keywords: disassortative mating; non-random mating; major histocompatibility complex; human leukocyte antigen; mate selection
Mass–Elite Divides in Aversion to Social Change and Support for Donald Trump
Mass–Elite Divides in Aversion to Social Change and Support for Donald Trump. Matt Grossmann, Daniel Thaler. American Politics Research, https://doi.org/10.1177/1532673X18772280
Abstract: Donald Trump won the American presidency in 2016 by overperforming expectations in upper Midwest states, surprising even Republican political elites. We argue that attitudes toward social change were an underappreciated dividing line between supporters of Trump and Hillary Clinton as well as between Republicans at the mass and elite levels. We introduce a concept and measure of aversion to (or acceptance of) social diversification and value change, assess the prevalence of these attitudes in the mass public and among political elites, and demonstrate its effects on support for Trump. Our research uses paired surveys of Michigan’s adult population and community of political elites in the Fall of 2016. Aversion to social change is strongly predictive of support for Trump at the mass level, even among racial minorities. But attitudes are far more accepting of social change among elites than the public and aversion to social change is not a factor explaining elite Trump support. If elites were as averse to social change as the electorate—and if that attitude mattered to their vote choice—they might have been as supportive of Trump. Views of social change were not as strongly related to congressional voting choices.
Keywords: political parties, vote choice, political elites, racial resentment, diversity
---
We sought to assess the relationship between attitudes toward social change and vote preference. Our measure of aversion to change is an additive scale made up of two components—respondents’ level of agreement with a pair of statements about changing cultural values:
1. “Our country is changing too fast, undermining traditional American values.”
2. “By accepting diverse cultures and lifestyles, our country is steadily improving.”
[...]
Our major dependent variable of interest, vote preference, is a three-category ordinal variable created from a survey item asking respondents which of the two major candidates they most support for the presidency in 2016. Each of these variables takes on a value of 0 if the respondent preferred Clinton, a value of 1 if the respondent preferred Trump, and a value of 0.5 if the respondent preferred another candidate or could not decide. A similar variable records the respondent’s preference between the major party candidates in their local congressional election.
Our measure of authoritarian attitudes is based on a measure used by Feldman and Stenner (1997). We constructed a 3-point scale from 0 to 1 from two binary items that asked respondents to choose which of a given pair of personal qualities is more important for a child to have: obedience versus self-reliance, and independence versus respect for elders. Preference for obedience and respect for elders were considered the more authoritarian choices. Our measure of racial resentment is a 9-point scale from 0 to 1 constructed from respondents’ reported level of agreement or disagreement with two statements about race—one positing that African Americans should overcome prejudice and work their way up without any special favors like some other minority groups did, and one (coded in the opposite direction) positing that generations of slavery and discrimination make it difficult for African Americans to work their way up financially. Higher values indicate higher levels of resentment.
Ethnocentrism is measured using a set of “feeling thermometer” questions for particular racial and religious groups, comparing the respondent’s rating of Whites to their rating of Blacks, Hispanics and Latinos, and Muslims. In particular, the variable is coded as the average difference between the score given by the respondent to “Whites” and the score the respondent gave to each of the three minority groups (rescaled from 0 to 1). Minority respondents are coded as having values of 0 on this ethnocentrism scale. Calculating the ethnocentrism of non-White respondents the same way does not change our conclusions in any significant way.
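The variable codings described above can be sketched in a few lines of Python. This is a hypothetical reconstruction from the prose, not the authors' actual code: the function names, the assumed 0–100 thermometer scale, and the linear rescaling of the average gap to the unit interval are assumptions not spelled out in the excerpt.

```python
def vote_preference(choice):
    """Three-category ordinal coding of presidential vote preference.

    0 = Clinton, 1 = Trump, 0.5 = another candidate or undecided.
    """
    return {"clinton": 0.0, "trump": 1.0}.get(choice, 0.5)


def ethnocentrism(white, black, hispanic, muslim, is_minority=False):
    """Average (Whites - minority group) thermometer gap, rescaled to 0-1.

    Thermometer ratings are assumed to run 0-100, so the raw average gap
    lies in [-100, 100]; a linear rescale maps that to [0, 1] (an assumed
    rescaling). Minority respondents are coded 0 by construction, per the
    text.
    """
    if is_minority:
        return 0.0
    avg_gap = ((white - black) + (white - hispanic) + (white - muslim)) / 3
    return (avg_gap + 100) / 200
```

For example, a respondent rating Whites at 80 and each minority group at 50 has an average gap of 30, which rescales to 0.65; a respondent rating all groups equally scores 0.5, the scale midpoint.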
From 2004: Both males and females whose voices were rated as attractive had sex at an earlier age, had more sexual partners, more extra-pair copulation partners, and more sexual partners that were involved in a relationship with another person
Ratings of voice attractiveness predict sexual behavior and body configuration. Susan M. Hughes, Franco Dispenza, Gordon G. Gallup Jr. Evolution and Human Behavior 25 (2004) 295–304.
Abstract: We investigated the relationship between ratings of voice attractiveness and sexually dimorphic differences in shoulder-to-hip ratios (SHR) and waist-to-hip ratios (WHR), as well as different features of sexual behavior. Opposite-sex voice attractiveness ratings were positively correlated with SHR in males and negatively correlated with WHR in females. For both sexes, ratings of opposite-sex voice attractiveness also predicted reported age of first sexual intercourse, number of sexual partners, number of extra-pair copulation (EPC) partners, and number of partners that they had intercourse with that were involved in another relationship (i.e., were themselves chosen as an EPC partner). Coupled with previous findings showing a relationship between voice attractiveness and bilateral symmetry, these results provide additional evidence that the sound of a person’s voice may serve as an important multidimensional fitness indicator.
---
Both males and females whose voices were rated as attractive had sex at an earlier age, had more sexual partners, more EPC partners, and more sexual partners that were involved in a relationship with another person. It is interesting that voice attractiveness ratings by members of the opposite sex were better predictors of sexual behavior than ratings by members of the same sex. Aside from Wilson (1984), who noted that lower voiced male opera singers were more inclined to have sexual affairs with fellow singers, our findings are the first to empirically implicate the existence of a relationship between voice and sexual behavior.
Individuals with attractive voices are perceived more favorably and as having more desirable personality characteristics (Zuckerman & Driver, 1989). Furthermore, the higher the ratings of voice attractiveness, the more the speaker is judged to be similar to the rater and the more the rater would like to affiliate with the speaker (Miyake & Zuckerman, 1993). This “vocal attractiveness stereotype” (Zuckerman & Driver, 1989; Zuckerman, Hodgins, & Miyake, 1990) may promote sexual opportunities. Although Zuckerman and Driver (1989) did not find an effect, Collins and Missing (2003) report a substantial correlation between ratings of voice attractiveness and of facial attractiveness in women. Therefore, since ratings of facial attractiveness predict semen quality in males (Soler et al., 2003) and longevity in both males and females (Henderson & Anglin, 2003), voice attractiveness may be an indicator (albeit indirect) of other fitness-related features as well.
Also: Men's voices and women's choices. Sarah A.Collins. Animal Behaviour, Volume 60, Issue 6, December 2000, Pages 773-780. https://doi.org/10.1006/anbe.2000.1523
Abstract: I investigated the relationship between male human vocal characteristics and female judgements about the speaker. Thirty-four males were recorded uttering five vowels and measures were taken, from power spectrums, of the first five harmonic frequencies, overall peak frequency and formant frequencies (emphasized, resonance, frequencies within the vowel). Male body measures were also taken (age, weight, height, and hip and shoulder width) and the men were asked whether they had chest hair. The recordings were then played to female judges, who were asked to rate the males' attractiveness, age, weight and height, and to estimate the muscularity of the speaker and whether he had a hairy chest. Men with voices in which there were closely spaced, low-frequency harmonics were judged as being more attractive, older and heavier, more likely to have a hairy chest and of a more muscular body type. There was no relationship between any vocal and body characteristic. The judges' estimates were incorrect except for weight. They showed extremely strong agreement on all judgements. The results imply that there could be sexual selection through female choice for male vocal characteristics, deeper voices being preferred. However, the function of the preference is unclear given that the estimates were generally incorrect.