Empathy, Exploitation, and Adolescent Bullying Perpetration: a Longitudinal Social-Ecological Investigation. Ann H. Farrell, Anthony A. Volk, Tracy Vaillancourt. Journal of Psychopathology and Behavioral Assessment, December 4 2019. https://link.springer.com/article/10.1007/s10862-019-09767-6
Abstract: Empathy has often been negatively associated with bullying perpetration, whereas tendencies to be exploitative have been relatively understudied with bullying. Empathic concern and exploitation may also indirectly link distal social-ecological factors to bullying perpetration. Therefore, the associations among personality (i.e., empathic concern, exploitation), self-perceived social-ecological factors (school bonding, social resources), and bullying perpetration were examined in a sample of 531 adolescents across three years of high school in Ontario, Canada (i.e., Grades 9 to 11; mean age 14.96 [SD = 0.37] in Grade 9). As expected, exploitation had concurrent and longitudinal associations with bullying, but unexpectedly empathic concern only had concurrent associations and no longitudinal associations with bullying. Also as expected, exploitation indirectly linked self-perceived social resources to bullying perpetration, but unexpectedly there were no indirect effects with empathic concern. Findings suggest a complex social ecology whereby a lack of empathic concern may remain an important correlate of bullying within each year of high school, whereas exploitative tendencies may be an important predictor of bullying across the high school years, including to strategically leverage self-perceived social resources.
Keywords: Bullying; Adolescents; Exploitation; Empathic concern; Social ecology
From the main author's PhD thesis, excerpts of the general discussion (Chapter 5):
Increasing evidence supports the suggestion that adolescence may be a
developmental period when bullying can be adaptively used to acquire material, social,
and romantic resources (Volk, Dane, & Marini, 2014). Bullying may be adaptive under a
specific combination of proximate intrinsic and distal extrinsic social ecological factors.
In particular, genetically influenced personality traits may indirectly link broader
environments to adolescent bullying. The purpose of this dissertation was to investigate
the associations between exploitative personality traits and broader social ecologies
(family, peers, school, community, and economic) to see how they independently and
indirectly facilitated adolescent bullying perpetration. These associations were examined
concurrently, longitudinally, and experimentally in three studies. My prediction that the
broader social environments would filter through exploitative personality traits to
indirectly associate with bullying perpetration was largely supported throughout these
three studies.
In Study 1, I found that environmental variables from three different ecological
systems (micro-, meso-, and macro-) were concurrently associated with both direct (i.e.,
physical, verbal) and indirect (i.e., social) forms of adolescent bullying primarily through
a trait capturing exploitation (i.e., lower Honesty-Humility). Direct bullying also had
indirect associations from social ecological variables through a trait capturing
recklessness (i.e., lower Conscientiousness). To build on Study 1, I examined
personality-environment associations in a sample of adolescents longitudinally. I found
that exploitation, but not empathy, was longitudinally associated with bullying
perpetration across the first three years of high school. Additionally, social ecological
variables, in particular social status and family functioning, were longitudinally
associated with exploitation, and social status was indirectly longitudinally associated
with bullying through exploitation. Finally, given that Studies 1 and 2 were correlational,
in Study 3, I examined whether bullying perpetration could be simulated through point
allocations in economic games in a laboratory setting. I found that economic games can
be a novel way to experimentally investigate bullying perpetration. Self-reported bullying
and selfish Dictator Game point allocations were related to one another and to an
exploitative personality trait (i.e., lower Honesty-Humility). Also, the associations
between the environment and both forms of behavior were indirectly facilitated through
this exploitative trait. Together, these three studies revealed two overall themes in the
social ecology of adolescent bullying perpetration. First, these studies demonstrated the
significance of the role of exploitative personality traits, as opposed to a lack of empathy,
general disagreeableness, or impulsivity, within the context of adaptive adolescent
bullying. Second, these three studies demonstrated a complex social ecology of bullying,
whereby broader social environments from multiple ecological systems can indirectly
facilitate bullying perpetration through exploitative personality traits.
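The "indirect association" logic used throughout these studies is the product-of-coefficients approach of mediation analysis: an environment-to-trait path (a) multiplied by a trait-to-bullying path (b), the latter estimated while controlling for the environment. A minimal sketch in Python, using simulated data and hypothetical variable names rather than the thesis's actual measures:

    # Minimal mediation sketch (simulated data; variable names hypothetical):
    # environment -> exploitative trait -> bullying.
    # Indirect effect = a * b, with a percentile-bootstrap CI.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    environment = rng.normal(size=n)
    exploitation = 0.4 * environment + rng.normal(size=n)                    # path a
    bullying = 0.5 * exploitation + 0.1 * environment + rng.normal(size=n)   # paths b, c'

    def indirect_effect(env, med, out):
        a = np.polyfit(env, med, 1)[0]                   # env -> mediator
        X = np.column_stack([np.ones_like(env), med, env])
        b = np.linalg.lstsq(X, out, rcond=None)[0][1]    # mediator -> outcome, given env
        return a * b

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(indirect_effect(environment[idx], exploitation[idx], bullying[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(environment, exploitation, bullying):.3f}, "
          f"95% CI [{lo:.3f}, {hi:.3f}]")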
Antisocial Personality and Bullying Perpetration: The Importance of Exploitation
Across all three studies, it was evident that traits capturing exploitation were the
most prominent personality correlates of adolescent bullying perpetration. In both Studies
1 and 3, lower Honesty-Humility was significantly associated with higher bullying
perpetration and selfish Dictator Game point allocations (i.e., an experimental proxy for
bullying). In Study 2, higher exploitation was longitudinally associated with bullying
perpetration. These results are consistent with previous concurrent associations between
adolescent bullying and Honesty-Humility (e.g., Book, Volk, & Hosker, 2012; Farrell,
Della Cioppa, Volk, & Book, 2014), experimental studies on economic game behavior
and Honesty-Humility (e.g., Hilbig, Thielmann, Hepp, Klein, & Zettler, 2015; Hilbig,
Thielmann, Wührl, & Zettler, 2015; Hilbig & Zettler, 2009; Hilbig, Zettler, Leist, &
Heydasch, 2013), and finally longitudinal studies on bullying perpetration and narcissism
(i.e., composed of exploitation and self-superiority; Fanti & Henrich, 2015). It was
evident that adolescents may be strategically exploiting weaker and vulnerable peers to
maximize self-gain, while minimizing costs like victim retaliation. More importantly, my
results demonstrate that a predatory, exploitative tendency may be the most relevant
personality risk factor for engaging in bullying, over and above other personality traits
related to antisocial tendencies.
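As a rough illustration of how a Dictator Game allocation can serve as an experimental proxy and be related to a trait measure, here is a hedged sketch with simulated scores (the scales, effect size, and variable names are invented, not taken from Study 3):

    # Sketch: correlating Dictator Game 'points kept' with Honesty-Humility.
    # All numbers are simulated for illustration only.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(7)
    n = 200
    honesty_humility = rng.normal(3.2, 0.6, n)        # e.g., a 1-5 Likert scale mean
    # Lower Honesty-Humility -> keep more of 10 points (plus noise)
    points_kept = np.clip(
        8 - 1.5 * (honesty_humility - 3.2) + rng.normal(0, 1.5, n), 0, 10)

    r, p = pearsonr(honesty_humility, points_kept)
    print(f"r = {r:.2f}, p = {p:.4f}")  # expect a negative correlation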
In contrast to previous studies that found bullying perpetration is often associated
with personality traits such as a lack of empathy, a general tendency to be disagreeable or
angry, and higher impulsivity (e.g., Bollmer, Harris, & Milich, 2006; Caravita, Di Blasio,
& Salmivalli, 2009; Tani, Greenman, Schneider, & Fregoso, 2003), I found lower
Honesty-Humility and higher exploitativeness were associated with bullying, even after
controlling for these other antisocial personality traits. In both Studies 1 and 3, I found
that lower Honesty-Humility was the strongest correlate of bullying perpetration, over
and above the other HEXACO personality traits. Although indirect and direct forms of
bullying and Dictator Game point allocations were both related to lower
Agreeableness (and, in Study 1, additionally to lower Conscientiousness), Honesty-
Humility was the strongest correlate. Thus, it appears that predatory exploitation over
weaker individuals may be the driving personality factor facilitating bullying, even if the
other antisocial traits are still important and associated with bullying. These results are
consistent with recent findings that although Honesty-Humility, Emotionality,
Agreeableness, and Conscientiousness from the HEXACO were all associated with
antisocial tendencies, Honesty-Humility was the largest, driving contributor to
antisociality (Book et al., 2016; Book, Visser, & Volk, 2015; Hodson et al., 2018).
However, it is important to note that lower Conscientiousness was additionally a
significant multivariate predictor of direct, but not indirect, bullying. This result
demonstrates that, in addition to reflecting strategic exploitation, direct forms of
bullying like hitting, pushing, or kicking may reflect a risky form of antisocial behavior
that is associated with a general recklessness (Volk et al., 2014). These indirect
associations are also consistent with theories of a faster life history, which posit that
certain individuals who experience competitive or adverse social environments may be
more likely to engage in more impulsive and aggressive behavior to obtain immediate,
short-term access to resources, and bullying may be one behavior that can reflect this
strategy (Dane, Marini, Volk, & Vaillancourt, 2017; Del Giudice & Belsky, 2011;
Hawley, 2011). Interestingly, unlike Agreeableness and Conscientiousness, which had at
least univariate associations with bullying, lower Emotionality or lower empathy had the
fewest univariate and multivariate associations with adolescent bullying.
Although it contrasts with prevalent theories linking lower empathy with
bullying, this lack of association agrees with more contemporary theories of adolescent
bullying as an adaptive, predatory strategy. In Study 1, lower Emotionality was not
significantly related at either the univariate or multivariate levels with bullying, and in
Study 2, lower empathy was only concurrently, but not longitudinally, related with higher
bullying. My results are in contrast to those of previous researchers who found significant
associations between child bullying and lower empathy (e.g., Caravita et al., 2009; Zych,
Ttofi, & Farrington, 2016). Rather, my findings support my prediction that, instead of
lack of emotional recognition or response, a predatory exploitation of others’ weaknesses
may be an important reason why adolescents bully. This may be one potential reason why
empathy-related interventions may have been largely ineffective for adolescents (Yeager,
Fong, Lee, & Espelage, 2015). Taken together, all three studies not only support existing
literature on the concurrent association between exploitative style traits and adolescent
bullying (e.g., Book et al., 2012), but extend these findings by providing both quasi-experimental
and longitudinal evidence for this association. These results with
personality and bullying also suggest that not every risk factor for bullying affects all
adolescents in the same way. Instead, adolescents with specific personality traits may be
more likely and willing to use bullying. Further, adolescents with these personality traits
can respond to, or are influenced by, particular environments in multiple ways (Caspi et
al., 2002; Marceau et al., 2013; Moffitt, 2005; Scarr & McCartney, 1983). In my thesis,
these associations between environment and personality were evident through the
multiple social environmental variables that were indirectly associated with bullying
through exploitative personality styles.
Bullying Perpetration and Indirect Associations with Broader Social Ecology
Across the three studies, it was evident that not all social environments facilitate
bullying in the same way for all adolescents. Instead, as expected, I found that multiple
adverse and risky social environment variables filtered specifically through exploitative
personality traits to indirectly facilitate adolescent bullying. These social environmental
variables were from multiple ecological systems ranging from proximate economic
power contexts and peer and family relationships, to distal school and community
variables. Starting with the more proximate factors, all three studies demonstrated that
social relationships in the microsystem (i.e., the immediate social context) had indirect
associations with bullying most frequently through either lower Honesty-Humility (i.e.,
Studies 1 and 3) or higher exploitation (i.e., Study 2), as opposed to other antisocial
personality traits. Occasionally in Study 1, a proximate ecological factor was found to
have an indirect effect with bullying primarily through Honesty-Humility and secondarily
through lower Conscientiousness. In these instances, the strength of the indirect
associations through Honesty-Humility and Conscientiousness was often comparable, as
indicated through the standardized beta coefficients. The associations with Honesty-
Humility and exploitation may be a result of predatory individuals being able to
strategically take advantage of adverse and/or risky social ecological circumstances.
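Since the comparability claim rests on standardized indirect effects, the comparison amounts to computing a × b for each mediator on standardized variables. A brief sketch under the same simulated-data caveat as above:

    # Sketch: comparing standardized indirect effects through two parallel
    # mediators (labels follow the traits discussed; data are simulated).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    env = rng.normal(size=n)
    hh = -0.35 * env + rng.normal(size=n)      # Honesty-Humility
    consc = -0.30 * env + rng.normal(size=n)   # Conscientiousness
    bully = -0.40 * hh - 0.25 * consc + 0.10 * env + rng.normal(size=n)

    z = lambda x: (x - x.mean()) / x.std()
    z_env, z_hh, z_c, z_b = map(z, (env, hh, consc, bully))

    a_hh = np.polyfit(z_env, z_hh, 1)[0]       # env -> Honesty-Humility
    a_c = np.polyfit(z_env, z_c, 1)[0]         # env -> Conscientiousness
    X = np.column_stack([np.ones(n), z_hh, z_c, z_env])
    _, b_hh, b_c, _ = np.linalg.lstsq(X, z_b, rcond=None)[0]

    print(f"indirect via Honesty-Humility:  {a_hh * b_hh:.3f}")
    print(f"indirect via Conscientiousness: {a_c * b_c:.3f}")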
Adverse social relationships including poorer family dynamics and higher peer
problems, along with powerful social positions such as higher social status (i.e., Study 2)
or higher interpersonal influence (i.e., Studies 1 and 3), appeared to be risk factors
indirectly associated with bullying through exploitative personality styles. Individuals
with higher social status or social influence, and individuals who experience negative
social relationships characterized by conflict, lower support and lower warmth, may
exploit these social environments. For example, adolescents who know their parents have
little knowledge of, or concern for, their whereabouts may take advantage of this lack
of interest by engaging in bullying, knowing that they would have fewer repercussions.
Likewise, adolescents who know they have poorer friendships may exploit these friends
and employ these friends in bullying strategies. Finally, exploiting these relationships
may be especially advantageous for adolescents who have higher social status, as they
would likely have greater influence when navigating their peer networks to effectively
assert their power through bullying tactics. These concurrent results held longitudinally
across three years of adolescence, and also held when manipulating dyadic economic
contexts in a laboratory setting.
These results are consistent with broader evolutionary frameworks that help
explain the use of aggressive behavior. Bullying may be a facultative or conditional
adaptation that an adolescent may consciously or subconsciously decide to engage in
after evaluating his or her own personality traits (i.e., exploitative tendencies; Buss, 2011)
in combination with their broader environments (e.g., friendships, family relationships,
social status; Dane et al., 2017; Del Giudice & Belsky, 2011; Hawley, 2011; Volk,
Camilleri, Dane, & Marini, 2012). Adverse and negative social environments may also
facilitate faster life history strategies that encourage aggressive behavior like bullying (as
opposed to cooperative, long-term strategies) as an immediate means for resources (Del
Giudice & Belsky, 2011; Hawley, 2011). After these assessments of the self and
environment, an adolescent may anticipate the immediate benefits of bullying over
weaker peers may outweigh the costs. Additionally, if previous uses of bullying have
been successful, these dominant and exploitative adolescents may be more inclined to use
this behavior again (Dawkins, 1989). Alternatively, adolescents who possess certain
genetically based personality traits such as exploitative tendencies, may already be more
likely to use coercive or bullying behavior, as opposed to prosocial or cooperative forms
of behavior (Del Giudice & Belsky, 2011).
In addition to evolutionary frameworks, my results are consistent with
Bronfenbrenner’s Ecological Systems Theory (EST; Bronfenbrenner, 1979), and with
recent findings that multiple ecological levels can differentially facilitate bullying
perpetration (e.g., Hong & Espelage, 2012). Furthermore, my results demonstrate that
multiple ecological contexts can have indirect associations with individual differences in
personality, similar to previous ecological studies on bullying (e.g., Barboza et al., 2009;
Lee, 2011; Low & Espelage, 2014). However, my results provide some key novel
contributions. My findings demonstrate that there are indirect associations from adverse
parental and peer relationships and socially powerful positions to bullying, specifically
through exploitative traits, as opposed to other antisocial personality traits. These results
likely reflect the fact that exploitative adolescents may be more willing and able to
take advantage of adverse relationships and powerful positions.
It is likely that within these environmental contexts, exploitative adolescents may
experience more social benefits when using bullying (e.g., increased social status), and
may simultaneously have fewer costs imposed by parents and/or weaker peers. Among the
most noteworthy and prominent social ecological variables that emerged were status-
related variables indicating higher power. Across all three studies, it was evident that
higher social status (i.e., Study 2) or higher interpersonal influence (i.e., Studies 1 and 3)
were commonly associated with bullying through exploitative personality, and this
association held concurrently, longitudinally, and in a laboratory based experimental
setting. Bullying is fundamentally about a power imbalance (Volk et al., 2014). By
definition, bullying requires an individual with more power to inflict harm on a weaker
individual. As evident throughout animal studies (e.g., hierarchy in hyenas; Stewart,
1987), and in research in which human participants are assigned a powerful role (e.g.,
role of prison guards; Haney, Banks, & Zimbardo, 1973), a position of higher status and
social power can be translated into gaining resources at the expense of others. It is not
surprising then, that this fundamental feature that distinguishes bullying from other forms
of aggression is reflected in the broader adolescent social ecology. Those willing to use
power to inflict harm on weaker peers may be more effective in doing so if they have
exploitative, predatory tendencies (as opposed to a general lack of concern or empathy for
others). These exploitative tendencies will ultimately assist in taking advantage of higher
social status and influence to strategically bully weaker peers, who are less likely to
defend themselves and/or retaliate.
My findings are similar to previous associations between strategic adolescent
bullying and higher perceived popularity, social status, and influence (Dijkstra,
Lindenberg, & Veenstra, 2008; Garandeau, Lee, & Salmivalli, 2013; Pellegrini & Long,
2002; Reijntjes et al., 2013; Sentse, Veenstra, Kiuru, & Salmivalli, 2015; Sijtsema,
Veenstra, Lindenberg, & Salmivalli, 2009). Additionally, these results are consistent with
those of previous researchers who found that although adolescents who bully can be high
in peer-perceived popularity, power, and status, they are not necessarily socially preferred
or liked by peers as friends (Vaillancourt & Hymel, 2006). Researchers have found that
early adolescence is a developmental period when peer-perceived popularity is most
valued (LaFontana & Cillessen, 2010). As a result, given that adolescents may be
exploiting their social status to engage in bullying, my results support the notion that
bullying can be used selectively and adaptively by adolescents for status, a goal that is
highly salient during adolescence. A similar pattern of indirect effects also emerged with
more distal ecological variables.
Adverse mesosystem variables (i.e., interactions among immediate social
environments), and macrosystem variables (i.e., broader cultural attitudes and values)
also indirectly facilitated bullying through either lower Honesty-Humility or higher
exploitation. Risky, negative aspects of the social environment such as higher
neighborhood violence, higher school competition, and adverse school climates were
indirectly associated with bullying. It appears that in addition to immediate social
environments, exploitative adolescents may take advantage of wider negative climates to
engage in bullying for self-gain. These broader adverse environments may not provide
the social structures, including discipline, that could prevent adolescents from acting on
their exploitative motivations. Thus, in addition to assessments of the self and
environment, adolescents may have learned the benefits of bullying within these
environments outweigh the costs through vicarious reinforcement, consistent with
Social Learning Theory (Bandura, 1978). The fact that risky environments filtered
through both lower Honesty-Humility and lower Conscientiousness for direct bullying
behavior in Study 1 suggests that while all forms of bullying can be strategically
implemented within the right conditions, direct forms of bullying behavior also reflect a
recklessness for consequences (Volk et al., 2014), and a tendency to engage in riskier,
aggressive behavior for immediate gain (Del Giudice & Belsky, 2011). Accordingly,
these findings further provide support that not all environments affect all adolescents the
same way. Although predatory, exploitative tendencies appear to filter both proximate
and distal adverse social environments for bullying perpetration, there are even subtle
differences in the bullying behavior used. Poorer social relationships, higher social status,
and more competitive and violent school and neighborhood variables appear to be risk
factors for engaging in bullying as a whole. These variables appear to be risk factors for
exploitative adolescents who may strategically take advantage of these contexts to
adaptively bully. However, these adverse environments may also be risk factors for
generally impulsive or reckless adolescents willing to engage in direct forms of bullying.
Accordingly, there appears to be a successful facultative translation of these risky social
environments into adaptive bullying behavior by adolescents with a primarily predatory,
exploitative personality style.
Taken together, my results are consistent with previous findings on poorer social
relations interacting with Honesty-Humility to predict bullying (e.g., lower parental
knowledge; Farrell, Provenzano, Dane, Marini, & Volk, 2017), and with additional
ecological findings on personality interacting with or indirectly linking social
environments to bullying (Barboza et al., 2009; Lee, 2011; Low & Espelage, 2014). My
findings suggest that adolescents who bully may not necessarily be generally
disagreeable, antisocial individuals with a lack of empathy. Instead, adolescent
perpetrators may be strategic, exploitative individuals who are able to take advantage of
their broader social environments and immediate social influence to gain more benefits,
while simultaneously reducing costs. Both Studies 1 and 2 demonstrated that the distal
and proximate environmental contexts may adaptively filter through an exploitative
personality trait to predict bullying, a behavior rooted in taking advantage of power.
However, Study 3 extended findings from the previous two studies by demonstrating how
proximate contextual factors like power can be manipulated to examine bullying and/or
similar competitive behavior, and how these forms of behavior relate to
personality. Despite these significant contributions, this dissertation was not without
limitations.
Induced Mate Abundance Increases Women’s Expectations for Engagement Ring Size and Cost
Induced Mate Abundance Increases Women’s Expectations for Engagement Ring Size and Cost. Ashley Locke, Jessica Desrochers, Steven Arnocky. Evolutionary Psychological Science, December 4 2019. https://link.springer.com/article/10.1007/s40806-019-00214-z
Abstract: Research on some non-human species suggests that an abundance of reproductively viable males relative to females can increase female choosiness and preferences for longer-term mating and resource investment by males. Yet little research has explored the potential influence of mate availability upon women’s preferences for signals of men’s commitment and resource provisioning. Using an experimental mate availability priming paradigm, the present study examined whether women (N = 205) primed with either mate scarcity or abundance would differ in their expectations for engagement ring size and cost. Results demonstrated that women who were primed with the belief that good-quality mates are abundant in the population reported expecting a statistically-significantly larger and more expensive engagement ring relative to women primed with mate scarcity. Results suggest that women flexibly attune their expectations for signals of men’s investment based, in part, upon their perception of the availability of viable mates.
Keywords: Priming; Sex ratio; Engagement rings; Social psychology; Evolutionary psychology; Mating behavior
---
Sample
205 undergraduate women aged 17 to 39 (M = 20, SD = 2.87).
Demographic Measures
Prior to the priming task, participants completed measures of age and romantic relationship status (“Are you currently in a committed heterosexual romantic relationship?”)
Mate Availability Priming Task
Using a set of fictitious magazine articles developed by Spielmann, MacDonald, and Wilson (2009), participants were primed with the belief that potential mates were either abundant or scarce. In this task, participants read one of two articles. In the mate-abundant condition, the article described the task of finding a new romantic partner as relatively easy, with the mating population consisting of many available mates. Conversely, in the mate-scarcity condition, the article highlighted the difficulty of finding a new romantic partner, with desirable mates being a scarce resource.
Manipulation Check
Participants then responded to the following two items asking about their own perceptions of mate availability: (1) “It scares me to think there might not be anyone out there for me” and (2) “I feel it is close to being too late for me to find love in my life.”
Engagement Ring Preferences
Following Cloud and Taylor (2018), female participants were asked, “If this man were to propose to you after an extended period of dating, what is the smallest size engagement ring that you would be satisfied with him giving to you?” To make their decision, participants saw five identical engagement rings that differed only by carat weight and cost, ranging from 0.50 carats ($500) to 1.50 carats ($9,000), and their selection was recorded (see Fig. 1 of Cloud and Taylor, 2018).
Are Sex Differences in Mating Preferences Really “Overrated”? The Effects of Sex and Relationship Orientation on Long-Term and Short-Term Mate Preference
Are Sex Differences in Mating Preferences Really “Overrated”? The Effects of Sex and Relationship Orientation on Long-Term and Short-Term Mate Preference. Sascha Schwarz, Lisa Klümper, Manfred Hassebrauck. Evolutionary Psychological Science, December 4 2019. https://link.springer.com/article/10.1007/s40806-019-00223-y
Abstract: Sex differences in mating-relevant attitudes and behaviors are well established in the literature and seem to be robust throughout decades and cultures. However, recent research claimed that sex differences are “overrated”, and individual differences in mating strategies (beyond sex) are more important than sex differences. In our current research, we explore between-sex as well as within-sex differences; further we distinguish between short-term and long-term relationship orientation and their interactions with sex for predicting mate preferences. In Study 1, we analyzed a large dataset (n = 21,245) on long-term mate characteristics. In Study 2 (n = 283), participants indicated their preference for long-term as well as short-term partners. The results demonstrate the necessity to include both intersexual as well as intrasexual differences in mating strategies. Our results question the claim that sex differences in mate preferences are “overrated.”
Keywords: Sex differences; Mate preferences; Sociosexual orientation; Long-term relationship orientation; Short-term relationship orientation; Online dating
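The core analysis implied by the abstract is a moderation model: mate-preference ratings regressed on sex, relationship orientation, and their interaction. A minimal sketch with hypothetical column names and simulated data (not the authors' dataset):

    # Sketch of a sex-by-orientation moderation model (data simulated;
    # column names hypothetical).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 300
    df = pd.DataFrame({
        "sex": rng.integers(0, 2, n),           # 0 = female, 1 = male
        "lt_orient": rng.normal(size=n),        # long-term orientation score
    })
    # Simulated preference with a built-in sex x orientation interaction
    df["preference"] = (0.5 * df.sex + 0.3 * df.lt_orient
                        + 0.4 * df.sex * df.lt_orient + rng.normal(size=n))

    fit = smf.ols("preference ~ sex * lt_orient", data=df).fit()
    print(fit.params)  # main effects plus the sex:lt_orient interaction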
Education's marginal cognitive benefit does not reach a plateau until 17 years of education; those with low childhood intelligence derive the largest benefit from education
The influence of educational attainment on intelligence. Emilie Rune Hegelund et al. Intelligence. Volume 78, January–February 2020, 101419. https://doi.org/10.1016/j.intell.2019.101419
Highlights
• Education has a positive influence on intelligence.
• The marginal cognitive benefit of education does not reach a plateau until 17 years of education.
• Individuals with low childhood intelligence derive the largest benefit from education.
• Findings of relatively small cognitive benefits might be explained by selection bias.
Abstract: Education has been found to have a positive influence on intelligence, but to be able to inform policy, it is important to analyse whether the observed association depends on the educational duration and intelligence prior to variations in educational attainment. Therefore, a longitudinal cohort study was conducted of all members of the Metropolit 1953 Danish Male Birth Cohort who were intelligence tested at age 12 and appeared before the Danish draft boards (N = 7389). A subpopulation also participated in the Copenhagen Aging and Midlife Biobank (N = 1901). The associations of educational attainment with intelligence in young adulthood and midlife were estimated by use of general linear regression with adjustment for intelligence test score at age 12 and family socioeconomic position. Results showed a positive association of educational attainment with intelligence test scores in both young adulthood and midlife after prior intelligence had been taken into account. The marginal cognitive benefits depended on the educational duration but did not reach a plateau until 17 years. Further, intelligence test score at age 12 was found to modify the association, suggesting that individuals with low intelligence in childhood derive the largest benefit from education. Comparing the strength of the association observed among participants and non-participants in our midlife study, we showed that selection due to loss to follow-up might bias the investigated association towards the null. This might explain previous studies' findings of relatively small cognitive benefits. In conclusion, education seems to constitute a promising method for raising intelligence, especially among the least advantaged individuals.
4.2. Comparison with the existing literature
The finding of a positive association between educational attainment
and intelligence test scores after prior intelligence has been taken
into account is consistent with extant literature (Clouston et al., 2012;
Falch & Massih, 2011; Ritchie, Bates, Der, Starr, & Deary, 2013), including
a recent meta-analysis (Ritchie & Tucker-Drob, 2018). More
specifically, our results suggested an average increase in intelligence
test score of 4.3 IQ points per year of education in young adulthood and
1.3 IQ points per year of education in midlife. The effect estimate in
young adulthood is considerably higher than the effect estimate of 1.2
IQ points (95% confidence interval: 0.8, 1.6) reported in the meta-analysis
for the control prior intelligence design. However, in a simultaneous
multiple-moderator analysis, the authors report an adjusted
effect estimate of 2.1 IQ points (95% confidence interval: 0.8, 3.4),
taking into account the possible influence of age at early test, age at
outcome test, outcome test category, and male-only studies. Besides
these possible moderators, contextual factors might also account for the
contrasting findings between our study and the seven studies included
in the meta-analysis. However, it is important to note that our effect
estimate in midlife is consistent with the findings of the meta-analysis,
suggesting that sample selectivity might have influenced the association
observed in midlife in our study and perhaps the associations observed
in the cohort studies included in the meta-analysis as well. This is
supported by the higher educational attainment and higher IQ at age 12 (mean
IQ: 103.2 vs. 98.9) among the individuals in our study population who
participated in the midlife study, as well as our finding of
effect measure modification. If both educational attainment and intelligence
are positively associated with participation in studies and
individuals with low intelligence in childhood derive the largest benefit
from education, selection will bias the investigated associations towards
the null. This is probably the reason why the study with the least
selected sample in the meta-analysis is the one reporting the largest
effect estimate (Falch & Massih, 2011). Based on a male population who
were initially intelligence tested in school at age 10 and later at the
mandatory military draft board examination at age 20, the authors
found an increase in intelligence test score of 3.5 IQ points per year of
education (95% confidence interval: 3.0, 3.9), which is much more in
line with our results.
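The selection argument here can be made concrete with a toy simulation: if follow-up participation rises with education and childhood IQ, and the benefit of education is larger at low childhood IQ, the follow-up-only estimate is attenuated. All parameters below are invented for illustration:

    # Toy simulation of selection bias toward the null (parameters invented).
    import numpy as np

    rng = np.random.default_rng(2020)
    n = 100_000
    iq12 = rng.normal(100, 15, n)                          # childhood IQ
    educ = 9 + 0.08 * (iq12 - 100) + rng.normal(0, 2, n)   # years of education
    benefit = np.where(iq12 < 90, 2.0, 1.0)                # larger gain if low IQ12
    iq_mid = iq12 + benefit * (educ - educ.mean()) + rng.normal(0, 8, n)

    def educ_slope(mask):
        # Midlife IQ regressed on education, adjusting for childhood IQ
        X = np.column_stack([np.ones(mask.sum()), educ[mask], iq12[mask]])
        return np.linalg.lstsq(X, iq_mid[mask], rcond=None)[0][1]

    # Follow-up more likely for the better educated and higher-IQ
    p = 1 / (1 + np.exp(-(0.4 * (educ - educ.mean()) + 0.05 * (iq12 - 100))))
    followed = rng.random(n) < p

    print(f"full cohort:    {educ_slope(np.ones(n, bool)):.2f} IQ points/yr")
    print(f"follow-up only: {educ_slope(followed):.2f} IQ points/yr")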
Our finding of a stronger association between educational attainment
and intelligence test scores in young adulthood compared with
midlife in the subpopulation of individuals who participated in both
examinations is consistent with the findings of the meta-analysis
(Ritchie & Tucker-Drob, 2018). One possible explanation for this
finding might be that schooling has a larger influence on intelligence
compared with vocational education or training, which mainly takes
place after the age of 18. However, the finding might also be explained
by the smaller time gap between the measurements of educational attainment
and intelligence, as the measurement of educational attainment
in midlife in most cases will reflect one's educational attainment
before the age of 30. The older the age at outcome testing, the larger the
time gap and thus more additional factors might influence the association,
such as the individual's occupational complexity and health
(Smart, Gow, & Deary, 2014; Waldstein, 2000).
In general, our finding of a positive association between educational
attainment and intelligence test scores should be interpreted with
caution. As previously written, it is difficult to separate the positive
influence of educational attainment on intelligence from the influence
of selection by prior intelligence, whereby individuals with higher intelligence
test scores prior to variations in educational attainment
progress further in the educational system. Although our results show a
strong positive association between educational attainment and intelligence
test scores after prior intelligence has been taken into account,
a hierarchical analysis of our data suggests that educational attainment
only increases the amount of explained variance in later IQ by
seven percentage points when IQ at age 12 is already accounted for (R² = 0.46 vs.
R² = 0.53; p < .001). Therefore, our findings most likely not only
reflect the positive influence of educational attainment on intelligence,
but also a residual influence of selection processes which our statistical
analyses were not able to take into account.
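The hierarchical step reported above corresponds to comparing two nested regressions and taking the difference in R². A short sketch on simulated data (coefficients invented):

    # Nested-model comparison: variance in later IQ explained by education
    # beyond childhood IQ (data simulated; coefficients invented).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5000
    iq12 = rng.normal(100, 15, n)
    educ = 12 + 0.1 * (iq12 - 100) + rng.normal(0, 2, n)
    iq_adult = 20 + 0.6 * iq12 + 1.5 * educ + rng.normal(0, 10, n)

    m1 = sm.OLS(iq_adult, sm.add_constant(iq12)).fit()                          # IQ12 only
    m2 = sm.OLS(iq_adult, sm.add_constant(np.column_stack([iq12, educ]))).fit() # + education

    print(f"R2, IQ12 only:        {m1.rsquared:.2f}")
    print(f"R2, IQ12 + education: {m2.rsquared:.2f}")
    print(f"Delta R2:             {m2.rsquared - m1.rsquared:.2f}")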
To answer one of our specific aims, we also investigated whether the
increase in intelligence test score for each extra year of education depended
on the educational duration. Irrespective of whether intelligence
was measured in young adulthood or midlife, intelligence test
scores were found to increase with increasing years of education in a
cubic relation, suggesting that the increase in intelligence test score for
each extra year of education diminishes with increasing length of
education. This finding supports the hypothesis proposed by the authors
of a recent study, who, based on their own data and the existing literature,
suggest that the influence of educational attainment on intelligence
eventually might reach a plateau (Kremen et al., 2019).
However, where the authors of the previous study suggest that this
plateau might already be reached by late adolescence, as their findings
show no significant association between educational attainment and
intelligence test score in midlife after IQ at age 20 has been taken into
account, we find no plateau until approximately 17 years of education.
In fact, replicating the previous study's statistical analyses, we find an
average increase in intelligence test score in midlife of 0.8 IQ points per
year of education taking IQ at age 20 into account (Supplementary
Table 4). We speculate that this contrasting finding might be explained
by the representativeness of the study populations, as the previous
study is based on a selected sample of twins serving in the
American military at some point between 1965 and 1975. Thus, a study
based on the two Lothian birth cohorts, which like our study includes a
follow-up examination of a population-representative survey in childhood,
finds a weighted average increase in intelligence test score in late
life of 1.2 IQ points per year of education taking IQ at age 11 into
account (Ritchie et al., 2013). However, the contrasting finding might
also be explained by residual confounding due to the use of non-identical
baseline and outcome intelligence tests in our study. Nevertheless,
in our study, the strongest association between educational attainment
and intelligence in midlife was observed in upper-secondary school, i.e.
around 10–13 years of education. A possible explanation for this finding
might be that pupils up to and including upper-secondary school receive
general education, which improves exactly what the intelligence
tests included in our study most likely measure: General cognitive
ability. After upper-secondary school, individuals start to specialize in
different fields, which may explain why the increase in intelligence test
score for each extra year of education diminishes. However, the cubic
tendency was relatively weak and, as noted above, the association
between educational attainment and intelligence did not reach a plateau
until approximately 17 years of education, corresponding to the
completion of a Master's degree program. As our study is the first to
investigate whether the increase in intelligence test score for each extra
year of education depends on the educational duration, replication of
our finding is needed – preferably in studies with access to the same
baseline and outcome test.
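Mechanically, "no plateau until ~17 years" means the first derivative of the fitted cubic stays positive until roughly that point. A worked sketch with invented coefficients chosen so the marginal benefit reaches zero at 17 years:

    # Locate the plateau of a cubic education-IQ gain curve: the point where
    # the marginal benefit (first derivative) hits zero. Coefficients are
    # invented so the example plateaus at 17 years, mirroring the paper.
    import numpy as np

    coeffs = np.array([0.0056, -0.3948, 8.568, 0.0])  # x^3, x^2, x, const
    deriv = np.polyder(coeffs)                         # marginal IQ gain per year

    for r in np.roots(deriv):
        if abs(r.imag) < 1e-8 and 0 < r.real < 25:
            print(f"marginal benefit reaches zero near {r.real:.1f} years of education")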
Finally, to answer another of our specific aims, we investigated
whether the increase in intelligence test score for each extra year of
education depended on the intelligence prior to variations in educational
attainment. The results showed that the increase in intelligence
test score for each extra year of education was higher in the group of
individuals with an IQ < 90 compared with the group of individuals
with an IQ of 90–109. Although this finding clearly needs to be replicated,
it is in line with the findings of a Danish study investigating
whether distributional changes accompanied the secular increases in
intelligence test scores among males born in 1939–1958 and
1967–1969 (Teasdale & Owen, 1989). According to the authors of that
study, a possible explanation of why individuals with low intelligence
in childhood derive the largest benefit from education is that the Danish
school system has, for the last seven decades, mainly focused on improving
the abilities of the least able (Teasdale & Owen, 1989).
Therefore, future studies are needed to investigate whether our finding
is peculiar to the Danish school system or whether it can be generalized
to school systems in other countries.
Self-reported ratings of deception ability were positively correlated with telling a higher frequency of white lies & exaggerations, & telling the majority of lies to colleagues, friends, & romantic partners
Verigin BL, Meijer EH, Bogaard G, Vrij A (2019) Lie prevalence, lie characteristics and strategies of self-reported good liars. PLoS ONE 14(12): e0225566. https://doi.org/10.1371/journal.pone.0225566
Abstract: Meta-analytic findings indicate that the success of unmasking a deceptive interaction relies more on the performance of the liar than on that of the lie detector. Despite this finding, the lie characteristics and strategies of deception that enable good liars to evade detection are largely unknown. We conducted a survey (n = 194) to explore the association between laypeople’s self-reported ability to deceive on the one hand, and their lie prevalence, characteristics, and deception strategies in daily life on the other. Higher self-reported ratings of deception ability were positively correlated with self-reports of telling more lies per day, telling inconsequential lies, lying to colleagues and friends, and communicating lies via face-to-face interactions. We also observed that self-reported good liars highly relied on verbal strategies of deception and they most commonly reported to i) embed their lies into truthful information, ii) keep the statement clear and simple, and iii) provide a plausible account. This study provides a starting point for future research exploring the meta-cognitions and patterns of skilled liars who may be most likely to evade detection.
Discussion
We found that self-reported good liars i) may be responsible for a disproportionate amount of lies in daily life, ii) tend to tell inconsequential lies, mostly to colleagues and friends, and generally via face-to-face interactions, and iii) highly rely on verbal strategies of deception, most commonly reporting to embed their lies into truthful information, and to keep the statement clear, simple and plausible.
Lie prevalence and characteristics
First, we replicated the finding that people lie, on average, once or twice per day, including its skewed distribution. Nearly 40% of all lies were reported by a few prolific liars. Furthermore, higher self-reported ratings of individuals’ deception ability were positively correlated with self-reports of: i) telling a greater number of lies per day, ii) telling a higher frequency of white lies and exaggerations, iii) telling the majority of lies to colleagues and friends or others such as romantic partners, and iv) telling the majority of lies via face-to-face interactions. Importantly, skewed distributions were also observed for the other lie characteristics, suggesting that it may be misleading to draw conclusions from sample means, given that this does not reflect the lying behaviours of the average person. A noteworthy finding is that prolific liars also considered themselves to be good liars.
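The skew matters because means mislead: a right-skewed count distribution can put the average at one to two lies per day while the median person tells none and a small group accounts for a large share. A quick illustration with invented parameters:

    # Illustration of a skewed lies-per-day distribution (parameters invented).
    import numpy as np

    rng = np.random.default_rng(11)
    # Negative binomial: many zeros and ones, a long right tail; mean = 1.5
    lies = rng.negative_binomial(n=0.5, p=0.25, size=10_000)

    top5 = np.sort(lies)[::-1][: len(lies) // 20].sum() / lies.sum()
    print(f"mean lies/day:   {lies.mean():.2f}")
    print(f"median lies/day: {np.median(lies):.0f}")
    print(f"share of all lies told by the top 5% of liars: {top5:.0%}")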
The finding that individuals who consider themselves good liars report mostly telling inconsequential lies is somewhat surprising. This deviates from the results of a previous study, which showed that prolific liars reported telling significantly more serious lies, as well as more inconsequential lies, compared to everyday liars [15]. However, small, white lies are generally more common [18] and people who believe they can get away with such minor falsehoods may be more inclined to include them frequently in daily interactions. It is also possible that self-reported good liars in our sample had inflated perceptions of their own deception ability because they tell only trivial lies versus lies of serious consequence.
Regarding the other lie characteristics, we found a positive correlation between self-reported deception ability and telling lies to colleagues, friends and others (e.g., romantic partners). This variation suggests that good liars are perhaps less restricted in who they lie to, relative to other liars who tell more lies to casual acquaintances and strangers than to family and friends [22]. Our results also showed that good liars tended to prefer telling lies face-to-face. This fits the findings of one of the only other studies to examine characteristics of self-reported good versus poor liars, which found that self-perceived good liars most commonly lied via face-to-face interactions versus through text chat [42]. This could be a strategic decision to deceive someone to their face, since people may expect more deception via online environments [43]. As researchers continue to examine the nature of lying and to search for ways of effectively detecting deception, it is important to recognize how certain lie characteristics may influence individuals’ detectability as liars.
Deception strategies
We also isolated the lie-telling strategies of self-reported good liars. People who identified as good liars placed a higher value on verbal strategies for successfully deceiving. Additional inspection of the verbal strategies reported by good liars showed that commonly reported strategies were embedding lies into truthful information and keeping their statements clear, simple and plausible. In fact, good liars were more likely than poor liars to endorse using these strategies, as well as matching the amount and type of details in their lies to the truthful part/s of their story, and providing unverifiable details. A common theme among these strategies is the relation to truthful information. This fits with the findings of previous literature, that liars typically aim to provide as much experienced information as possible, to the extent that they do not incriminate themselves [35, 44]. Additionally, good liars used plausibility as a strategy for succeeding with their lies. This reflects the findings of the meta-analysis by Hartwig and Bond [45] that implausibility is one of the most robust correlates of deception judgements, and the results of DePaulo et al. [26] that one of the strongest cues to deception is liars’ tendency to sound less plausible than truth tellers (d = -0.23).
We also found that self-reported poor liars were more likely than good liars to rely on the avoidance strategy (i.e., being intentionally vague or avoiding mentioning certain details). Previous research suggests that this is one of the most common strategies used by guilty suspects during investigative interviews [46]. Additionally, all liars in our study regarded behavioural strategies as important for deceiving successfully. This could be explained by the widespread misconceptions about the associations between lying and behaviour, for example that gaze aversion, increased movement or sweating are behaviours symptomatic of deception [2, 47].
There was inconsistency in our data between the responses to the qualitative strategy question and the multiple-response strategy question. Based on the qualitative strategy data it seems that Good, Neutral, and Poor liars do not differ in their use of strategies. However, robust differences emerged when we evaluated participants’ endorsement of the predetermined strategies. One explanation for this finding is the difficulty people perceive when they have to verbalize the reasons for their behavior. Ericsson and Simon [48] suggest that inconsistencies can occur especially when the question posed is too vague to elicit the appropriate information, which might have been the case in our study. Another explanation for the discrepancy in the data between the two measurement procedures is that data-driven coding is inherently susceptible to human subjectivity, error, and bias [49, 50]. Such limitations apply to a lesser extent to coding based on predetermined categories that are derived from psychological theory, an approach which has been heavily used within the deception literature [2]. In any case, future research should continue exploring the deception strategies of good liars using a variety of methodological approaches. In particular, it would be beneficial to establish more reliable techniques for measuring interviewees’ processing regarding their deception strategies. One potential idea could be to explore the effectiveness of using a series of cued questions to encourage the recall of specific aspects of interviewees’ memory or cognitive processing. Another suggestion is to combine the data-driven and theory-driven approaches, whereby the coding system is generated inductively from the data but the coders draw from the theoretical literature when identifying categories [50].
Limitations
Some methodological considerations should be addressed. First, the results of the present study are drawn from participants’ self-reports about their patterns of deception in daily life. Sources of error associated with such self-report data limit our ability to draw strong inferences from this study. However, previous research has validated the use of self-report to measure lying prevalence by correlating self-reported lying with other measures of dishonesty [17]. Moreover, self-report data may not be as untrustworthy as critics argue, and in some situations, it may be the most appropriate methodology [51]. This study was intended as an initial examination of the strategies and preferences of good liars, and surveying liars for their own perspectives provided a novel source of insight into their behaviour. A constraint to the generalizability of this research is that we did not establish the ground truth as to whether self-reported good liars are indeed skilled deceivers. Future research could attempt to extend our findings by examining deceivers’ lie frequency, characteristics, and strategies after systematically testing their lie-telling ability within a controlled laboratory setting.
Second, one of the most frequent concerns about using Amazon MTurk relates to low compensation and the resulting low motivation [52, 53]. We took measures to ensure that our remuneration to participants was above the fair price for comparable experiments. Importantly, data collected through MTurk produce results equivalent to data collected from other online and student samples [52, 54–58]. MTurk surveys have also been shown to produce a representative sample of the United States population that yields results akin to those observed with more expensive survey techniques, such as telephone surveys [57]. It speaks to the validity of our data, for example, that the self-reported prevalence of lies and the endorsement of nonverbal deception strategies replicate previous research. Nonetheless, the results of this study could be advanced if future research i) directly replicates our survey amongst different populations, for instance university students, and ii) conceptually replicates this research by evaluating different methodological approaches for measuring deception ability (e.g., via controlled evaluation) and good liars’ strategies for deceiving (e.g., via cued recall).
Implications and future research
This study explored the deception characteristics and strategies used by self-reported good liars. Deception researchers generally agree that the most diagnostic information is found in the content of liars’ speech [59]. Content-based cues to deception, however, may be less effective for detecting good liars who rely highly on verbal strategies of deception. This could even offer an explanation for the modest effect sizes observed in the deception literature [60]. For instance, good liars in our study reported strategically embedding their lies into truthful information. This finding has potential implications for the reliability of credibility assessment tools that derive from the assumption that truth tellers’ statements are drawn from memory traces whereas liars’ statements are fabricated from imagination [61, 62]. If good liars draw on their memory of truthful previous experiences, then their statements may closely resemble those of their truth-telling counterparts. Another interesting observation was that self-reported good liars were more likely than poor liars to provide unverifiable details. This fits with the findings of previous literature on the verifiability approach (VA), which contends that liars provide information that cannot be verified to balance their goals of being perceived as cooperative and of minimizing the chances of falsification by investigators [32, 33]. A fruitful avenue of future research could be to further explore liars’ strategic inclusion of truthful information and unverifiable details. Doing so may give lie detectors an advantage for unmasking skilled liars. It would also be interesting for future research to examine how good versus poor liars are affected by interview techniques designed to increase the difficulty of lying, such as the reverse-order technique [63].
Tuesday, December 3, 2019
Females are more proactive, males are more reactive: neural basis of the gender-related speed/accuracy trade-off in visuo-motor tasks
Females are more proactive, males are more reactive: neural basis of the gender-related speed/accuracy trade-off in visuo-motor tasks. V. Bianco et al. Brain Structure and Function, December 3 2019. https://link.springer.com/article/10.1007/s00429-019-01998-3
Abstract: In the present study, we investigated neural correlates associated with gender differences in a simple response task (SRT) and in a discriminative response task (DRT) by means of event-related potential (ERP) technique. 120 adults participated in the study, and, based on their sex, were divided into two groups matched for age and education level. Behavioral performance was assessed with computing response speed, accuracy rates and response consistency. Pre- and post-stimulus ERPs were analyzed and compared between groups. Results indicated that males were faster than females in all tasks, while females were more accurate and consistent than males in the more complex tasks. This different behavioral performance was associated with distinctive ERP features. In the preparation phase, males showed smaller prefrontal negativity (pN) and visual negativity (vN), interpreted as reduced cognitive preparation to stimulus occurrence and reduced reliance on sensory proactive readiness, respectively. In the post-stimulus phase, gender differences were present over occipital (P1, N1, P2 components) and prefrontal (pN1, pP1, pP2 components) areas, suggesting allocation of attentional resources at distinct stages of information processing in the two groups. Overall, the present data provide evidence in favor of a more proactive and cautious cognitive processing in females and a more reactive and fast cognitive processing in males. In addition, we confirm that (1) gender is an important variable to be considered in ERP studies on perceptual processing and decision making, and (2) the pre-stimulus component analysis can provide useful information concerning neural correlates of upcoming performance.
Keywords: Gender differences Speed–accuracy trade-off Motor behavior Proactive control Decision making Predictive brain
Conclusions
Present data suggest that in simple and complex visuo-motor tasks, males and females allocate their cortical resources in diverse ways, possibly leading to the well-documented gender-related speed/accuracy trade-off in visuo-motor performance. When the task is very simple, both preparatory (the BP) and reactive (the pP1, P2 and P3) cortical processing are enhanced in males with respect to females, leading to faster responses. When the task is more complex (implying stimulus discrimination and response selection), females’ proactive allocation of more cortical resources at both prefrontal (pN) and sensory (vN) levels, as well as at several reactive stages after stimulus onset (the pN1, the P1, and the P3), leads to relatively slow and very accurate responses. In contrast, males allocate a reduced level of pre-stimulus sustained attention to the task (smaller pN and vN), possibly compensating with enhanced reactive attention at the visual level of processing (larger N1 and P2). Even though the neural processing associated with S–R mapping (the pP2) is generally enhanced in males (for both target and non-target stimuli), signals associated with different stimulus categories are less distinguishable in males than in females, as indicated by the dpP2 effect, possibly facilitating female accuracy in a complex task.
The present research provides evidence that gender is an important variable to be considered in neurocognitive studies of perceptual decision making; it should be taken into account when planning experimental designs and when interpreting results because, per se, it could explain the speed/accuracy trade-off in visuo-motor performance and related differences in brain function. Nevertheless, some studies have excluded females from their samples or ignored gender as a factor in their findings (for a review see Mendrek 2015), possibly jeopardizing the interpretation of their results.
It has been shown that a gamble is judged to be more attractive when its zero outcome is designated as “losing $0” rather than “winning $0,” an instance of what we refer to as the mutable-zero effect
The framing of nothing and the psychology of choice. Marc Scholten, Daniel Read, Neil Stewart. Journal of Risk and Uncertainty, December 3 2019. https://link.springer.com/article/10.1007/s11166-019-09313-5
Abstract: Zero outcomes are inconsequential in most models of choice. However, when disclosing zero outcomes they must be designated. It has been shown that a gamble is judged to be more attractive when its zero outcome is designated as “losing $0” rather than “winning $0,” an instance of what we refer to as the mutable-zero effect. Drawing on norm theory, we argue that “losing $0” or “paying $0” evokes counterfactual losses, with which the zero outcome compares favorably (a good zero), and thus acquires positive value, whereas “winning $0” or “receiving $0” evokes counterfactual gains, with which the zero outcome compares unfavorably (a bad zero), and thus acquires negative value. Moreover, we propose that the acquired value of zero outcomes operates just as the intrinsic value of nonzero outcomes in the course of decision making. We derive testable implications from prospect theory for mutable-zero effects in risky choice, and from the double-entry mental accounting model for mutable-zero effects in intertemporal choice. The testable implications are consistently confirmed. We conclude that prevalent theories of choice can explain how decisions are influenced by mutable zeroes, on the shared understanding that nothing can have value, just like everything else.
Keywords: Descriptive invariance Norm theory Counterfactuals Zero outcomes Risk and time Prospect theory Double-entry mental accounting model
JEL Classifications: D00 D90 D91
4 General discussion
The valence of a zero event depends on its “irrelevant” description: It “feels better” to lose or pay nothing than to win or receive nothing. A negative wording (lose, pay) sets up a norm of negative events, with which the zero event compares favorably, while a positive wording (win, receive) sets up a norm of positive events, with which the zero event compares unfavorably, so that a negative wording acquires a more positive tone than a positive wording. Descriptive invariance requires from us that this should not affect our decisions, but we have shown that it does, among a fair number of us at least. To others among us, the framing of zero events may actually be irrelevant. The mutable-zero effect is indeed small; yet, it is a reliable phenomenon. And if one thinks of the alternative descriptions of a zero outcome as a minimal manipulation, the small effect may actually be considered quite impressive (Prentice and Miller 1992).
Descriptive invariance, along with dominance, is an essential condition of rational choice (Tversky and Kahneman 1986), and it has seen a number of violations, commonly referred to as framing effects. A stylized example of framing is the adage that optimists see a glass of wine as half full, while pessimists see it as half empty. And if the wine glass is half full, and therefore half empty, then these are complementary descriptions of the same state of the world, so that, normatively, using one or the other should not matter for judgment and choice (Mandel 2001)—but it does.
4.1 Counterfactuals versus expectations
Life often confronts us with zero events. A bookstore may offer us “free shipping.” Our employer may grant us “no bonus.” We are pleased to pay $0 to the bookstore, and this may be because we expected to pay something but did not. We are not pleased to receive $0 from our employer, and this may be because we expected to receive something but did not (Rick and Loewenstein 2008). In norm theory, Kahneman and Miller (1986) suggested that reasoning may not only flow forward, “from anticipation and hypothesis to confirmation or revision,” but also backward, “from the experience to what it reminds us of or makes us think about” (p. 137). In the latter case, “objects or events generate their own norms by retrieval of similar experiences stored in memory or by construction of counterfactual alternatives” (p. 136). Thus, “free shipping” may sound pleasant because it reminds us of occasions on which we were charged shipping fees, and “no bonus” may sound unpleasant because it reminds us of occasions on which we were granted bonuses; not so much because we expected to pay or receive something. Of course, both norms and expectations may influence our feelings, and may be difficult to disentangle in many real-life situations. Kahneman and Miller’s (1986) intention with norm theory was “not to deny the existence of anticipation and expectation but to encourage the consideration of alternative accounts for some of the observations that are routinely explained in terms of forward processing” (p. 137, emphasis added). Our intention was to compile a set of observations that cannot reasonably be explained in terms of forward processing, which therefore constitute the clearest exposure of norms.
4.2 Expectations in decision theory
We have incorporated counterfactuals into theories of choice, so as to predict the effects of mutable zeroes when people face risk and when people face time. Traditionally, decision theory has ignored counterfactuals, but expectations play a role in most theories of decision under risk. While prospect theory sacrifices the expectation principle from EU, by assigning a decision weight w(p) to probability p of an outcome occurring, other formulations have maintained the expectation principle but modified the utility function. For instance, the utility function has been expanded with anticipated regret and rejoicing as they result from comparisons between the possible outcomes of a gamble and those that would occur if one were to choose differently (Bell 1982, 1983; Loomes and Sugden 1982). Similarly, the utility function has been expanded with anticipated emotions as they result from comparisons of the possible outcomes of a gamble with the expected value of the gamble: Anticipated disappointment when “it could come out better,” and anticipated elation when “it could come out worse” (Bell 1985; Loomes and Sugden 1986). Zero outcomes acquire value in the same way as nonzero outcomes do: Either from between-gamble or within-gamble comparisons. Thus, a zero outcome acquires negative value (by regret or disappointment) if the comparison is with a gain, and positive value (by rejoicing or elation) if the comparison is with a loss. In our analysis, however, zero outcomes are unique, in that only they elicit counterfactual gains and losses, which will then serve as a reference point for evaluating the zero outcomes themselves. Nonetheless, in Experiment 3, dealing with zero outcomes in intertemporal choice, we obtained a result suggesting that between-prospect comparisons of zero and nonzero outcomes also affected choice.
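To see how a within-gamble comparison can give a zero outcome acquired value, here is a minimal sketch of a disappointment/elation valuation in the spirit of the models just described; the piecewise-linear adjustment and the weights are illustrative assumptions, not the published parameters of Bell (1985) or Loomes and Sugden (1986).

```python
# Minimal disappointment/elation sketch: each outcome's value is adjusted
# by its deviation from the gamble's expected value. The piecewise-linear
# form and the weights are illustrative assumptions, not the published models.

def de_value(outcomes, probs, disappointment=0.8, elation=0.3):
    ev = sum(p * x for p, x in zip(probs, outcomes))

    def adjusted(x):
        dev = x - ev  # within-gamble comparison with the expected value
        return x + (elation * dev if dev > 0 else disappointment * dev)

    return sum(p * adjusted(x) for p, x in zip(probs, outcomes))

# A 50/50 gamble between $100 and $0: the zero outcome falls below the
# EV of $50, so it acquires negative value (0 + 0.8 * (0 - 50) = -40),
# and a disappointment-averse agent values the gamble below its EV.
print(de_value([100, 0], [0.5, 0.5]))  # 37.5 < 50
```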
4.3 The framing of something, the framing of nothing
The investigation of framing effects in judgment and decision making began with Tversky and Kahneman’s (1981) Asian Disease Problem, in which the lives of 600 people are threatened, and life-saving programs are examined. One group of participants preferred a program that would save 200 people for sure over a program that would save 600 people with a 1/3 probability, but save no people with a 2/3 probability. Another group of participants preferred a program that would let nobody die with a probability of 1/3, but let 600 people die with a 2/3 probability, over a program that would let 400 people die for sure. Prospect theory ascribes this result to reference dependence, i.e., v(0) = 0, and diminishing sensitivity, i.e., v is concave over gains, so that v(600) < 3v(200), which works against the gamble, and convex over losses, so that v(−600) > 3v(−200), which works in favor of the gamble.
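These inequalities are easy to verify numerically with the standard power-form value function; the parameters below are the conventional Tversky and Kahneman (1992) estimates, assumed here only for illustration, since the 1981 paper does not commit to specific values.

```python
# Power-form prospect-theory value function. ALPHA and LAMBDA are the
# conventional Tversky-Kahneman (1992) estimates, assumed here only to
# illustrate the concavity/convexity argument in the text.
ALPHA, LAMBDA = 0.88, 2.25

def v(x):
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Concave over gains: v(600) < 3*v(200), which works against the gamble.
print(round(v(600), 1), round(3 * v(200), 1))    # ~278.5 < ~317.7

# Convex over losses: v(-600) > 3*v(-200), which works in favor of it.
print(round(v(-600), 1), round(3 * v(-200), 1))  # ~-626.6 > ~-714.8
```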
Our interpretation is that some of the action may lie in the zero outcomes, rather than the nonzero outcomes. Specifically, “save no people” brings to mind saving some people, with which saving no people compares unfavorably, thus working against the gamble. Similarly, “let nobody die” brings to mind letting somebody die, with which letting nobody die compares favorably, thus working in favor of the gamble. Reference dependence is fine, but designating zero outcomes means that v(0) ≠ 0, because the reference point is no longer the status quo, but rather something imagined.
There is no shortage of competing views on framing effects (for one of many discussions, see Mandel 2014), and our norm-theory approach to the Asian Disease Problem is a partial explanation at best. Indeed, the reversal from an ample majority (72%) choosing the safe option in the positive frame (saving lives) to an ample majority (78%) choosing the risky option in the negative frame (giving up lives) is a large effect, whereas the mutable-zero effect is a small effect, unlikely to be solely responsible for Tversky and Kahneman’s (1981) result. However, judgments and decisions are influenced by the framing of zero outcomes, and we have shown that prevalent theories of choice, Kahneman and Tversky’s (1979) prospect theory and Prelec and Loewenstein’s (1998) double-entry mental accounting model, can explain how decisions are influenced by mutable zeroes, on the shared understanding that nothing can have value, just like everything else.
Results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings; they point to the possibility of the misidentification of sizable teacher “effects” where none exist
Teacher Effects on Student Achievement and Height: A Cautionary Tale. Marianne Bitler, Sean Corcoran, Thurston Domina, Emily Penner. NBER Working Paper No. 26480, November 2019. https://www.nber.org/papers/w26480
Abstract: Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.
6 Discussion
Schools and districts across the country want to employ teachers who can best help students to learn, grow, and achieve academic success. Identifying such individuals is integral to schools' success but is also difficult to do in practice. In the face of data and measurement limitations, school leaders and state education departments seek low-cost, unbiased ways to observe and monitor the impact that their teachers have on students. Although many have criticized the use of VAMs to evaluate teachers, they remain a widely-used measure of teacher performance. In part, their popularity is due to convenience: while observational protocols which send observers to every teacher's classroom require expensive training and considerable resources to implement at scale, VAMs use existing data and can be calculated centrally at low cost. Further, VAMs are arguably less biased than many other evaluation methods that districts might use instead (Bacher-Hicks et al. 2017; Harris et al. 2014; Hill et al. 2011).
Yet questions remain about the reliability, validity, and practical use of VAMs. This paper interrogates concerns raised by prior research on VAMs and raises new concerns about the use of VAMs in career and compensation decisions. We explore the bias and reliability of commonly estimated VAMs by comparing estimates of teacher value-added in mathematics and ELA with parallel estimates of teacher value-added on a well-measured biomarker that teachers should not impact: student height. Using administrative data from New York City, we find estimated teacher “effects” on height that are comparable in magnitude to actual teacher effects on math and ELA achievement, 0.22σ compared to 0.29σ and 0.26σ, respectively. On its face, such results raise concerns about the validity of these models.
Fortunately, subsequent analysis finds that teacher effects on height are primarily noise, rather than bias due to sorting on unobserved factors. To ameliorate the effect of sampling error on value-added estimates, analysts sometimes “shrink” VAMs, scaling them by their estimated signal-to-noise ratio. When we apply the shrinkage method of Kane and Staiger (2008) across multiple years of data, the persistent teacher “effect” on height goes away, becoming the expected (and known) mean of zero. This procedure is not always done in practice, however, and requires multiple years of classroom data for the same teachers to implement. For hiring and firing decisions, it is worth considering that value-added measures requiring multiple years of data will likely permit identification of persistently bad teachers, but will not provide a performance evaluation metric that teachers trying to improve can respond to on a shorter horizon. In more realistic settings where the persistent effect is not zero, it is less clear that shrinkage would have a major influence on performance decisions, since it has modest effects on the relative rankings of teachers.
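The shrinkage step is easy to see in stylized form. The sketch below (invented numbers, not the paper's estimation pipeline) simulates a placebo outcome with no true teacher effect: raw classroom-mean “effects” still show a sizable standard deviation from sampling noise alone, and scaling each estimate by the signal share of variance, estimated from the cross-year covariance in the spirit of Kane and Staiger (2008), drives the persistent effect back toward its known mean of zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized shrinkage illustration; all numbers are invented.
n_teachers, n_students = 500, 25
true_effect_sd = 0.0        # placebo outcome (e.g., height): no real effect
student_noise_sd = 1.0      # within-classroom noise, in outcome SD units

true_effects = rng.normal(0.0, true_effect_sd, n_teachers)
se = student_noise_sd / np.sqrt(n_students)              # classroom-mean SE
year1 = true_effects + rng.normal(0.0, se, n_teachers)   # raw "effects"
year2 = true_effects + rng.normal(0.0, se, n_teachers)

# The cross-year covariance estimates the persistent (signal) variance;
# shrink each raw estimate by the signal share of total variance.
signal_var = max(np.cov(year1, year2)[0, 1], 0.0)
shrunk = (signal_var / year1.var()) * year1

print(f"raw SD of 'effects': {year1.std():.3f}")   # ~0.20 despite no signal
print(f"shrunk SD:           {shrunk.std():.3f}")  # ~0: spurious variance gone
```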
Taken together, our results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings. They point to the possibility of the misidentification of sizable teacher “effects” where none exist. These effects may be due in part to spurious variation driven by the typically small samples of children used to estimate a teacher's individual effect.
Abstract: Estimates of teacher “value-added” suggest teachers vary substantially in their ability to promote student learning. Prompted by this finding, many states and school districts have adopted value-added measures as indicators of teacher job performance. In this paper, we conduct a new test of the validity of value-added models. Using administrative student data from New York City, we apply commonly estimated value-added models to an outcome teachers cannot plausibly affect: student height. We find the standard deviation of teacher effects on height is nearly as large as that for math and reading achievement, raising obvious questions about validity. Subsequent analysis finds these “effects” are largely spurious variation (noise), rather than bias resulting from sorting on unobserved factors related to achievement. Given the difficulty of differentiating signal from noise in real-world teacher effect estimates, this paper serves as a cautionary tale for their use in practice.
6 Discussion
Schools and districts across the country want to employ teachers who can best help students to learn, grow, and achieve academic success. Identifying such individuals is integral to schools' successbutis also difficult to do in practice. In the face of data and measurement limitations, school leaders and state education departments seek low-cost, unbiased ways to observe and monitor the impact that their teachers have on students. Although many have criticized the use of VAMs to evaluate teachers, they remain a widely-used measure of teacher performance. In part, their popularity is due to convenience-while observational protocols which send observers to every teacher's classroom require expensive training and considerable resources to implement at scale, VAMs use existing data and can be calculated centrally at low cost. Further, VAMs are arguably less biased than many other evaluation methods that districts might use instead (Bacher-Hicks et al. 2017; Harris et al. 2014; Hill et al. 2011).
Yet questions remain about the reliability, validity, and practical use of VAMs. This paper interrogates concerns raised by prior research on VAMs and raises new concerns about the use of VAMs in career and compensation decisions. We explore the bias and reliability of commonlyestimated VAMs by comparing estimates of teacher value-added in mathematics and ELA with parallel estimates of teacher value-added on a well-measured biomarker that teachers should not impact: student height. Using administrative data from New York City, we find estimated teacher “effects”on height that are comparable in magnitude to actual teacher effects on math and ELA achievement, 0.22:compared to 0.29:and0.26:respectively. On its face, such results raise concerns about the validity of these models.
Fortunately, subsequent analysis finds that teacher effects on height are primarily noise, rather than bias due to sorting on unobserved factors. To ameliorate the effect of sampling error on value-added estimates, analysts sometimes “shrink” VAMs, scaling them by their estimated signal-to-noise ratio. When we apply the shrinkage method across multiple years of data from Kane and Staiger (2008), the persistent teacher “effect”on height goes away, becoming the expected (and known) mean of zero. This procedure is not always done in practice, however, and requires multiple years of classroom data for the same teachers to implement. Of course, for making hiring and firing decisions, it seems important to consider that value added measures which require multiple years of data to implement will likely permit identification of persistently bad teachers, but not provide a performance evaluation metric that can be met by teachers trying to improve their performance. In more realistic settings where the persistent effect is not zero, it is less clear that shrinkage would have a major influence on performance decisions, since it has modest effects on the relative rankings of teachers.
Taken together, our results provide a cautionary tale for the naïve application of VAMs to teacher evaluation and other settings. They point to the possibility of the misidentification of sizable teacher “effects” where none exist. These effects may be due in part to spurious variation driven by the typically small samples of children used to estimate a teacher's individual effect.
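The small-sample point can be seen in a minimal simulation (my own construction, not the paper's model): give teachers no true effect on a standardized outcome, and the spread of estimated class means alone mimics a sizable teacher "effect".

import numpy as np

rng = np.random.default_rng(1)
n_teachers, class_size = 1000, 25
# Standardized student heights; teachers have no true effect by construction.
heights = rng.normal(0.0, 1.0, size=(n_teachers, class_size))
teacher_means = heights.mean(axis=1)
# The SD of the estimated "effects" is about 1/sqrt(class_size) = 0.20,
# the same order of magnitude as the 0.22 reported above.
print(round(float(teacher_means.std()), 3))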
Political imagery on money relates to less political freedom & more gender inequality; more scientific & agricultural images on money relate to less economic, political freedom, human development & gender equality.
Using currency iconography to measure institutional quality. Kerianne Lawson. The Quarterly Review of Economics and Finance, Volume 72, May 2019, Pages 73-79. https://doi.org/10.1016/j.qref.2018.10.006
Highlights
• Countries with more political imagery on their money tend to have less political freedom and more gender inequality.
• More scientific & agricultural images on money correspond with less economic freedom, political freedom, human development & gender equality.
• More art related and cultural images on money correspond with higher economic freedom and human development index scores.
• More religious images on money correspond with less political freedom. And for OPEC countries, also less economic freedom.
• For non-Commonwealth nations, images of women on money correspond with more political freedom, human development & gender equality.
Abstract: The images on a country’s currency are purposefully chosen by the people or government to be representative of that country. Potentially, one could learn a lot about the economic and political climate of a country by simply looking at the pictures on its money. This paper reports indexes measuring the political, religious, and cultural/scientific content as well as the representation of women on currency notes. The analysis suggests that we can look to the iconography in currency as an indication of the quality of the institutions or socio-economic outcomes in that country.
2. Survey of related literature
The iconographic analysis of currency notes is a much-discussed topic in disciplines outside economics. Sociologists, historians, anthropologists, and many others have looked at the images on currency notes to discuss the social and political environment within a country. There is even a ‘Bank Note of the Year’ contest held by the International Bank Note Society, which is decided by vote. Voters evaluate the “artistic merit, design, use of [color], contrast, balance, and security features of each nomination” (IBNS Banknote of the Year, 2018). It is widely accepted that the images on a country’s currency hold significance, so they are worthy of discussion.
Most of the iconographic work on money looks at a single country’s currency: Denmark (Sorensen, 2016), Ghana (Fuller, 2008), Indonesia (Strassler, 2009), Laos (Tappe, 2007), Scotland (Penrose & Cumming, 2011), Palestine (Wallach, 2011), Taiwan (Hymans & Fu, 2017), and the former Soviet Union (Cooper, 2009).
While most scholars have looked at a country’s currency at a point in time, Schwartz (2014) examined the change in images on Chinese currency from the 1940s to the 1990s. In particular, he examined the use of classical socialist, including Soviet, imagery on the Yuan. An oncoming train, for example, symbolizes the inevitability of the socialist revolution. Peasants and workers looking off into the distance reflected another theme in Soviet art: common people wistfully looking toward a promising socialist future. Mao’s absence on the Yuan note until after his death might be attributed to communist ideas around money, and Mao himself said the achievements of an individual should not be glorified. However, this is in direct contradiction with basically every other form of media, which painted Mao as practically a divine being. Schwartz argues that keeping his face off of the currency was a strategic decision to dissociate him from the state and maintain his image as an ally to the masses.
Hymans (2004) investigated the evolution of currency iconography in Europe from the 19th century to the era of the Euro. He found that there were iconographic changes over time that reflected the social changes and trends we know from history. And yet, there are few iconographic differences across the European countries at any point in time. Hymans suggests that the images on European countries’ currencies were probably not used as propaganda toward their own citizens, but rather to mirror the values of their neighbors, to legitimize themselves and fit in with a broader, collective European identity.
An unrelated strand of literature within economics has been trying to find new, unconventional ways to measure social or economic conditions. Chong, La Porta, Lopez-de-Silanes, and Shleifer (2014) mailed letters to nonexistent business addresses and then graded government efficiency by how promptly, if at all, the letters were returned. The goal was to create an objective measure of government efficiency across the 159 countries observed. They found that their measures of efficiency correlated with other indicators of government quality.
Henderson, Storeygard, and Weil (2012) used satellite images of nighttime lights to estimate economic activity. They found that their estimates differed by only a few percentage points from the official data. However, their method allowed for more specific regional and international analysis that was not possible with conventional ways of collecting these data.
Fisman and Miguel (2007) measured government corruption by recording foreign diplomats’ parking violations in New York City. Thanks to diplomatic immunity, foreign diplomats may legally ignore parking tickets, but many still pay them voluntarily. Diplomats from highly corrupt countries did in fact pay less than those from less corrupt countries. Thus, parking ticket payments may serve as a proxy for the cultural norms of the home country.
This paper attempts to unite the literature on the iconography of currency with the literature using unconventional methods to measure country-level characteristics. The images found on a nation’s currency may be indicators of socio-economic conditions or underlying institutional quality. Unlike the previous literature, which looks at why a currency’s iconography has changed over time in a certain country or region, this project seeks to answer the question: is currency iconography a good indicator of institutional quality for all countries?
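As a rough illustration of the kind of analysis this question calls for, the sketch below correlates an imagery index with a freedom score. The data and the index itself are invented for illustration; they are not the paper's dataset, coding scheme, or estimator.

import numpy as np

# Invented data: share of banknote images that are political, per country,
# and a 0-10 political-freedom score (purely illustrative values).
political_imagery = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 0.2, 0.6, 0.8])
political_freedom = np.array([8.5, 7.0, 5.5, 4.0, 2.5, 7.5, 5.0, 3.0])

# The paper's headline pattern would show up here as a negative correlation.
r = np.corrcoef(political_imagery, political_freedom)[0, 1]
print(f"correlation: {r:.2f}")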
Narcissistic individuals were no better at accurately identifying other narcissists, but such individuals demonstrated considerable aversion to narcissistic faces
The Relation Between Narcissistic Personality Traits and Accurate Identification of, and Preference for, Facially Communicated Narcissism. Mary M. Medlin, Donald F. Sacco, Mitch Brown. Evolutionary Psychological Science, December 3 2019. https://link.springer.com/article/10.1007/s40806-019-00224-x
Abstract: When evaluating someone as a potential social acquaintance, people prefer affiliative, pleasant individuals. This necessitates the evolution of perceptual acuity in distinguishing between genuinely prosocial traits and those connoting exploitative intentions. Such intentions can be readily inferred through facial structures connoting personality, even in the absence of other diagnostic cues. We sought to explore how self-reported narcissism, a personality constellation associated with inflated self-views and exploitative intentions, might facilitate one’s ability to detect narcissism in others’ faces as means of identifying social targets who could best satisfy potential exploitative goals. Participants viewed pairs of male and female targets manipulated to connote high and low levels of narcissism before identifying which appeared more narcissistic and indicating their preference among each pair. Narcissistic individuals were no better at accurately identifying other narcissists, but such individuals demonstrated considerable aversion to narcissistic faces. Women higher in exploitative narcissism additionally preferred narcissistic female faces, while men high in exploitative narcissism demonstrated similar patterns of aversion toward narcissistic male faces. Findings provide evidence that narcissistic individuals may adaptively avoid those whom they identify as having similar exploitative behavior repertoire, though when considering the exploitive dimension of narcissism specifically, sex differences emerged.
Keywords: Narcissism Face perception Personality preference Evolutionary psychology
Detecting Personality from Faces
A growing body of literature has demonstrated that people can accurately determine personality information from facial cues (Little and Perrett 2007; Parkinson 2005; Sacco and Brown 2018). Additionally, individuals are able to make these judgments based on limited exposure to others’ faces, sometimes in as little as 50 ms (Borkenau et al. 2009; Penton-Voak et al. 2006; Zebrowitz and Collins 1997). The human ability to infer personality traits based on a single, cursory glance at another’s face may have evolved to facilitate efficient identification of cooperative and exploitative conspecifics, motivating adaptive approach and avoidance behavior, respectively (Borkenau et al. 2004; Sacco and Brown 2018; Zebrowitz and Collins 1997). For example, upon identifying the genuinely affiliative intentions in faces possessing extraverted facial structures, individuals consistently prefer such structures, particularly when motivated to seek affiliative opportunities (Brown et al. 2019a). Conversely, the recognition of facial structures connoting exploitative intentions (e.g., psychopathy) elicits considerable aversion from perceivers (Brown et al. 2017). This efficient identification of affiliative and exploitative conspecifics could thus expedite the avoidance of persons with greater intention to harm others (Haselton and Nettle 2006).
Interpersonal Dynamics of Narcissism
Individuals known to attempt social espionage include those with personality types related to more manipulative behavioral repertoires, including those high in narcissism. In fact, highly narcissistic individuals are particularly motivated to present themselves in a positive light toward others in the service of acquiring access to social capital (Rauthmann 2011). These deceptive interpersonal tactics have been shaped by an evolutionary arms race, wherein narcissistic individuals seek to deceive group members, with such group members subsequently evolving greater capacity to recognize those likely to exploit them (Cosmides and Tooby 1992). Given that narcissistic individuals are especially prone to cheating (Baughman et al. 2014), it would thus be adaptive to identify narcissistic individuals preemptively to reduce the likelihood of falling victim to their exploitation. Indeed, narcissism is readily inferred through various interpersonal behaviors, including dressing provocatively (Vazire et al. 2008) and heightened selfie-taking (e.g., McCain et al. 2016). These inferences are additionally possible through facial features, with narcissism possessing a specific facial structure (Holtzman 2011). Given narcissistic individuals’ ability to mask their intentions through the absence of clear affective cues of manipulative intent, people may subsequently rely on facial structures to detect any such intention in an attempt to avoid those capable of inflicting considerable interpersonal costs, such as those associated with narcissism.
There is no doubt that associating with narcissistic individuals is costly, especially for cooperative individuals who fully participate in group living. However, an association with a narcissist may be even more costly for another narcissist. Narcissistic individuals are, by their very nature, interpersonally dominant and unlikely to be exploited by others (Cheng et al. 2010), which would position them to reap the benefits from social competitions with others (e.g., access to resources; Jonason et al. 2015). This could suggest that one narcissist could be a potential threat to another in their pursuit of social resources if both individuals have similar exploitative aspirations. This recognition of threat could be particularly critical in the mating arena, given that the presence of more narcissistic individuals would result in a reduction in short-term mating opportunities (Holtzman & Strube 2010). For this reason, it would be expected for narcissists to demonstrate aversion toward other narcissists in the service of reducing competition for resources and mates among those utilizing similarly exploitative interpersonal strategies.
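To make the paradigm's scoring concrete, here is a small Python sketch using invented participant-level data; the variable names, effect sizes, and scoring rules are illustrative assumptions of mine, not the study's actual data or analysis.

import numpy as np

rng = np.random.default_rng(2)
n = 150
# Self-reported narcissism (z-scored), invented for illustration.
narcissism = rng.normal(size=n)
# Per-participant scores from the forced-choice pairs: accuracy is the share
# of pairs where the high-narcissism face was correctly identified;
# preference is the share of pairs where that face was preferred.
accuracy = 0.55 + rng.normal(scale=0.10, size=n)            # near chance for all
preference = 0.45 - 0.05 * narcissism + rng.normal(scale=0.10, size=n)

# The abstract's pattern: accuracy unrelated to narcissism; preference lower
# (greater aversion) among more narcissistic participants.
print(round(float(np.corrcoef(narcissism, accuracy)[0, 1]), 2))    # ~ 0
print(round(float(np.corrcoef(narcissism, preference)[0, 1]), 2))  # negative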
For years, SAT developers & administrators have declined to say that the test measures intelligence, despite the fact that the SAT can trace its roots through the Army Alpha & Beta tests, & others
What We Know, Are Still Getting Wrong, and Have Yet to Learn about the Relationships among the SAT, Intelligence and Achievement. Meredith C. Frey. J. Intell. 2019, 7(4), 26; December 2 2019, https://doi.org/10.3390/jintelligence7040026
Abstract: Fifteen years ago, Frey and Detterman established that the SAT (and later, with Koenig, the ACT) was substantially correlated with measures of general cognitive ability and could be used as a proxy measure for intelligence (Frey and Detterman, 2004; Koenig, Frey, and Detterman, 2008). Since that finding, replicated many times and cited extensively in the literature, myths about the SAT, intelligence, and academic achievement continue to spread in popular domains, online, and in some academic administrators. This paper reviews the available evidence about the relationships among the SAT, intelligence, and academic achievement, dispels common myths about the SAT, and points to promising future directions for research in the prediction of academic achievement.
Keywords: intelligence; SAT; academic achievement
2. What We Know about the SAT
2.1. The SAT Measures Intelligence
Although the principal finding of Frey and Detterman has been established for 15 years, it bears repeating: the SAT is a good measure of intelligence [1]. Despite scientific consensus around that statement, some are remarkably resistant to accept the evidence of such an assertion. In the wake of a recent college admissions cheating scandal, Shapiro and Goldstein reported, in a piece for the New York Times, “The SAT and ACT are not aptitude or IQ tests” [6]. While perhaps this should not be alarming, as the authors are not experts in the field, the publication reached more than one million subscribers in the digital edition (the article also appeared on page A14 in the print edition, reaching hundreds of thousands more). And it is false, not a matter of opinion, but rather directly contradicted by evidence.
For years, SAT developers and administrators have declined to call the test what it is; this despite the fact that the SAT can trace its roots through the Army Alpha and Beta tests and back to the original Binet test of intelligence [7]. This is not to say that these organizations directly refute Frey and Detterman; rather, they are silent. On the ETS website, the word intelligence does not appear on the pages containing frequently asked questions, the purpose of testing, or the ETS glossary. If one were to look at the relevant College Board materials (and this author did, rather thoroughly), there are no references to intelligence in the test specifications for the redesigned SAT, the validity study of the redesigned SAT, the technical manual, or the SAT understanding scores brochure.
Further, while writing this paper, I entered the text “does the SAT measure intelligence” into the Google search engine. Of the first 10 entries, the first (an advertisement) was a link to the College Board for scheduling the SAT, four were links to news sites offering mixed opinions, and fully half were links to test prep companies or authors, who all indicated the test is not a measure of intelligence. This is presumably because acknowledging the test as a measure of intelligence would decrease consumers’ belief that scores could be vastly improved with adequate coaching (even though there is substantial evidence that coaching does little to change test scores). One test prep book author’s blog was also the “featured snippet”, or the answer highlighted for searchers just below the ad. In the snippet, the author made the claims that “The SAT does not measure how intelligent you are. Experts disagree whether intelligence can be measured at all, in truth” [8]—little wonder, then, that there is such confusion about the test.
2.2. The SAT Predicts College Achievement
Again, an established finding bears repeating: the SAT predicts college achievement, and a combination of SAT scores and high school grades offer the best prediction of student success. In the most recent validity sample of nearly a quarter million students, SAT scores and high school GPA combined offered the best predictor of first year GPA for college students. Including SAT scores in regression analyses yielded a roughly 15% increase in predictive power above using high school grades alone. Additionally, SAT scores improved the prediction of student retention to the second year of college [9]. Yet many are resistant to using standardized test scores in admissions decisions, and, as a result, an increasing number of schools are becoming “test optional”, meaning that applicants are not required to submit SAT or ACT scores to be considered for admission. But, without these scores, admissions officers lose an objective measure of ability and the best option for predicting student success.
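A minimal sketch of that incremental-validity comparison, using invented data (the simulated variables and their relationships are assumptions for illustration; the roughly 15% figure comes from the cited validity study, not from this toy example):

import numpy as np

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Invented stand-ins for high school GPA, SAT, and first-year college GPA,
# all driven by a shared ability factor plus independent noise.
rng = np.random.default_rng(3)
n = 5000
ability = rng.normal(size=n)
hs_gpa = ability + rng.normal(size=n)
sat = ability + rng.normal(size=n)
fy_gpa = ability + rng.normal(size=n)

r2_hs = r_squared(hs_gpa[:, None], fy_gpa)
r2_both = r_squared(np.column_stack([hs_gpa, sat]), fy_gpa)
# SAT adds predictive power because it carries ability signal beyond HSGPA.
print(f"HSGPA alone: R^2 = {r2_hs:.3f}; HSGPA + SAT: R^2 = {r2_both:.3f}")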
2.3. The SAT Is Important to Colleges
Colleges, even nonselective ones, need to identify those individuals whose success is most likely, because that guarantees institutions a consistent revenue stream and increases retention rates, seen by some as an important measure of institutional quality. Selective and highly selective colleges further need to identify the most talented students because those students (or, rather, their average SAT scores) are important for the prestige of the university. Indeed, the correlation between average SAT/ACT scores and college ranking in U.S. News & World Report is very nearly 0.9 [10,11].
2.4. The SAT Is Important to Students
Here, it is worth recalling the reason the SAT was used in admissions decisions in the first place: to allow scholarship candidates to apply for admission to Harvard without attending an elite preparatory school [7]. Without an objective measure of ability, admissions officers are left with assessing not just the performance of the student in secondary education, but also the quality of the opportunities afforded to that student, which vary considerably across the secondary school landscape in the United States. Klugman analyzed data from a nationally representative sample and found that high school resources are an important factor in determining the selectivity of colleges that students apply for, both in terms of programmatic resources (e.g., AP classes) and social resources (e.g., socioeconomic status of other students) [12]. It is possible, then, that relying solely on high school records will exacerbate rather than reduce pre-existing inequalities.
Of further importance, performance on the SAT predicts the probability of maintaining a 2.5 GPA (a proxy for good academic standing) [9]. Universities can be rather costly, and admitting students with little chance of success—students who stay until they either leave of their own accord or are removed for academic underperformance, with no degree to show for it and potentially large amounts of debt—is hardly the most just solution.
3. What We Get Wrong about the SAT

Nearly a decade ago, Kuncel and Hezlett provided a detailed rebuttal to four misconceptions about the use of cognitive abilities tests, including the SAT, for admissions and hiring decisions: (1) a lack of relationship to non-academic outcomes, (2) predictive bias in the measurements, (3) a problematically strong relationship to socioeconomic status, and (4) a threshold in the measures, beyond which individual differences cease to be important predictors of outcomes [13]. Yet many of these misconceptions remain, especially in opinion pieces, popular books, blogs, and more troublingly, in admissions decisions and in the hearts of academic administrators (see [14] for a review for general audiences).
3.1. The SAT Mostly Measures Ability, Not Privilege
SAT scores correlate moderately with socioeconomic status [15], as do other standardized measures of intelligence. Contrary to some opinions, the predictive power of the SAT holds even when researchers control for socioeconomic status, and this pattern is similar across gender and racial/ethnic subgroups [15,16]. Another popular misconception is that one can “buy” a better SAT score through costly test prep. Yet research has consistently demonstrated that it is remarkably difficult to increase an individual’s SAT score, and the commercial test prep industry capitalizes on, at best, modest changes [13,17]. Short of outright cheating on the test, an expensive and complex undertaking that may carry unpleasant legal consequences, high SAT scores are generally difficult to acquire by any means other than high ability.
That is not to say that the SAT is a perfect measure of intelligence, or only measures intelligence. We know that other variables, such as test anxiety and self-efficacy, seem to exert some influence on SAT scores, though not as much influence as intelligence does. Importantly, though, group differences demonstrated on the SAT may be primarily a product of these noncognitive variables. For example, Hannon demonstrated that gender differences in SAT scores were rendered trivial by the inclusion of test anxiety and performance-avoidance goals [18]. Additional evidence indicates some noncognitive variables—epistemic belief of learning, performance-avoidance goals, and parental education—explain ethnic group differences in scores [19] and variables such as test anxiety may exert greater influence on test scores for different ethnic groups (e.g., [20], in this special issue). Researchers and admissions officers should attend to these influences without discarding the test entirely.
Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design
Merely Possessing a Placebo Analgesic Reduced Pain Intensity: Preliminary Findings from a Randomized Design. Victoria Wai-lan Yeung, Andrew Geers, Simon Man-chun Kam. Current Psychology, February 2019, Volume 38, Issue 1, pp 194–203. https://link.springer.com/article/10.1007/s12144-017-9601-0
Abstract: An experiment was conducted to examine whether the mere possession of a placebo analgesic cream would affect perceived pain intensity in a laboratory pain-perception test. Healthy participants read a medical explanation of pain aimed at inducing a desire to seek pain relief and then were informed that a placebo cream was an effective analgesic drug. Half of the participants were randomly assigned to receive the cream as an unexpected gift, whereas the other half did not receive the cream. Subsequently, all participants performed the cold-pressor task. We found that participants who received the cream but did not use it reported lower levels of pain intensity during the cold-pressor task than those who did not receive the cream. Our findings constitute initial evidence that simply possessing a placebo analgesic can reduce pain intensity. The study represents the first attempt to investigate the role of mere possession in understanding placebo analgesia. Possible mechanisms and future directions are discussed.
Keywords: Placebo effect Mere possession Cold pressor Placebo analgesia Pain
Discussion
Past research has demonstrated that placebo analgesics can increase pain relief. The primary focus was on pain relief that occurred following the use of the placebo-analgesic treatment. We tested the novel hypothesis that merely possessing a placebo analgesic can boost pain relief. Consistent with this hypothesis, participants who received but did not use what they were told was a placebo-analgesic cream reported lower levels of pain intensity in a cold-pressor test than did participants who did not possess the cream. To our knowledge, the present data are the first to extend research on the mere-possession phenomenon (Beggan 1992) to the realm of placebo analgesia.
Traditional placebo studies have included both possessing and consuming: participants first possess an inert object, then they consume or use it and report diminished pain as a consequence (Atlas et al. 2009; de la Fuente-Fernández et al. 2001; Price et al. 2008; Vase et al. 2003). The current study provided initial evidence that consuming or using the placebo analgesic is unnecessary for the effect. However, it remains possible that the effect would be enhanced were possession to be accompanied by consumption or use. This and related hypotheses could be tested in future studies.
In the current experiment, we measured several different variables (fear of pain, dispositional optimism, desire for control, suggestibility, and trait anxiety) that could be considered potential moderators of the observed placebo-analgesia effect. However, none of them proved significant. Although we remain unsure of the processes responsible for the mere-possession effect we observed, a previously offered account may be applicable. Specifically, participants’ pain reduction may have been induced by a positive expectation of pain relief that was mediated by an elevated perception of self-efficacy in coping with pain (see Peck and Coleman 1991; Spanos et al. 1989). To directly test this possibility in further research, it would be important to measure participants’ self-perceived analgesic efficacy in relation to the mere-possession effect.
It is possible that the mere possession of what participants were told was an analgesic cream induced positive affect through the reception of a free gift. That affect may have influenced participants’ perceived pain intensity. In order to test this possibility, we looked more closely at an item in the State-Anxiety Subscale (Spielberger et al. 1983), specifically, “I feel happy”. Participants in the mere-possession condition did not feel happier (M = 2.47, SD = .96) than those in the no-possession condition (M = 2.80, SD = .70), t(37) = 1.22, p = .23, d = .38, CI95% = [−0.24, 1.00]. Nevertheless, since the participants completed the State-Anxiety Subscale after they received the cream and following the pain-perception test, in order to strictly delineate the effect of affect from other factors, future research should measure participants’ mood after they receive the cream and prior to the pain-perception test. In our study, participants’ pain reduction could not be attributed to the mere-exposure effect because participants in both conditions were initially exposed to the sample of the cream simultaneously. The only difference between the two conditions was that participants in the mere-possession condition were subsequently granted ownership of the sample cream, whereas participants in the no-possession condition were not.
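The reported statistics can be checked directly from the group means and SDs. In this sketch, the group sizes (19 and 20) are an assumption inferred from the reported df of 37; everything else comes from the numbers in the text.

import numpy as np

# Group statistics reported above (mere-possession vs. no-possession).
m1, s1, n1 = 2.47, 0.96, 19   # group sizes assumed from df = n1 + n2 - 2 = 37
m2, s2, n2 = 2.80, 0.70, 20

# Pooled SD, Cohen's d, and the independent-samples t statistic.
sd_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
d = (m2 - m1) / sd_pooled
t = d * np.sqrt(n1 * n2 / (n1 + n2))
print(f"d = {d:.2f}, t({n1 + n2 - 2}) = {t:.2f}")  # ~ d = 0.39, t(37) = 1.23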
A significant group difference in pain perception appeared in the analysis of the MPQ results but not in those from the VAS. There are at least two possible reasons for this outcome. First, prior researchers demonstrated that the VAS is sensitive to changes in perceived pain when participants are asked to continuously report their pain intensity (Joyce et al. 1975; Schafer et al. 2015). In our study, participants reported their pain intensity only once. Whether a significant group difference would be observed if the VAS were administered several times within the 1-min immersion duration is presently unknown. Second, it should be noted that the VAS may not be sensitive to Asians’ pain perception (Yokobe et al. 2014). No similar observation has been made about results from the use of the MPQ.
Our findings add to the placebo-analgesia literature by indicating potential directions for further research, including limitations of our study that will need to be considered. First, we induced participants to seek the reduction of pain and to anticipate the effectiveness of the placebo. Doing so may have optimized the incidence of the mere-possession effect. Second, although our data demonstrated that the effect we observed was not due to a positive feeling in response to receiving a free gift, future studies might involve a control condition in which the gift is not purported to relieve pain. Third, our participants were healthy university students of Chinese ethnicity. Prior research has shown that cultural background influences pain perception (Callister 2003; Campbell and Edwards 2012). Future researchers may extend the ethnic and cultural range of the participants in an effort to generalize the current findings. Moreover, it seems critical to conduct future research with clinical patients who are in demonstrable pain. Lastly, it is unclear whether the mere-possession effect extends to other types of pain-induction tasks, such as those involving heat (e.g., Mitchell et al. 2004; Duschek et al. 2009) or loud noise (e.g., Brown et al. 2015; Rose et al. 2014).
A message coming from behind is interpreted as more negative than a message presented in front of a listener; social information presented from behind is associated with uncertainty and lack of control
Rear Negativity: Verbal Messages Coming from Behind are Perceived as More Negative. Natalia Frankowska, Michal Parzuchowski, Bogdan Wojciszke, Michał Olszanowski, Piotr Winkielman. European Journal of Social Psychology, 29 November 2019. https://doi.org/10.1002/ejsp.2649
Abstract: Many studies have explored the evaluative effects of vertical (up/down) or horizontal (left/right) spatial locations. However, little is known about the role of information that comes from the front and back. Based on multiple theoretical considerations, we propose that spatial location of sounds is a cue for message valence, such that a message coming from behind is interpreted as more negative than a message presented in front of a listener. Here we show across a variety of manipulations and dependent measures that this effect occurs in the domain of social information. Our data are most compatible with theoretical accounts which propose that social information presented from behind is associated with uncertainty and lack of control, which is amplified in conditions of self‐relevance.
Excerpts:
General Discussion
Rear Negativity Effect in Social Domain
The present series of studies documents a “rear negativity effect” – a phenomenon whereby perceivers evaluate social information coming from a source located behind them as more negative than identical information coming from a source located in front of them. We observed this effect repeatedly for a variety of verbal messages (communications in a language incomprehensible to the listeners, neutral communications, positive or negative words spoken in participants’ native language), for a variety of dependent variables (ratings, reaction times), and among different subject populations (Poland, US). Specifically, in Study 1, Polish subjects interpreted Chinese sentences as more negative when they were presented behind the listener. In Study 2, Polish subjects evaluated feedback from a bogus test as indicative of poorer results when it was presented behind, rather than in front of, them. In Study 3, Polish subjects evaluated the Chinese sentences as the most negative when they were played from behind and when they supposedly described in-group (i.e., Polish) members. In Study 4, US subjects judged negative traits more quickly when the traits supposedly conveyed self-relevant information and were played behind the listener.
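For readers who want the Study 4 logic spelled out, the sketch below generates hypothetical reaction times for a 2 x 2 (location x self-relevance) design; all numbers are invented to mirror the reported pattern, not the study's data.

import numpy as np

rng = np.random.default_rng(4)
# Hypothetical reaction times (ms) for judging negative traits.
conditions = {
    ("front", "self"):   rng.normal(620, 60, 50),
    ("behind", "self"):  rng.normal(580, 60, 50),  # faster: rear negativity
    ("front", "other"):  rng.normal(615, 60, 50),
    ("behind", "other"): rng.normal(612, 60, 50),  # ~no effect absent self-relevance
}
for (location, relevance), times in conditions.items():
    print(location, relevance, round(float(times.mean()), 1))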
Explanations of the effect
The current research extends previous findings that ecological, naturally occurring sounds are detected more quickly and induce stronger negative emotions when presented behind participants (Asutay & Västfjäll, 2015). Critically, the current studies document this effect in the domain of social information and show it to be stronger for, or limited to, the processing of self-relevant information, whether this relevance was induced by reference of messages to the self or to an in-group. Our characterization of the “rear negativity” effect in the social domain is compatible with several considerations and theoretical frameworks. Most generally, the effect is consistent with a notion common in many cultures that things that take place “behind one’s back” are generally negative. However, accounts of why this is so vary – ranging from metaphor theory and simple links between processing ease and evaluation, to affordance and uncertainty theories, and attentional as well as emotion-appraisal accounts.
Spatial metaphors. People not only talk metaphorically, but also think metaphorically, activating mental representations of space to scaffold their thinking in a variety of non-spatial domains, including time (Torralbo et al., 2006), social dominance (Schubert, 2005), emotional valence (Meier & Robinson, 2004), similarity (Casasanto, 2008), and musical pitch (Rusconi et al., 2006). Thus, it is interesting to consider how our results fit with spatial metaphor theories. Specifically, perhaps when people hear a message, they activate a metaphor and, as a result, evaluate the information as being more unpleasant, dishonest, disloyal, false, or secretive when coming from behind than from the front. Our results suggest that reasons for the rear negativity of verbal information go beyond simple metaphorical explanation. This is because this negativity occurs solely or is augmented for information that is personally relevant to the listener, and that it occurs even in paradigms that require fast automatic processing, leaving little time for activation of a conceptual metaphor. Of course, once the valence-to-location mapping is metaphorically established, it could manifest quickly and be stronger for personally-important information. In short, it would be useful to conduct further investigation of the metaphorical account, perhaps by manipulating the degree of metaphor activation, its specific form or its relevance.
Associative learning and cultural interpretations. The valence-location link could have metaphorical origins but could also result from an individual’s personal experiences that create a mental association (Casasanto, 2009). One could potentially examine an individual’s personal history and her cultural setting and see whether a rear location has linked to negative events. Specifically, everyday experiences could lead to a location-valence association. For example, during conversations an individual may have encountered more high-status people in front of her rather than behind, thus creating an association of respect and location. Or, the individual may have experienced more sounds from behind that are associated with criticism or harassment rather than compliments. Beyond an individual’s own associative history, there is also culture. For example, European cultures used to have a strong preference for facing objects of respect (e.g., not turning your back to the monarch, always facing the church altar). As a result, sounds coming from behind may be interpreted as coming from sources of less respect. More complex interpretative processes may also be involved. As discussed in the context of Study 3, hearing from behind from an out-group about one’s own group can increase the tendency to attribute negative biases to the outgroup. It can lead then to interpreting the outgroup’s utterances as being more critical or even threatening, especially when such utterances are negative (e.g. Judd et al., 2005; Yzerbyt, Judd, & Muller, 2009). However, these speculations are clearly post-hoc and further research is needed to understand the full pattern of results. Simila [cut here!!!!]
Fluency. One simple mechanistic explanation of the current results draws on the idea that difficult (disfluent) processing lowers stimulus evaluations, while easy (fluent) processing enhances evaluations (Winkielman et al., 2003). People usually listen to sounds positioned in front. So, it is possible that sounds coming from behind are perceived as more negative because they are less fluent (or less familiar). However, fluency, besides increasing the experience of positive affect, is also manifested through the speed of processing (i.e. fluent stimuli are recognized faster). Yet, it is worth mentioning that in Study 4 we did not observe the effect of location on overall reaction times. Moreover, previous research suggests that if anything, information presented from behind is processed faster (Asutay & Västfjäll, 2015). For these reasons, and because the effect is limited to self-relevant information, the fluency approach does not explain the presented effects. However, future research may consider a potential of fluency manipulations to reduce the rear negativity effect.
Affordances. Yet another possible explanation draws on classic affordance theory suggesting that the world is perceived not only in terms of objects and their spatial relationships, but also in terms of one’s possible actions (Gibson, 1950, 1966). Thus, verbal information located in the back may restrict possible actions to the listener and hence may cause negative evaluation. However, this explanation is weakened by our observations that reward negativity effect also appears when participants are seated and blindfolded, so they cannot see in the front. Further examination of this account could include a set-up that involves restricting participants’ hands or using virtual reality to manipulate perspective and embodied affordances.
Rear Negativity Effect in the Social Domain
The present series of studies documents a “rear negativity effect”: perceivers evaluate social information coming from a source located behind them as more negative than identical information coming from a source located in front of them. We observed this effect repeatedly across a variety of verbal messages (communications in a language incomprehensible to the listeners, neutral communications, and positive or negative words spoken in the participants’ native language), across different dependent variables (ratings, reaction times), and in different participant populations (Poland, US). Specifically, in Study 1, Polish participants interpreted Chinese sentences as more negative when the sentences were played from behind them. In Study 2, Polish participants evaluated feedback from a bogus test as indicative of poorer results when it was presented behind, rather than in front of, them. In Study 3, Polish participants evaluated the Chinese sentences as most negative when the sentences were played from behind and supposedly described in-group (i.e., Polish) members. In Study 4, US participants judged negative traits more quickly when the traits supposedly described self-relevant information and were played behind the listener.
Explanations of the effect
The current research extends previous findings that ecological, naturally occurring sounds are detected more quickly and induce stronger negative emotions when presented behind participants (Asutay & Västfjäll, 2015). Critically, the current studies document this effect in the domain of social information and show that it is stronger for, or limited to, the processing of self-relevant information, whether that relevance was induced by reference of the messages to the self or to an in-group. Our characterization of the “rear negativity” effect in the social domain is compatible with several considerations and theoretical frameworks. Most generally, the effect is consistent with a notion common in many cultures that things that take place “behind one’s back” are generally negative. However, accounts of why this is so vary, ranging from metaphor theory, through simple links between processing ease and evaluation, to affordance and uncertainty theories, as well as attentional and emotion-appraisal accounts.
Spatial metaphors. People not only talk metaphorically but also think metaphorically, activating mental representations of space to scaffold their thinking in a variety of non-spatial domains, including time (Torralbo et al., 2006), social dominance (Schubert, 2005), emotional valence (Meier & Robinson, 2004), similarity (Casasanto, 2008), and musical pitch (Rusconi et al., 2006). It is therefore interesting to consider how our results fit with spatial metaphor theories. Specifically, perhaps when people hear a message, they activate a metaphor and, as a result, evaluate the information as more unpleasant, dishonest, disloyal, false, or secretive when it comes from behind than from the front. Our results suggest, however, that the reasons for the rear negativity of verbal information go beyond a simple metaphorical explanation. The negativity occurs only for, or is augmented by, information that is personally relevant to the listener, and it emerges even in paradigms that require fast, automatic processing, leaving little time for the activation of a conceptual metaphor. Of course, once a valence-to-location mapping is metaphorically established, it could manifest quickly and could be stronger for personally important information. In short, it would be useful to investigate the metaphorical account further, perhaps by manipulating the degree of metaphor activation, its specific form, or its relevance.
Associative learning and cultural interpretations. The valence-location link could have metaphorical origins, but it could also result from an individual’s personal experiences that create a mental association (Casasanto, 2009). One could examine an individual’s personal history and cultural setting to see whether rear locations have been linked to negative events. Specifically, everyday experiences could establish a location-valence association. For example, during conversations an individual may have encountered more high-status people in front of her than behind her, creating an association between location and respect. Or the individual may have experienced more sounds from behind associated with criticism or harassment than with compliments. Beyond an individual’s own associative history, there is also culture. For example, European cultures historically showed a strong preference for facing objects of respect (e.g., not turning one’s back to the monarch, always facing the church altar). As a result, sounds coming from behind may be interpreted as coming from less respected sources. More complex interpretive processes may also be involved. As discussed in the context of Study 3, hearing an out-group speak about one’s own group from behind can increase the tendency to attribute negative biases to that out-group. This can then lead to interpreting the out-group’s utterances as more critical or even threatening, especially when those utterances are negative (e.g., Judd et al., 2005; Yzerbyt, Judd, & Muller, 2009). However, these speculations are clearly post hoc, and further research is needed to understand the full pattern of results.
Fluency. One simple mechanistic explanation of the current results draws on the idea that difficult (disfluent) processing lowers stimulus evaluations, whereas easy (fluent) processing enhances them (Winkielman et al., 2003). People usually listen to sounds coming from sources in front of them, so it is possible that sounds coming from behind are perceived as more negative because they are less fluent (or less familiar). However, fluency, besides increasing the experience of positive affect, also manifests in the speed of processing (i.e., fluent stimuli are recognized faster). Yet in Study 4 we did not observe an effect of location on overall reaction times. Moreover, previous research suggests that, if anything, information presented from behind is processed faster (Asutay & Västfjäll, 2015). For these reasons, and because the effect is limited to self-relevant information, the fluency account does not explain the present effects. Still, future research could consider the potential of fluency manipulations to reduce the rear negativity effect.
Affordances. Yet another possible explanation draws on classic affordance theory, which suggests that the world is perceived not only in terms of objects and their spatial relationships, but also in terms of one’s possible actions (Gibson, 1950, 1966). On this view, verbal information located behind the listener may signal restricted possibilities for action and hence elicit negative evaluation. However, this explanation is weakened by our observation that the rear negativity effect also appears when participants are seated and blindfolded, so that they cannot see what is in front of them. Further examination of this account could include set-ups that restrict participants’ hands or use virtual reality to manipulate perspective and embodied affordances.