Younger, Dissatisfied, and Unhealthy - Relative Age in Adolescence. L. Fumarco, S. Baert, F. Sarracino. Economics & Human Biology, January 30 2020, 100858. https://doi.org/10.1016/j.ehb.2020.100858
Highlights
• The youngest students in a class are less satisfied with their life.
• They have worse general health.
• They have more frequent psychosomatic complaints and are more likely to be overweight.
Abstract: We investigate whether relative age (i.e. the age gap between classmates) affects life satisfaction and health in adolescence. We analyse data on students between 10 and 17 years of age from the international survey ‘Health Behaviour in School-Aged Children’ and find robust evidence that a twelve-month increase in relative age (i.e. the hypothetical maximum age gap between classmates) i) increases life satisfaction by 0.168 standard deviations, ii) increases self-rated general health by 0.108 standard deviations, iii) decreases psychosomatic complaints by 0.072 standard deviations, and iv) decreases the chances of being overweight by 2.4%. These effects are comparable in size to the effects of students’ household socio-economic status. Finally, gaps in life satisfaction are the only ones that narrow as absolute age increases, and only in countries where the first tracking of students occurs at 14 years of age or later.
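A back-of-the-envelope illustration of the abstract's estimates (not the authors' code; the helper and the example age gap are hypothetical): the reported standardized effects are per twelve months of relative age, so smaller gaps can be rescaled linearly.

# Illustrative rescaling of the standardized effects reported in the abstract
# (hypothetical helper; values are per twelve months of relative age).
EFFECTS_PER_12_MONTHS = {
    "life_satisfaction_sd": 0.168,
    "general_health_sd": 0.108,
    "psychosomatic_complaints_sd": -0.072,
    "overweight_probability_pct": -2.4,
}

def predicted_change(relative_age_gap_months: float) -> dict:
    """Linearly rescale the per-12-month effects to an arbitrary relative-age gap."""
    scale = relative_age_gap_months / 12.0
    return {k: round(v * scale, 3) for k, v in EFFECTS_PER_12_MONTHS.items()}

# Example: a 6-month increase in relative age.
print(predicted_change(6.0))
# {'life_satisfaction_sd': 0.084, 'general_health_sd': 0.054,
#  'psychosomatic_complaints_sd': -0.036, 'overweight_probability_pct': -1.2}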
Thursday, January 30, 2020
Neanderthal genes might have helped Homo sapiens adjust to life beyond Africa, influencing skin pigmentation towards fairer skin & thereby increasing life expectancy (at a cost of more skin cancer)
Women with fair phenotypes seem to confer a survival advantage in a low UV milieu. A nested matched case control study. Pelle G. Lindqvist et al. PLOS ONE, January 30, 2020. https://doi.org/10.1371/journal.pone.0228582
Abstract
Background Sun exposure in combination with skin pigmentation is the main determinant for vitamin D status. Human skin color seems to be adapted and optimized for regional sun ultraviolet (UV) intensity. However, we do not know if fair, UV-sensitive skin is a survival advantage in regions with low UV radiation.
Methods A population-based nested case–control study of 29,518 Caucasian women, ages 25 to 64 years from Southern Sweden who responded to a questionnaire regarding risk-factors for malignant melanoma in 1990 and followed for 25 years. For each fair woman, defined as having red hair or freckles (n = 11,993), a control was randomly selected from all non-fair women from within the cohort of similar age, smoking habits, education, marital status, income, and comorbidity, i.e., 11,993 pairs. The main outcome was the difference in all-cause mortality between fair and non-fair women in a low UV milieu, defined as living in Sweden and having low-to-moderate sun exposure habits. Secondary outcomes were mortality by sun exposure, and among those non-overweight.
Results In a low UV milieu, fair women were at a significantly lower all-cause mortality risk as compared to non-fair women (log rank test p = 0.04), with an 8% lower all-cause mortality rate (hazard ratio [HR] = 0.92, 95% CI 0.84‒1.0), despite a 59% greater risk of dying from skin cancer among fair women (HR 1.59, 95% CI 1.26‒2.0). Thus, it seems that the beneficial health effects of low skin coloration outweigh the risk of skin cancer at high latitudes.
Conclusion In a region with a low UV milieu, evolution seems to improve all-cause survival by selecting for a fair skin phenotype, i.e., fair women have a survival advantage.
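As a rough aid to reading the reported estimates (a minimal sketch, not the authors' analysis, which used a matched design with log-rank tests): the Wald relation HR = exp(beta) and CI = exp(beta ± 1.96·SE) links a reported hazard ratio and confidence interval to the underlying log-hazard scale.

import math

def hr_and_ci(beta, se, z=1.96):
    """Hazard ratio and Wald 95% CI from a log-hazard coefficient and its SE."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

def se_from_ci(lower, upper, z=1.96):
    """Approximate SE of the log hazard ratio recovered from a reported 95% CI."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# All-cause mortality, fair vs. non-fair women: HR 0.92, 95% CI 0.84-1.0.
se = se_from_ci(0.84, 1.0)
print(round(se, 3))                    # ~0.044 on the log-hazard scale
print(hr_and_ci(math.log(0.92), se))   # reproduces roughly (0.92, 0.84, 1.00)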
Discussion
Women with a fair UV-sensitive phenotype living in a low UV milieu had a significantly increased life expectancy as compared to non-fair women. Fair women were at an eight percent lower all-cause mortality rate, as compared to those with non-fair skin. There is a strong inverse dose-dependent relationship between increasing sun-exposure habits and all-cause mortality.
Strengths and limitations
Our large sample, comprising 20% of all women aged 25 to 64 in the south Swedish region, drawn by random selection from the population registry at the study inception in 1990, is a strength. It was thus a representative sample of the South Swedish population at the time of recruitment, before the large immigration of the 2000’s. It comprises almost exclusively European Caucasian women. Thus, the comparison between fair and non-fair was mainly a comparison between Fitzpatrick type 1 skin vs. types 2‒3 skin. Since the questionnaire was administered at the inception of the study, there was no recall bias. Since we have previously been criticized that our adjustments in Cox regression might not be adequate, we decided to perform a one-to-one matched design. Historically, during evolution there was no possibility to use a solarium or to travel for sunbathing. Therefore, we predetermined that the main outcome comparison would be made in a low UV milieu, i.e., among those with low-to-moderate sun exposure habits. As a secondary outcome we assessed mortality by sun exposure with adjustment for exercise or stratified for low BMI, including only the time period after year 2000. A major limitation is that the lower risk of all-cause mortality among fair women was close to the 5% significance level in all analyses regarding skin type, although it was in accordance with the predetermined hypothesis. Another strength is that the analyses from year 2000, including exercise habits and BMI, showed similar results, but with wider CIs. The results might not generalize to regions with more intense UV radiation. The aim of the study was not to assess cause-specific mortality. However, it is impossible to publish on beneficial effects of sun exposure without including data on skin cancer mortality. Thus, our study is in agreement with the large number of papers showing an increased incidence of skin cancer with fair skin, and we also showed increased mortality from skin cancer. Since fair skin is selected at high latitudes, an improved all-cause survival is also expected from an evolutionary perspective [2]. Frost and coworkers reported in an open internet-based study that red-haired women were particularly prone to ovarian, uterine, cervical, and colorectal cancer; our results could not reproduce these findings, and we did not find an increased incidence of these cancers among fair women in our study [15]. There has been somewhat conflicting evidence regarding sun exposure and all-cause mortality. The Swedish Women´s Lifestyle and Health Study reported that increased sun exposure (measured as sunbathing holidays, i.e., one of our four questions) was related to reduced HRs for all-cause mortality [16]. On the other hand, a large US epidemiological study based on regional, not personal, UV radiation reported a positive relation between increasing UV radiation and all-cause mortality [17]. A possible explanation for the opposing results might be the differences in latitude and, therefore, UV intensity (Sweden latitude 55° to 59°; continental US latitude 24° to 42°). While the mean level of vitamin D, a biomarker of sun exposure, was 48.6 (± 20.5) nmol/L in Sweden, it was 77.0 (± 25.0) nmol/L in the US, indicating a greater problem with sun deficiency at high latitudes [9, 18]. Based on data from the Swedish Meteorological and Hydrological Institute (SMHI), in 2014 there was one day with strong UV exposure, i.e., UV index ≥ 6.
Skin cancer mortality
We investigated whether the increased mortality associated with skin cancer influenced the strong inverse relationship between all-cause mortality and increasing sun exposure habits and found that this was not the case. Women with fair skin were at a 59% increased risk of death from skin cancer. This was counterbalanced by the health benefits, as measured by all-cause mortality, of fair skin and sun exposure. There is an increased risk of skin cancer with both fair skin and increasing sun exposure, but the prognosis of skin cancer seems to improve with increasing sun exposure [19, 20]. Thus, there seems to be a tradeoff between health benefits and skin cancer, and in regions with a scarcity of solar UV radiation fair skin has been selected [2]. In our modern society it is not unusual to find a mismatch between skin coloration and geography/climate/habits that might cause increased morbidity and mortality [2].
Sun exposure and overweight
Overweight and obese women do not seem to obtain the same benefit from having fair skin or from sun exposure as non-overweight women. We have seen similar findings in prior studies, where the lower risk of type 2 diabetes mellitus and endometrial cancer after UV exposure was mainly seen in non-overweight women [21, 22]. Wortsman and coworkers have clearly demonstrated that obesity has a detrimental effect on vitamin D levels for a given amount of UV exposure [23]. Thus, lower sun exposure habits among the overweight are not the cause. It appears that vitamin D is either produced in a smaller quantity or consumed/inactivated among overweight women. Further, a study using Mendelian randomization analysis showed that increasing BMI leads to lower vitamin D levels [24]. The differential impact of BMI by sun exposure on all-cause mortality is an area that would benefit from additional research. Since BMI seems to be in the causal pathway between sun exposure and all-cause mortality, we chose not to adjust for BMI and present only stratified analyses.
It has been hypothesized that the interbreeding with Neanderthals some 47,000 to 65,000 years ago in northern Canaan might have helped Homo sapiens adjust to life beyond Africa [25–27]. Studies of the ancient Neanderthal genome have shown that Westerners carry approximately 1% to 3% of Neanderthal DNA [25, 26]. People of European origin are highly likely (≈ 60% to 70%) to carry the Neanderthal DNA that affects keratin filaments, i.e., zinc finger protein basonuclin-2 (BNC2). The latter alleles are thought to be involved in the adaptive variation of skin coloration, influencing skin pigmentation towards fairer skin [6, 28]. With our finding of increased life expectancy with fair skin, we speculate that the preserved high carriership of the Neanderthal BNC2 allele might be an advantage at high latitudes.
We interpret our findings as supporting that a fair, UV-sensitive phenotype in Sweden seems to be related to prolonged life expectancy in a low UV milieu, but at the cost of an increased risk of death due to skin cancer. Over thousands of years a fair, UV-sensitive phenotype has possibly been selected for optimal health at high latitudes.
Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly & popular literature; we find no threshold beyond which greater IQ ceases to be beneficial
Brown, Matt, Jonathan Wai, and Christopher Chabris. 2020. “Can You Ever Be Too Smart for Your Own Good? Linear and Nonlinear Effects of Cognitive Ability.” PsyArXiv. January 30. https://psyarxiv.com/rpgea/
Abstract: Despite a longstanding expert consensus about the importance of cognitive ability for life outcomes, contrary views continue to proliferate in scholarly and popular literature. This divergence of beliefs among researchers, practitioners, and the general public presents an obstacle for evidence-based policy and decision-making in a variety of settings. One commonly held idea is that greater cognitive ability does not matter or is actually harmful beyond a certain point (sometimes stated as either 100 or 120 IQ points). We empirically test these notions using data from four longitudinal, representative cohort studies comprising a total of 48,558 participants in the U.S. and U.K. from 1957 to the present. We find that cognitive ability measured in youth has a positive association with most occupational, educational, health, and social outcomes later in life. Most effects were characterized by a moderate-to-strong linear trend or a practically null effect (mean R² = .002 to .256). Although we detected several nonlinear effects, they were small in magnitude (mean incremental R² = .001). We found no support for any detrimental effects of cognitive ability and no evidence for a threshold beyond which greater scores cease to be beneficial. Thus, greater cognitive ability is generally advantageous—and virtually never detrimental.
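A minimal simulation sketch of the kind of linear-versus-nonlinear comparison the abstract describes (hypothetical data and names, not the authors' code): the incremental R² is the gain from adding a nonlinear (here quadratic) term over the linear model, and it stays near zero when the true relation is linear.

import numpy as np

def r_squared(y, X):
    """OLS R^2 with an intercept column prepended to the predictors."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1.0 - residuals.var() / y.var()

rng = np.random.default_rng(0)
ability = rng.normal(100, 15, 5_000)                  # simulated test scores
outcome = 0.02 * ability + rng.normal(0, 1, 5_000)    # purely linear relation by construction

r2_linear = r_squared(outcome, ability)
r2_quadratic = r_squared(outcome, np.column_stack([ability, ability ** 2]))
print(r2_linear, r2_quadratic - r2_linear)            # incremental R^2 is near zero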
The vast majority of dogs and cats were reported to remember past events; both species reportedly remembered single-occurrence events that happened years ago
Pet memoirs: The characteristics of event memories in cats and dogs, as reported by their owners. Amy Lewis, Dorthe Berntsen. Applied Animal Behaviour Science, Volume 222, January 2020, 104885. https://doi.org/10.1016/j.applanim.2019.104885
Highlights
• The vast majority of dogs and cats were reported to remember past events.
• Both species reportedly remembered single-occurrence events that happened years ago.
• The events were diverse and often involved an interaction with an animal or person.
• They were often recalled when current external stimuli overlapped with the memory.
Abstract: The case for episodic memory in non-human animals has been intensely debated. Although a variety of paradigms have shown elements of episodic memory in non-human animals, research has focused on rodents, birds and primates, using standardized experimental designs, limiting the types of events that can be investigated. Using a novel survey methodology to address memories in everyday life, we conducted two studies asking a total of 375 dog and cat owners if their pet had ever remembered an event, and if so, to report on their pet’s memory of the event. In both studies, cats and dogs were reported to remember a variety of events, with only 20% of owners reporting that their pet had never remembered an event. The reported events were often temporally specific and were remembered when commonalities (particularly location) occurred between the current environment and the remembered event, analogous to retrieval of involuntary memories in humans.
Keywords: Event memory; Episodic-like memory; Dog cognition; Cat cognition; Involuntary autobiographical memory
Domestic dogs respond correctly to verbal cues issued by an artificial agent; generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved
Domestic dogs respond correctly to verbal cues issued by an artificial agent. Nicky Shaw, Lisa M. Riley. Applied Animal Behaviour Science, January 30 2020, 104940. https://doi.org/10.1016/j.applanim.2020.104940
Highlights
• Domestic dogs can recall to an artificial agent and respond correctly to its pre-recorded, owner spoken verbal cues as reliably as to their owners in person and while alone in the test room.
• Generalisation of previously learned behaviours to the novel agent in all conditions was rapidly achieved.
• No behavioural indicators of poor welfare were recorded during interaction with the agent directly.
Abstract: Human-canine communication technology for the home-alone domestic dog is in its infancy. Many criteria need to be fulfilled in order for successful communication to be achieved remotely via artificial agents. Notably, the dogs’ capacity for correct behavioural responses to unimodal verbal cues is of primary consideration. Previous studies of verbal cues given to dogs alone in the test room have revealed a deterioration in correct behavioural responses in the absence of a source of attentional focus and reward. The present study demonstrates the ability of domestic pet dogs to respond correctly to an artificial agent. Positioned at average human eye level to replicate typical human-dog interaction, the agent issues a recall sound followed by two pre-recorded, owner spoken verbal cues known to each dog, and dispenses food rewards for correct behavioural responses. The agent was used to elicit behavioural responses in three test conditions; owner and experimenter present; experimenter present; and dog alone in the test room. During the fourth (baseline) condition, the same cues were given in person by the owner of each dog. The experiments comprised a familiarisation phase followed by a test phase of the four conditions, using a counterbalanced design. Data recorded included latency to correct response, number of errors before correct response given and behavioural welfare indicators during agent interaction. In all four conditions, at least 16/20 dogs performed the correct recall, cue 1 response, and cue 2 response sequence; there were no significant differences in the number of dogs who responded correctly to the sequence between the four conditions (p = 0.972). The order of test conditions had no effect on the dogs’ performances (p = 0.675). Significantly shorter response times were observed when cues were given in person than from the agent (p = 0.001). Behavioural indicators of poor welfare recorded were in response to owners leaving the test room, rather than as a direct result of agent interaction. Dogs left alone in the test room approached and responded correctly to verbal cues issued from an artificial agent, where rapid generalisation of learned behaviours and adjustment to the condition was achieved.
Keywords: Dog; Dog-human communication; Dog training; Unimodal verbal cues; Artificial agent; Welfare
The claims that the confidence people have in their memory is weakly related to its accuracy and that false memories of fictitious childhood events can be easily implanted rest on shaky foundations: memory is malleable but essentially reliable
Regaining Consensus on the Reliability of Memory. Chris R. Brewin, Bernice Andrews, Laura Mickes. Current Directions in Psychological Science, January 30, 2020. https://doi.org/10.1177/0963721419898122
Abstract: In the last 20 years, the consensus about memory being essentially reliable has been neglected in favor of an emphasis on the malleability and unreliability of memory and on the public’s supposed unawareness of this. Three claims in particular have underpinned this popular perspective: that the confidence people have in their memory is weakly related to its accuracy, that false memories of fictitious childhood events can be easily implanted, and that the public wrongly sees memory as being like a video camera. New research has clarified that all three claims rest on shaky foundations, suggesting there is no reason to abandon the old consensus about memory being malleable but essentially reliable.
Keywords: false memory, memory accuracy, confidence, lay beliefs
Academic dishonesty—to cheat, fabricate, falsify, and plagiarize in an academic context—is positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, & honesty-humility
Plessen, Constantin Y., Marton L. Gyimesi, Bettina M. J. Kern, Tanja M. Fritz, Marcela Victoria Catalán Lorca, Martin Voracek, and Ulrich S. Tran. 2020. “Associations Between Academic Dishonesty and Personality: A Pre-registered Multilevel Meta-analysis.” PsyArXiv. January 30. doi:10.31234/osf.io/pav2f
Abstract: Academic dishonesty—the inclination to cheat, fabricate, falsify, and plagiarize in an academic context—is a highly prevalent problem with dire consequences for society. The present meta-analysis systematically examined associations between academic dishonesty and personality traits of the Big Five, the HEXACO model, Machiavellianism, narcissism, subclinical psychopathy, and the Dark Core. We provide an update and extension of the only meta-analysis on this topic by Giluk and Postlethwaite (2015), synthesizing in total 89 effect sizes from 50 studies—containing 38,189 participants from 23 countries. Multilevel meta-analytical modelling showed that academic dishonesty was positively correlated with the dark traits, and negatively correlated with openness, conscientiousness, agreeableness, and honesty-humility. The moderate-to-high effect size heterogeneity—ranging from I² = 57% to 91%—could only be partially explained by moderator analyses. The observed relationships appear robust with respect to publication bias and measurement error, and can be generalized to a surprisingly large scope (across sexes, continents, scales, and study quality). Future research needs to examine these associations with validated and more nuanced scales for academic dishonesty.
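For readers unfamiliar with the I² statistic cited above, here is a minimal sketch of classic DerSimonian-Laird random-effects pooling with I² (the paper itself used multilevel meta-analytic models; the input numbers below are hypothetical):

import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate, between-study variance tau^2, and I^2 (%)."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                        # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # DerSimonian-Laird between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, tau2, i2

# Hypothetical study-level correlations and their sampling variances:
print(dersimonian_laird([0.25, 0.40, 0.10, 0.35], [0.010, 0.020, 0.015, 0.012]))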
High emotion recognition ability may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information; also, high-ERA individuals do not appear to be happier with their lives
Inter- and Intrapersonal Downsides of Accurately Perceiving Others’ Emotions. Katja Schlegel. In: Social Intelligence and Nonverbal Communication pp 359-395, Jan 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_13
Abstract: The ability to accurately perceive others’ emotions from nonverbal cues (emotion recognition ability [ERA]) is typically conceptualized as an adaptive skill. Accordingly, many studies have found positive correlations between ERA and measures of social and professional success. This chapter, in contrast, examines whether high ERA can also have downsides, both for other people and for oneself. A literature review revealed little evidence that high-ERA individuals use their skill to hurt others. However, high ERA may inadvertently harm romantic and professional relationships when one perceives potentially disruptive information. Furthermore, high-ERA individuals do not appear to be happier with their lives than low-ERA individuals. Overall, the advantages of high ERA outweigh the downsides, but many open questions regarding negative effects remain to be studied.
Keywords: Emotion recognition; Emotional intelligence; Well-being; Dark side; Interpersonal accuracy
Summary and Conclusion
The ability to accurately recognize others’ emotions from the face, voice, and body is typically considered to be an adaptive skill contributing to social and professional success. This has been supported by various studies (see Schmid Mast & Hall, 2018; Hall et al., 2009; Elfenbein et al., 2007, for reviews). Much less research has looked into the potential downsides or disadvantages of high ERA for oneself (i.e., for one’s well-being) and for others (i.e., by manipulating other people or hampering smooth interactions with others). The present chapter reviewed this research in non-clinical adults, specifically focusing on the following questions: Is there a “dark” side to high ERA in that people use it to hurt others? Can high ERA negatively affect the quality of relationships? Why is high ERA uncorrelated with psychological well-being? Finally, is there an optimal level of ERA?
Although more research is clearly needed to answer these questions with more confidence, the current state of the literature suggests that ERA is a double-edged sword that affects one’s well-being and social outcomes both positively and negatively. One common theme that emerged as a possible explanation for both positive and negative pathways is the heightened emotional awareness of or attunement to others’ feelings in persons with high ERA. Because high-ERA individuals are more perceptive of others’ positive and negative emotions, their own emotions also appear to be more affected by what is happening around them, contributing to various inter- and intrapersonal outcomes.
For instance, high-ERA individuals seem to be more prosocial and cooperative, maybe in order to perceive more positive emotions in others and to preserve their own psychological well-being. Heightened emotional awareness for others’ feelings can also explain the positive associations between ERA and social and workplace effectiveness found in many studies. On the other hand, “hyperawareness” in high-ERA individuals can inadvertently contribute to lower rapport, less favorable impressions in others, and lower relationship quality due to “eavesdropping” and the failure to show “motivated inaccuracy” when it might be adaptive.
Because high emotional awareness appears to amplify the effects of perceived positive and negative emotions, in stable environments with only few stressors, the adaptive advantages of high ERA may outweigh the downsides. However, as adversity or instability increases, the higher proportion of perceived and experienced negative affect may contribute to lower well-being and the development of depressive symptoms. A higher tendency to suffer with others in distress might represent one possible mechanism negatively influencing psychological well-being.
Taken together, the various positive and negative pathways between high ERA and well-being as well as interpersonal relationships may explain why ERA does not appear to be positively correlated with well-being, although this had been found for emotional intelligence more broadly (e.g., Sánchez-Álvarez et al., 2015). One may speculate that other components of emotional intelligence such as the ability to regulate one’s own negative emotions efficiently or the ability to manage others’ emotions have fewer potential downsides than ERA with respect to one’s own well-being, although they may be more “useful” when it comes to manipulating others (e.g., Côté et al., 2011).
An interesting question is whether the terms “emotional hyperawareness” (e.g., Davis & Nichols, 2016) or “hypersensitivity” (Fiori & Ortony, 2016) are appropriate to describe high-ERA individuals. These terms are often used to describe an exaggerated, maladaptive reactivity of neurophysiological structures related to mental disorders (e.g., Frick et al., 2012; Neuner et al., 2010). In healthy individuals with high ERA, however, the elevated attunement to emotions might represent a more realistic and holistic view of the social world rather than a bias (Scherer, 2007). If this is the case, then the absence of a correlation between ERA and well-being or life satisfaction may also reflect that those high in ERA evaluate these constructs more realistically and thus more negatively, although they might be “happier” than others if different criteria were used. It may also be that high-ERA individuals, compared to low-ERA individuals, are relatively more satisfied with some life domains (e.g., friendships) and less satisfied with others (e.g., work), which may cancel each other out when global well-being or life satisfaction is considered.
The current literature can be expanded in several ways. In particular, more studies that examine the moderating effects of personality traits on the link between ERA and outcomes are needed. Specifically, traits related to the processing and regulation of emotions in oneself and others might moderate the effects of ERA not only on intrapersonal outcomes such as psychological well-being but also on interpersonal outcomes such as relationship quality. For example, it would be interesting to examine how ERA, empathic concern, and detachment interact in predicting stress, emotional exhaustion, or work engagement in helping professions. One can hypothesize that a high ability to detach oneself from stressful negative work experiences protects professionals who are highly perceptive of clients’ negative feelings and express empathic concern from negative effects on well-being. Other possible moderating variables include “positivity offset” (Ito & Cacioppo, 2005) and stable appraisal biases (Scherer, 2019). In addition, “dark” personality traits might moderate the effects on interpersonal behaviors such as deception, such that high ERA may, for example, amplify the effects of high Machiavellianism or trait exploitativeness (Konrath et al., 2014). Future studies should also look into curvilinear relationships to examine which levels of ERA are the most beneficial or detrimental for various outcomes and situations.
Furthermore, longitudinal studies may shed light on the causality underlying ERA and the development of psychological well-being over time as a function of a person’s environment. For example, it could be tested whether Wilson and Csikszentmihalyi’s (2007) finding that prosociality is beneficial in stable environments but detrimental in adverse ones also holds for ERA. Such studies would also allow investigating the causal pathways linking ERA and depressive symptoms, including testing the possibilities that dysphoria increases ERA (Harkness et al., 2005) and that ERA, due to a more realistic perception of the social world, makes people “wiser but sadder” (Scherer, 2007).
Many of the above conclusions rely on the assumption that high ERA relates to a higher attunement to emotions in our surroundings. However, only a few studies to date have examined this association. Fiori and Ortony (2016) and Freudenthaler and Neubauer (2007) pointed out that ability tests of emotional intelligence measure maximal performance and crystallized knowledge, but do not necessarily capture typical performance and more fluid emotion processing. More research is thus needed to corroborate the idea that being good at accurately labeling emotional expressions when one is explicitly instructed to do so is related to paying more attention to emotions in everyday life when an abundance of different types of information is available. Future research should involve the development of new standard tests tapping into typical performance regarding emotion perception. Future studies could also benefit from using methods such as portable eye tracking or experience sampling to be able to study more real-life situations. Finally, future studies may examine satisfaction in specific life domains as outcome measures of ERA in addition to general measures of well-being.
The current review also raises the question of whether available trainings for increasing ERA (see Blanch-Hartigan, Andrzejewski, & Hill, 2012, for a meta-analysis) are useful if high ERA can have detrimental effects. The answer may depend on what outcomes are considered. If an ERA training improves law enforcement officers’ job performance (Hurley, Anker, Frank, Matsumoto, & Hwang, 2014) or helps doctors to better understand their patients (Blanch-Hartigan, 2012), the answer would be that trainings are useful. When psychological well-being is considered as the outcome, stand-alone ERA trainings may not always be useful, for example, if a person is experiencing chronic stress or depressive symptoms. In these cases, it may be beneficial to combine an ERA training with a training targeted at the use of adaptive emotion regulation strategies to prevent potentially detrimental effects.
To conclude, I would like to emphasize that, overall, ERA should still be considered an adaptive and valuable skill, especially when effective interpersonal interactions in the workplace or close relationships are considered (e.g., reviews by Elfenbein et al., 2007; Schmid Mast & Hall, 2018). High-ERA individuals receive better ratings from others on various positive traits (e.g., socio-emotional competence) and report being more open, more conscientious, and more tolerant (Hall et al., 2009). The interpersonal downsides and “dark” aspects of high ERA in healthy adults discussed in the present chapter seem to be limited to relatively specific situations or ERA profiles, although more research is needed. With respect to psychological well-being, however, the picture seems to be more nuanced, implying both positive and negative pathways that may be more or less influential based on a person’s life situation and personality traits. More sophisticated study designs, novel data collection methods, and more complex statistical analyses can help us better understand these mechanisms.
Identifying deception matters greatly, yet we are really bad at it; instead, we developed the equivalent of an intelligence network that passes along information and evidence, rendering the need for an individual lie detector moot
Nonverbal Communication: Evolution and Today. Mark G. Frank, Anne Solbu. In: Social Intelligence and Nonverbal Communication pp 119-162, January 26 2020. https://link.springer.com/chapter/10.1007/978-3-030-34964-6_5
Abstract: One aspect of social intelligence is the ability to identify when others are being deceptive. It would seem that individuals who were bestowed with such an ability to recognize honest signals of emotion, particularly when attempts to suppress them are made, would have a reproductive advantage over others without it. Yet the research literature suggests that on average people are good at detecting only overt manifestations of these signals. We argue instead that our evolution as a social species living in groups permitted discovery of deceptive incidents due to the factual evidence of the deception transmitted verbally through social connections. Thus the same principles that pressed for our evolution as a cooperative social species enabled us to develop the equivalent of an intelligence network that would pass along information and evidence, thus rendering a press for an individual lie detector moot.
Keywords: Deception detection; Evolution; Emotions; Behavioral signals; Social life
Conclusion
Taken together, it is clear that there are strong signals for various emotions and intentions and a strong rationale for why these signals would be ‘engineered’ to solve a recurrent problem. And despite being wired to detect these signals, humans are poor detectors of these signals once they become subtle through efforts to conceal them. Yet this ability to spot these dishonest and/or subtle versions of the signals would seem to be of great benefit to any given individual in his or her quest to survive and pass on his or her genes to the next generation. This sense that evolution did not bestow our species with these internal event detectors seems puzzling, until we unpack some of the social structures of the ancient world. It seems the cooperative structures, and few (at least initially) opportunities to ‘cheat’, often may have allowed, in essence, an intelligence network to be developed where pejorative information could be passed along easily and cheaply to identify any particular cheater. Thus, the evolution of cooperative behavior was the key to lie-catching. It seems logical that there would be no strong independent press to develop internal cheater detectors, when a strong social network would do the job at a greatly reduced cost (Smith, 2010).
Importantly, lie detection in the laboratory or in single case studies does not fully translate to the real world, where gossip and relationships with others matter (Haidt, 2001). People rely on gossip, even when accuracy may be limited (Sommerfeld, Krambeck, & Milinski, 2008); it may nevertheless actually improve lie detection (Klein & Epley, 2015). Moreover, it is through the influence from others that we may decide to override our tendency to cooperate (Bear & Rand, 2016) and employ conscious deliberation to make our decisions (Haidt, 2001). The alignment of emotions through empathy, and increased goal sharing (Tomasello, Carpenter, Call, Behne, & Moll, 2005), as evidenced by the #MeToo movement (Rodino-Colocino, 2018), gave rise to the same powerful group thinking and sociality as seen in the emergence of human morality (Jensen, Vaish, & Schmidt, 2014). Haidt (2001) states “A group of judges independently seeking truth is unlikely to reach an effective consensus, but a group of people linked together in a large web of mutual influence may eventually settle into a stable configuration” (p. 826). This becomes, functionally, a long-range radar type system that has agents reporting back actions, behaviors, and relationships to each other, which in turn sets the groundwork for recognizing inconsistencies regarding people not being where they say they are, people being with people they deny knowing, and so forth. The presence of this communication network would reduce the need to make individuals hyper-vigilant in every interaction, or to individually develop super-acute deception detection skills. Likewise, unusual interpersonal behaviors can trigger individuals to search for evidence to verify their hypotheses about someone’s veracity, and they can then activate their social networks to verify the information provided by the unusually behaving person (Novotny et al., 2018). These networks are not just passive providers of information. Thus, the socially intelligent person is the one who has the best access to the collective intelligence—and likely the most friends, as believed by the Ugandans (Wober, 1974). We believe the research literature has neglected this larger system in which our social structures exist, which often detect the deception for us. Even as our society expands, social media and movements like #MeToo have become like the global village, where previously unacquainted individuals can now verify the truth or falsity of each other, thus (hopefully) betraying the attempted liar.