Gender Differences in Multitasking Experience and Performance. Kelvin F. H. Lui, Ken H. M. Yip, Alan C.-N. Wong. Quarterly Journal of Experimental Psychology, September 16, 2020. https://doi.org/10.1177/1747021820960707
Rolf Degen's take: https://twitter.com/DegenRolf/status/1306478689375248385
Abstract: There is a widespread stereotype that women are better at multitasking. Previous studies examining gender differences in multitasking used either a concurrent or a sequential multitasking paradigm and offered mixed results. The present study examined the possibility that men were better at concurrent multitasking while women were better at task switching. In addition, men and women were also compared in terms of multitasking experience, measured by computer monitoring software, a self-reported Media Use Questionnaire, a lab task-switching paradigm, and a self-reported Multitasking Prevalence Inventory. Results showed a smaller concurrent multitasking (dual-task) cost for men than women and no gender difference in sequential multitasking (task-switching) cost. Men had more experience in multitasking involving video games, while women were more experienced in multitasking involving music, instant messaging, and web surfing. The gender difference in dual-task performance, however, was not mediated by the gender differences in multitasking experience but was completely explained by differences in processing speed. The findings suggest that men have an advantage in concurrent multitasking, and that this may be a result of individual differences in cognitive abilities.
Keywords gender difference, multitasking, dual-task performance, task switching, experience
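The abstract's claim that the dual-task gender difference was "completely explained by" processing speed is a mediation claim. A minimal sketch of how such a claim is typically tested, in Python with statsmodels; the data file and variable names are illustrative, not from the paper:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data: gender (0/1), processing speed,
# and dual-task cost. The file name is made up for illustration.
df = pd.read_csv("multitasking_data.csv")

# Total effect: does gender predict dual-task cost on its own?
total = smf.ols("dual_cost ~ gender", data=df).fit()

# Direct effect: does the gender coefficient shrink toward zero
# once processing speed is added as a covariate?
direct = smf.ols("dual_cost ~ gender + speed", data=df).fit()

print(total.params["gender"], direct.params["gender"])
# "Completely explained" corresponds to the second coefficient being ~0
# while speed remains predictive; a bootstrap of the indirect effect
# would normally accompany this kind of analysis.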
Check also Moderate amounts of media multitasking are associated with optimal task performance and minimal mind wandering. Myoungju Shin, Astrid Linke, Eva Kemps. Computers in Human Behavior, Volume 111, October 2020, 106422. https://www.bipartisanalliance.com/2020/05/moderate-amounts-of-media-multitasking.html
Wednesday, September 16, 2020
Anger increases susceptibility to misinformation: Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger
Greenstein, M., & Franklin, N. (2020). Anger increases susceptibility to misinformation. Experimental Psychology, 67(3), 202–209. https://doi.org/10.1027/1618-3169/a000489
Abstract: The effect of anger on acceptance of false details was examined using a three-phase misinformation paradigm. Participants viewed an event, were presented with schema-consistent and schema-irrelevant misinformation about it, and were given a surprise source monitoring test to examine the acceptance of the suggested material. Between each phase of the experiment, they performed a task that either induced anger or maintained a neutral mood. Participants showed greater susceptibility to schema-consistent than schema-irrelevant misinformation. Anger did not affect either recognition or source accuracy for true details about the initial event, but suggestibility for false details increased with anger. In spite of this increase in source errors (i.e., misinformation acceptance), both confidence in the accuracy of source attributions and decision speed for incorrect judgments also increased with anger. Implications are discussed with respect to both the general effects of anger and real-world applications such as eyewitness memory.
Discussion
Anger is an approach-oriented emotion adapted to guide behavior under circumstances that involve time pressure and consequences for safety. Because rapid cognition and disinhibition would support effective action under these circumstances, we made three predictions for how anger would impact outcomes in a misinformation paradigm, as follows:
1. Reduced skepticism would increase susceptibility to postevent misinformation.
2. Disinhibition would be manifested as increased confidence.
3. More streamlined cognition would lead to faster responding.
We found evidence for all three of these outcomes. When time is of the essence, there is value in suppressing self-doubt to act quickly and decisively. While the experiment did not call for urgent action, the cognitive adaptations associated with anger should, as observed, impact cognition pervasively, suggesting that these findings reflect a broad cognitive style associated with anger. Furthermore, anger increased suggestibility for both schema-irrelevant and schema-consistent details, demonstrating its broad impact on cognition.
Interestingly, anger did not seem to impair memory for events that actually did occur, as it affected neither recognition memory nor source accuracy for details actually present in the original event. Instead, anger impaired the ability to dismiss errors that were subsequently suggested. This is consistent with the characterization of anger as streamlining cognition in support of action rather than additional reflection. Coupled with the observed rapid and confident memory decisions, this points to a constellation of risks associated with anger's impact on memory.
Because anger affected confidence and accuracy in opposite directions, it affected the confidence–accuracy relationship. The two are traditionally moderately correlated, as observed in the neutral condition. Anger, however, led to the opposite pattern, where increased confidence was associated with decreased accuracy. To the extent that people use expressed confidence to judge the reliability of other people's memory (Wells et al., 1979), this may be problematic. Because anger did not impair participants' source accuracy for events that actually had occurred in the Film, outside observers with corroborating evidence of those film details would have a basis for accepting the participants' confident additional claims. This is precisely the sort of situation that jurors are exposed to in a courtroom.
The current work has clear implications for witness memory. Crimes can induce anger (Matsumoto & Hwang, 2015) and are associated with a risk of highly consequential memory impairment (Deffenbacher et al., 2004). Following an incident, witness memory is subject to postevent input, such as cowitness accounts or leading questions, which can distort memory (e.g., Roediger et al., 2001; Wade et al., 2002). With each discussion or interview comes renewed opportunity for misinformation effects, which our results suggest will disproportionately impact angry witnesses. To the extent that an angry witness becomes more prone to errors of action (incorporating postevent information into memory) rather than inaction (rejecting new information), their memory reports may become particularly unreliable.
Indeed, for at least three reasons, the observed effects may underpredict those that would occur following a crime. First, criminal cases may involve more repetition of and elaboration upon postevent misinformation than occurred in this experiment. For instance, criminal cases that lead to prosecution generally involve multiple interviews of the same eyewitness (e.g., by police officers, detectives, and prosecutors). Separate from these, witnesses often also relate details of the incident to cowitnesses, family, and friends, and potentially receive incorrectly suggested details from these sources as well. Second, the anger experienced in a criminal case is likely directed at the source of the memory (the perpetrator) rather than at an incidental target (the experimenter). Although this work demonstrates that anger affects processing of content unrelated to the source of the anger, it is possible that the effects increase for related content. Third, the anger experienced by many victims and witnesses would likely exceed that of our laboratory participants. Inasmuch as the degree of anger affects the tendency to fall prey to these biases, the potentially angrier victims and other witnesses may be impacted more than our participants were. It is also possible for the above factors to interact with one another, further increasing risk in real-world situations.
With regard to schematicity, participants in both emotion induction conditions accepted more schema-consistent than schema-irrelevant misinformation. The streamlined cognitive processing style associated with anger did not increase preexisting tendencies to incorporate this type of information into memory (Kleider et al., 2008), although this may also reflect the high plausibility of both schematic and schema-irrelevant items. While this work intentionally sought to use only highly plausible misinformation, future research should continue to explore this question using less plausible misinformation.
The current findings, coupled with the frequency with which anger is experienced (Matsumoto & Hwang, 2015), call for a greater understanding of its effects on memory. Much is already known about memory's vulnerability to misinformation, and the current work finds that anger can increase the frequency of these errors as well as one's confidence in them. Applied to criminal contexts, a richly detailed witness report, combined with high confidence in the associated memory, can contribute to heightened perception of credibility in the eyes of cowitnesses, investigators, judges, and jurors (Wells et al., 1979). Thus, the dangers of anger, particularly in the justice system, where the errors have real consequences, appear to be more serious than previously understood.
Replication and Extension of Alicke (1985) Better-Than-Average Effect for Desirable and Controllable Traits
Replication and Extension of Alicke (1985) Better-Than-Average Effect for Desirable and Controllable Traits. Ignazio Ziano, Pui Yan (Cora) Mok, Gilad Feldman. Social Psychological and Personality Science, September 16, 2020. https://doi.org/10.1177/1948550620948973
Rolf Degen's take: https://twitter.com/DegenRolf/status/1306294580925607937
Abstract: People tend to regard themselves as better than average. We conducted a replication and extension of Alicke's classic study on trait dimensions in evaluations of self versus others with U.S. American Mechanical Turk workers in two waves (total N = 1,573; 149 total traits). We successfully replicated the trait desirability effect, such that participants rated more desirable traits as being more descriptive of themselves than of others (original: ηp² = .78, 95% confidence interval [CI] [.73, .81]; replication: sr² = .54, 95% CI [.43, .65]). The effect of desirability was stronger for more controllable traits (effect of Desirability × Controllability interaction on self–other-ratings difference; original: ηp² = .21, 95% CI [.12, .28]; replication: sr² = .07, 95% CI [.02, .12]). In an extension, we found that desirable traits were rated as more common for others, but not for the self. Thirty-five years later, the better-than-average effect appears to remain robust. All materials, data, and code are available at https://osf.io/2y6wj/.
Keywords: better-than-average effect, self-evaluation, comparative judgment, replication
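A note on the effect-size metric: the replication reports sr² (squared semipartial correlation), while the original reported ηp². For a given predictor, sr² can be recovered from standard regression output as sr² = t² × (1 − R²) / df_residual. A hedged sketch in Python with statsmodels; file and variable names are illustrative, not from the paper:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("replication_traits.csv")  # hypothetical trait-level data

# Self-other rating difference regressed on trait desirability and
# controllability, mirroring the Desirability x Controllability design.
res = smf.ols("self_other_diff ~ desirability * controllability", data=df).fit()

t = res.tvalues["desirability"]
sr2 = t**2 * (1 - res.rsquared) / res.df_resid  # df_resid = n - k - 1
print(sr2)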
Mortality risk halves during the period of incarceration, with large declines in murders, overdoses, and deaths from medical causes
Norris, Samuel and Pecenco, Matthew and Weaver, Jeffrey, The Effect of Incarceration on Mortality (July 2, 2020). SSRN: http://dx.doi.org/10.2139/ssrn.3644719
Abstract: This paper analyzes the effect of incarceration on mortality using administrative data from Ohio between 1992 and 2017. Using event study and difference-in-differences approaches, we compare mortality risk across incarcerated and non-incarcerated individuals before and after pre-scheduled releases from prison. Mortality risk halves during the period of incarceration, with large declines in murders, overdoses, and medical causes of death. However, there is no detectable effect on post-release mortality risk, meaning that incarceration increases overall longevity. We estimate that incarceration averts nearly two thousand deaths annually in the US, comparable to the 2014 Medicaid expansion.
Keywords: Incarceration, health, mortality, crime
2.2 Direct effects of incarceration on mortality
The first column of Panel A of Table 1 estimates the DiD specification from Equation 3 on mortality risk, measured in deaths per hundred thousand individuals annually. The coefficient on "Incarcerated in quarter," corresponding to β in the previous section, measures the direct effect of incarceration. We estimate that incarceration reduces mortality risk by 365 deaths per hundred thousand (p < 0.001) relative to the post-release mean of 622.4. This is a nearly sixty percent reduction in mortality risk and is approximately equal to the difference in mortality between smokers and non-smokers aged 45-54 (Banks et al., 2015).
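A minimal sketch of the kind of two-way fixed-effects difference-in-differences regression described above, assuming a person-quarter panel; the paper's Equation 3 may differ in controls and weighting, and all file and variable names here are illustrative:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per person per quarter, with 'mortality' a
# death indicator scaled to deaths per 100,000 and 'incarcerated' equal
# to 1 in quarters spent in prison.
df = pd.read_csv("ohio_panel.csv")

# Person and quarter fixed effects absorb time-invariant individual
# differences and common time shocks; standard errors cluster by person.
res = smf.ols("mortality ~ incarcerated + C(person_id) + C(quarter)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["person_id"]})
print(res.params["incarcerated"])

# The paper's estimate is roughly -365 per 100,000; against the
# post-release mean of 622.4, 365 / 622.4 ≈ 0.59, the "nearly sixty
# percent" reduction quoted above.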
We use detailed cause of death information to understand the factors underlying this effect. The most common non-medical cause of death in our sample is overdose (29% of post-release deaths), which approximately halves during the period of incarceration, declining by 99.8 deaths per hundred thousand (column 2). This reduction may reflect addiction treatment or more difficulty obtaining narcotics while incarcerated.
Contrary to popular portrayals of correctional facilities, murder and suicide are greatly reduced during the period of incarceration, though not completely eliminated (columns 3 and 6). Murder is particularly important since it is the third most common risk factor in this sample (19.4% of deaths post-release), highlighting the dangerous environment faced by the criminally involved outside of correctional facilities. The presence of correctional officers and lack of access to firearms while incarcerated both likely play a significant role; firearms are involved in 85% of homicides and 38% of suicides in our sample.
Inmates are constitutionally guaranteed medical care, and there may be changes to diet or lifestyle that affect mortality risk. We find a large reduction in deaths (55 per hundred thousand) due to medical causes during the period of incarceration (column 4 of Table 1). Panel B of Table 1 finds the gains mostly come from reductions in heart disease, infection, and non-classified causes. Even if the quality of prison medical care is suboptimal, many inmates receive better care than they would otherwise. For example, 79.9% of inmates with persistent medical problems reported being examined by a medical professional upon intake, and twice as many inmates with serious mental health conditions receive psychiatric medication during incarceration as compared to prior to arrest (Wilper et al., 2009).
In summary, we find that incarceration dramatically reduces mortality.
Tuesday, September 15, 2020
We were more likely to later claim that we knew the answers all along after having the opportunity to cheat to find the correct answers – relative to exposure to the correct answers without the opportunity to cheat
Cheaters claim they knew the answers all along. Matthew L. Stanley, Alexandria R. Stone & Elizabeth J. Marsh. Psychonomic Bulletin & Review (2020). Sep 15 2020. https://link.springer.com/article/10.3758/s13423-020-01812-w
Rolf Degen's take: https://twitter.com/DegenRolf/status/1306097666816909312
Abstract: Cheating has become commonplace in academia and beyond. Yet, almost everyone views themselves favorably, believing that they are honest, trustworthy, and of high integrity. We investigate one possible explanation for this apparent discrepancy between people’s actions and their favorable self-concepts: People who cheat on tests believe that they knew the answers all along. We found consistent correlational evidence across three studies that, for those particular cases in which participants likely cheated, they were more likely to report that they knew the answers all along. Experimentally, we then found that participants were more likely to later claim that they knew the answers all along after having the opportunity to cheat to find the correct answers – relative to exposure to the correct answers without the opportunity to cheat. These findings provide new insights into relationships between memory, metacognition, and the self-concept.
Exposure to a seemingly unhealthy consumer increases others' visual attention towards products perceived to be healthy; effect was stronger for products that managed to convey the impression of being healthy
Cereal Deal: How the Physical Appearance of Others Affects Attention to Healthy Foods. Tobias Otterbring, Kerstin Gidlöf, Kristian Rolschau & Poja Shams. Perspectives on Behavior Science, volume 43, pages 451–468 (2020). Feb 19 2020. https://link.springer.com/article/10.1007/s40614-020-00242-2
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305871898107142147
Abstract: This eye-tracking study investigated whether the physical appearance of another consumer can influence people’s visual attention and choice behavior in a grocery shopping context. Participants (N = 96) took part in a lab-based experiment and watched a brief video recording featuring a female consumer standing in front of a supermarket shelf. The appearance and body type of the consumer was manipulated between conditions, such that she was perceived as 1) healthy and of normal weight, 2) unhealthy by means of overweight, or 3) unhealthy through visual signs associated with a potentially unhealthy lifestyle, but not by means of overweight. Next, participants were exposed to a supermarket shelf with cereals and were asked to choose one alternative they could consider buying. Prior exposure to a seemingly unhealthy (vs. healthy) consumer resulted in a relative increase in participants’ visual attention towards products perceived to be healthy (vs. unhealthy), which prompted cereal choices deemed to be healthier. This effect was stronger for products that holistically, through their design features, managed to convey the impression that they are healthy rather than products with explicit cues linked to healthiness (i.e., the keyhole label). These results offer important implications regarding packaging design for marketers, brand owners, and policy makers. Moreover, the findings highlight the value of technological tools, such as eye-tracking methodology, for capturing consumers’ entire decision-making processes instead of focusing solely on outcome-based metrics, such as choice data or purchase behavior.
Bias Blind Spot (BBS) is the phenomenon that people tend to perceive themselves as less susceptible to biases than others; these authors could replicate the original findings
Free-will and self-other asymmetries in perceived bias and shortcomings: Replications of the Bias Blind Spot and extensions linking to free will beliefs. Prasad Chandrashekar et al. April 2020. DOI: 10.13140/RG.2.2.19878.16961/2
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305866994122731520
Description: Bias Blind Spot (BBS) is the phenomenon that people tend to perceive themselves as less susceptible to biases than others. In three pre-registered experiments (overall N = 969), we replicated two experiments of the first demonstration of the phenomenon by Pronin, Lin, and Ross (2002). We found support for the BBS hypotheses, with effects in line with findings in the original study: participants rated themselves as less susceptible to biases than others (d = -1.00 [-1.33, -0.67]) and as having fewer shortcomings (d = -0.34 [-0.46, -0.23]; difference between effects: d = -0.43 [-0.56, -0.29]). Extending the replications, we found that beliefs in own free will were positively associated with BBS (r ~ 0.17-0.22) and that beliefs in both self and general free will were positively associated with self-other asymmetry related to personal shortcomings (r ~ 0.16-0.24). Materials, datasets, and code are available at https://osf.io/3df5s/
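The d values above compare self-ratings with ratings of others made by the same participants. One common way to compute such a within-subject Cohen's d is sketched below with made-up numbers; the authors' exact variant may differ:

import numpy as np

# Each participant rates own susceptibility to bias and the average other's.
self_ratings = np.array([3.1, 2.8, 3.5, 2.9, 3.0])
other_ratings = np.array([4.2, 3.9, 4.1, 3.8, 4.0])

diffs = self_ratings - other_ratings
d = diffs.mean() / diffs.std(ddof=1)  # negative d: self rated less biased
print(round(d, 2))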
Relationship satisfaction mediates the association between perceived partner mate retention strategies and relationship commitment
Relationship satisfaction mediates the association between perceived partner mate retention strategies and relationship commitment. Bruna Nascimento & Anthony Little. Current Psychology (2020). Sep 12 2020. https://link.springer.com/article/10.1007/s12144-020-01045-z
Abstract: This study investigated whether relationship satisfaction mediates the association between own and perceived partner mate-retention strategies and commitment. One hundred and fifty individuals (Mage = 23.87, SDage = 7.28; 78.7% women) in a committed relationship participated in this study. We found an association between perceived partner mate-retention strategies and commitment and that relationship satisfaction mediated this link. Similarly, we found that relationship satisfaction also mediated the association between individuals’ own cost-inflicting strategies and commitment. Specifically, perceived partner benefit-provisioning strategies are positively associated with commitment through increased relationship satisfaction and, conversely, both perceived partner and own cost-inflicting strategies are negatively associated with commitment through decreased relationship satisfaction. Additionally, we observed that relationship satisfaction moderated the association between perceived partner cost-inflicting strategies and participants’ own frequency of cost-inflicting strategies. That is, participants’ cost inflicting strategies are associated with their partner’s cost inflicting strategies, such that this association is stronger among individuals with higher relationship satisfaction. The current research extends previous findings by demonstrating that the association between perceived partner and own mate-retention strategies and commitment is mediated by relationship satisfaction. Additionally, we showed that an individual’s expression of mate retention is associated with their perception of the strategies displayed by their partner, which also depends on relationship satisfaction.
Discussion
This study examined the association between own and perceived partner mate retention, relationship satisfaction, and commitment. Based on the Investment Model (Rusbult 1983), the game-theoretic model (Conroy-Beam et al. 2015), and previous literature (Shackelford and Buss 2000; Dainton 2015), we tested four hypotheses in this study. We anticipated that perceived partner mate-retention strategies would be associated with commitment (Hypothesis 1a) and that relationship satisfaction would mediate this association (Hypothesis 1b). Similarly, we anticipated that relationship satisfaction would mediate the association between own mate retention and commitment (Hypothesis 2). We also predicted that relationship satisfaction would moderate the association between perceived partner mate-retention strategies and an individual's own mate-retention strategies (Hypothesis 3).
Consistent with the first two Hypotheses, perceived partner mate-retention strategies were associated with individual’s commitment to the relationship (Hypothesis 1a), and relationship satisfaction mediated this association (Hypothesis 1b) as suggested by a previous study (Dainton 2015). Consistent with previous literature (Albert and Arnocky 2016; Shackelford and Buss 2000), our study found that benefit-provisioning strategies, such as appearance enhancement and expression of affection, enhance commitment to the relationship by improving relationship satisfaction. In contrast, cost-inflicting strategies, which include monopolising the partner’s time and violence and threats directed to rivals, are detrimental to relationship satisfaction, which in turn, reduces commitment (Dandurand and Lafontaine 2014).
Similarly, we found that relationship satisfaction mediates the association between individuals' own mate-retention strategies and commitment, but only for cost-inflicting strategies (Hypothesis 2). As suggested by previous literature, we found that individuals who display positive inducements more often tend to be more committed to their relationships (Buss et al. 2008; Dainton 2015), but this association was not explained by relationship satisfaction. On the other hand, our results suggest that individuals who engage in cost-inflicting strategies more frequently tend to experience lower relationship satisfaction, which is in turn associated with lower commitment to the relationship (Miguel and Buss 2010). These findings reinforce the idea that cost-inflicting strategies, whether performed by the individual or the partner, are linked to poorer relationship satisfaction and lower commitment.
As demonstrated in this study and supported by previous research, because relationship satisfaction and commitment are strong predictors of relationship dissolution (Le et al. 2010; Rhoades et al. 2010), these findings suggest that the usage of cost-inflicting strategies may negatively influence an individual’s likelihood to stay in the relationship. Although such strategies may be useful to some extent to keep mate poachers away, for example, they may have a negative influence on the quality of the relationship, and if used too often, they may lead to relationship dissolution. On the other hand, benefit-provisioning strategies seem to be the most useful strategies to preserve a relationship by maintaining higher relationship quality, which is associated with increased commitment to the relationship.
In the current study, relationship satisfaction was also associated with individuals’ own reporting of mate-retention strategies. Specifically, those individuals who are satisfied with their relationships tend to engage more often in benefit-provisioning strategies. On the other hand, lower relationship satisfaction is associated with higher frequency of cost-inflicting strategies. These findings are consistent with evolutionary theory, supporting the idea that relationship satisfaction works as a monitor of relationship quality (Conroy-Beam et al. 2015). As shown here, individuals who are happy in their relationships are more committed to the relationship and tend to engage more often in positive mate-retention strategies.
To test our third hypothesis, we examined whether relationship satisfaction moderates the relation between participants’ reporting of their partners’ mate-retention strategies and their own use of mate retention. We found that when individuals perceive their partners to engage more often in benefit-provisioning strategies, they tended to respond by engaging in similar positive strategies too. However, this association does not vary according to the level of relationship satisfaction. Thus, regardless of how happy individuals are with their relationships, if their partners treat them well, they will reciprocate, consistent with previous findings (Shackelford et al. 2005; Welling et al. 2012). It may also be the case that, consistent with homogamy, individuals tend to mate with individuals that perform similar mate-retention strategies to theirs.
Similarly, those participants who perceived their partners to conceal them and monopolise their time, for example, were more likely to engage in such cost-inflicting strategies themselves. However, relationship satisfaction altered this association, such that this association was stronger among individuals with higher relationship satisfaction. Thus, if individuals perceive that their partners are investing more in the relationship even if they do this by using cost-inflicting strategies, they tend to respond in a similar way by increasing their mate-retention efforts, especially if they perceive the quality of the relationship to be high. These findings also give partial support to the assumption that relationship satisfaction monitors relationship quality and as such, results in higher investment in the relationship (Conroy-Beam et al. 2016; Shackelford and Buss 1997). Therefore, Hypothesis 3 was confirmed only for cost-inflicting strategies, but not for benefit-provisioning strategies.
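Moderation of this kind is usually tested with an interaction term: own strategies regressed on perceived partner strategies, satisfaction, and their product. A hedged Python sketch with illustrative variable names, not the authors' code:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mate_retention.csv")  # hypothetical per-participant data

# '*' expands to both main effects plus their interaction; a positive
# interaction coefficient matches the finding that the partner-to-own
# association in cost-inflicting strategies strengthens with satisfaction.
res = smf.ols("own_cost ~ partner_cost * satisfaction", data=df).fit()
print(res.params["partner_cost:satisfaction"])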
One limitation of this study is the sex-imbalanced sample, which did not allow for comparisons across sexes. Future research could investigate how the patterns found here vary across sexes, because men and women use mate-retention strategies differently: men tend to engage more often in strategies such as resource display than women do, whereas women tend to engage more often in strategies such as appearance enhancement in comparison to men (Albert and Arnocky 2016). A second limitation is the non-probability and convenience nature of the sample (i.e., non-random internet recruitment, so participants are self-selected), which can limit the generalisability of our findings. Another limitation of note is that we relied on people's reports of their partners' behaviour, and we have no way of identifying the extent to which their perception corresponds to reality. However, previous literature has demonstrated that individuals' self-reports of their mate-retention behaviours are congruent with their partners' reports of their mate-retention behaviours, demonstrating that individuals can provide reliable accounts of their partners' mate-retention strategies (Shackelford et al. 2005). Moreover, people's perceptions of their partners' behaviour may be more important than their actual behaviour in predicting relationship satisfaction and commitment (see Montoya et al. 2008). This is another potential area for future research, where studies could obtain reports from both partners to address this. Finally, the current study only explored mate-retention strategies among heterosexual individuals. Given that sexual orientation influences the performance of mate-retention strategies (Brewer and Hamilton 2014), future studies should address homosexual relationship dynamics.
Despite these limitations, the current research extends previous findings on the association between mate retention, relationship satisfaction and commitment. Additionally, partners’ mate-retention strategies appear to be mutually related, such that partners respond to each other’s strategies, which also depends on relationship satisfaction. Our findings suggest that different mate-retention strategies have different levels of effectiveness, by demonstrating that whereas benefit-provisioning strategies are associated with high relationship satisfaction, which, in turn, is associated with high commitment, cost-inflicting strategies do so negatively. Although our findings suggest that cost-inflicting strategies are damaging for the relationship, individuals may still find them useful in specific situations under the threat of imminent infidelity, for example, otherwise individuals would not rely on them. Despite the existence of cost-inflicting strategies, however, benefit-provisioning strategies appear to be more effective in maintaining stable and satisfying relationships.
For the 2020-2021 graduate admissions cycle, the University of Chicago English Department is accepting only applicants interested in working in and with Black Studies
Department of English Language and Literature, The University of Chicago. Jul 2020. https://english.uchicago.edu/?fbclid=IwAR1vW5HOB42Rf6q5ETmvR9k2iWRFnLtJOKXU7y_BUnBb0GxwIqrdSsMTmck
[...]
Faculty Statement (July 2020)
The English department at the University of Chicago believes that Black Lives Matter, and that the lives of George Floyd, Breonna Taylor, Tony McDade, and Rayshard Brooks matter, as do thousands of others named and unnamed who have been subject to police violence. As literary scholars, we attend to the histories, atmospheres, and scenes of anti-Black racism and racial violence in the United States and across the world. We are committed to the struggle of Black and Indigenous people, and all racialized and dispossessed people, against inequality and brutality.
For the 2020-2021 graduate admissions cycle, the University of Chicago English Department is accepting only applicants interested in working in and with Black Studies. We understand Black Studies to be a capacious intellectual project that spans a variety of methodological approaches, fields, geographical areas, languages, and time periods. For more information on faculty and current graduate students in this area, please visit our Black Studies page.
The department is invested in the study of African American, African, and African diaspora literature and media, as well as in the histories of political struggle, collective action, and protest that Black, Indigenous and other racialized peoples have pursued, both here in the United States and in solidarity with international movements. Together with students, we attend both to literature’s capacity to normalize violence and derive pleasure from its aesthetic expression, and ways to use the representation of that violence to reorganize how we address making and breaking life. Our commitment is not just to ideas in the abstract, but also to activating histories of engaged art, debate, struggle, collective action, and counterrevolution as contexts for the emergence of ideas and narratives.
English as a discipline has a long history of providing aesthetic rationalizations for colonization, exploitation, extraction, and anti-Blackness. Our discipline is responsible for developing hierarchies of cultural production that have contributed directly to social and systemic determinations of whose lives matter and why. And while inroads have been made in terms of acknowledging the centrality of both individual literary works and collective histories of racialized and colonized people, there is still much to do as a discipline and as a department to build a more inclusive and equitable field for describing, studying, and teaching the relationship between aesthetics, representation, inequality, and power.
In light of this historical reality, we believe that undoing persistent, recalcitrant anti-Blackness in our discipline and in our institutions must be the collective responsibility of all faculty, here and elsewhere. In support of this aim, we have been expanding our range of research and teaching through recent hiring, mentorship, and admissions initiatives that have enriched our department with a number of Black scholars and scholars of color who are innovating in the study of the global contours of anti-Blackness and in the equally global project of Black freedom. Our collective enrichment is also a collective debt; this department reaffirms the urgency of ensuring institutional and intellectual support for colleagues and students working in the Black studies tradition, alongside whom we continue to deepen our intellectual commitments to this tradition. As such, we believe all scholars have a responsibility to know the literatures of African American, African diasporic, and colonized peoples, regardless of area of specialization, as a core competence of the profession.
We acknowledge the university's and our field's complicated history with the South Side. While we draw intellectual inspiration from the work of writers deeply connected to Chicago's south side, including Ida B. Wells, Gwendolyn Brooks, Lorraine Hansberry, and Richard Wright, we are also attuned to the way that the university has been a vehicle of intellectual and economic opportunity for some in the community, and a site of exclusion and violence for others. Part of our commitment to the struggle for Black lives entails vigorous participation in university-wide conversations and activism about the university's past and present role in the historically Black neighborhood that houses it.
Division of the Humanities, The University of Chicago
Egypt: Employed women report more coital frequency, more occurrence of spontaneous desire and being more able to obtain orgasm than unemployed women
Does Employment Affect Female Sexuality? Enas H. Abdallah, Ihab Younis, Hala M. Elhady. Benha University Medical Journal, Sep 2020. DOI: 10.21608/bmfj.2020.18498.1119
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305786559690571777
Abstract:
Introduction: Female sexual dysfunction (FSD) is a multifactorial condition that has anatomical, physiological, medical, psychological, and social components. With the increasing participation of women in the workforce and the competing demands of work and family, the metaphor of work-family conflict (WFC) as an increasing pressure in professional life has emerged. WFC appears to affect women more than men, owing to greater overload and stress.
Aim of the work: To compare female sexuality between employed and unemployed women.
Subjects and methods: This was a cross-sectional study of sexually active married women. The tool of the study was a self-report questionnaire.
Results: Employed women reported higher coital frequency than unemployed ones (60.2% vs. 39.4%, respectively). Spontaneous desire was reported by 41% of employed women to occur once per week, compared to 34.7% of unemployed ones. Among employed women, 38.2% could reach orgasm in almost all their sexual encounters, compared to 12.7% of unemployed ones. Among unemployed women, 10.4% reported sexual pain, compared to 3.6% of employed ones.
Conclusion: Employed women have better sexual functioning than unemployed ones: they report higher coital frequency, more frequent spontaneous desire, and greater ability to obtain orgasm.
Keywords: employment, sexual dysfunction, women.
---
In the present study, 10.4% of the unemployed group reported sexual pain compared to 3.6% of the employed group. This is consistent with a study which found that unemployment was a significant risk factor for reporting sexual problems (desire problems in 60% and pain problems in 36.8%) [24]. This result disagrees with another study [4], which reported sexual pain in 26.7% of working women and suggested a strong relation between job stress, anxiety, and sexual dysfunction.
In our study, 7.2% of employed women experienced sexual harassment several times at the workplace. This finding is consistent with Cochran and co-workers [25], who revealed a reporting rate as low as 2%. In contrast, in a study based on more than 86,000 respondents in the US, 58% of women reported having experienced potentially harassing behavior and 24% reported having experienced sexual harassment at work [26].
Creative individuals are better than their peers at identifying uncreative products; expert ratings of the quality of a creative product are driven more by the ability to identify low quality work as opposed to high quality work
Are Creative People Better than Others at Recognizing Creative Work? Steven E. Stemler, James C. Kaufman. Thinking Skills and Creativity, September 15, 2020, 100727. https://doi.org/10.1016/j.tsc.2020.100727
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305757801172684800
Highlights
• Rating paradigms often focus on identifying the “Best” candidate, product, or solution
• Highly creative individuals are better than their peers at identifying uncreative products
• Rater creativity was not related to the ability to recognize highly creative products
• Expert ratings of the quality of a creative product are driven more by the ability to identify low quality work as opposed to high quality work
• Ruling out the least creative candidate, product, or solution may be more important – or at least require more creative expertise – than identifying the “Best” of the bunch.
Abstract: It is often assumed that people with high ability in a domain will be excellent raters of quality within that same domain. This assumption is an underlying principle of using raters for creativity tasks, as in the Consensual Assessment Technique. While several prior studies have examined expert-novice differences in ratings, none have examined whether experts’ ability to identify the quality of a creative product is being driven more by their ability to identify high quality work, low quality work, or both. To address this question, a sample of 142 participants completed individual difference measures and rated the quality of several sets of creative captions. Unbeknownst to the participants, the captions had been identified a priori by expert raters as being of particularly high or low quality. Hierarchical regression analyses revealed that after controlling for participants’ background and personality, those who scored significantly higher on any of three external measures of creativity also rated low-quality captions significantly lower than their peers; however, they did not rate the high-quality captions significantly higher. These findings support research in other domains suggesting that ratings of quality may be driven more by the lower end of the quality spectrum than the high end.
Keywords: Creativity, Assessment, Ratings, Expertise
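The hierarchical regression reported in the abstract enters predictors in blocks: background and personality first, then the creativity measures, with the question being whether the second block explains rating behavior over and above the first. A minimal sketch of that two-step pattern in Python; the column names are hypothetical, not the authors' data or code:

```python
# Two-step hierarchical regression sketch (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

def hierarchical_steps(df: pd.DataFrame):
    # Step 1: background and personality controls only.
    m1 = smf.ols("low_quality_rating ~ age + openness + conscientiousness",
                 data=df).fit()
    # Step 2: add the external creativity measure on top of the controls.
    m2 = smf.ols("low_quality_rating ~ age + openness + conscientiousness"
                 " + creativity_score", data=df).fit()
    # The R-squared increment is the variance uniquely attributable to
    # creativity after the controls; m2.compare_f_test(m1) tests it.
    return m1.rsquared, m2.rsquared, m2.compare_f_test(m1)
```

On this reading, the claim that creative raters punish low-quality captions "after controlling for background and personality" corresponds to a significant step-2 increment for the low-quality ratings but not for the high-quality ones.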
Intelligence, alcohol consumption, and adverse consequences in young Norwegian men: Intelligence was not associated with intoxication frequency at any age
Intelligence, alcohol consumption, and adverse consequences. A study of young Norwegian men. Adrian F. Rogne, Willy Pedersen, Tilmann Von Soest. Scandinavian Journal of Public Health, September 11, 2020. https://doi.org/10.1177/1403494820944719
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305755292236484609
Abstract
Aims: Research suggests that intelligence is positively related to alcohol consumption. However, some studies of people born around 1950, particularly from Sweden, have reported that higher intelligence is associated with lower consumption and fewer alcohol-related problems. We investigated the relationships between intelligence, alcohol consumption, and adverse consequences of drinking in young men from Norway (a neighboring Scandinavian country) born in the late 1970s.
Methods: This analysis was based on the population-based Young in Norway Longitudinal Study. Our sample included young men who had been followed from their mid-teens until their late 20s (n = 1126). Measures included self-reported alcohol consumption/intoxication, alcohol use disorders (AUDIT), and a scale measuring adverse consequences of drinking. Controls included family background, parental bonding, and parents’ and peers’ drinking. Intelligence test scores—scaled in 9 “stanines” (population mean of 5 and standard deviation of 2)—were taken from conscription assessment.
Results: Men with higher intelligence scores reported average drinking frequency and slightly fewer adverse consequences in their early 20s. In their late 20s, they reported more frequent drinking than men with lower intelligence scores (0.30 more occasions per week, per stanine, age adjusted; 95% CI: 0.12 to 0.49). Intelligence was not associated with intoxication frequency at any age and did not moderate the relationships between drinking frequency and adverse consequences.
Conclusions: Our results suggest that the relationship between intelligence and drinking frequency is age dependent. Discrepancies with earlier findings from Sweden may be driven by changes in drinking patterns.
Keywords: Intelligence, alcohol, intoxication, Norway, consequences, young adults
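To make the stanine metric concrete: a stanine rescales a standard score to a mean of 5 and an SD of 2, clipped to the range 1 to 9, so the reported coefficient of 0.30 occasions per week per stanine implies roughly 0.6 extra weekly drinking occasions for a man one SD (two stanines) above the mean. A small illustrative calculation; the conversion is the standard one, and the coefficient is the abstract's age-adjusted estimate:

```python
# Illustrative stanine arithmetic for the reported coefficient.
import numpy as np

def z_to_stanine(z: float) -> int:
    # Stanines: mean 5, SD 2, clipped to the 1-9 range.
    return int(np.clip(round(5 + 2 * z), 1, 9))

PER_STANINE = 0.30  # occasions/week per stanine (age-adjusted, late 20s)

# One SD above the mean is stanine 7, two stanines above the mean of 5:
stanine = z_to_stanine(1.0)
extra_occasions = PER_STANINE * (stanine - 5)
print(stanine, extra_occasions)  # 7 0.6
```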
Discussion and conclusions
In our sample of Norwegian men born in the 1970s, there was no association between intelligence and frequency of alcohol use when participants were in their early 20s. In their late 20s, we observed a positive association. However, we found no significant link between intelligence and intoxication frequency. Our findings are not consistent with findings from neighboring Sweden, which were based on cohorts born in the 1950s. Rather, our findings resemble those from other recent studies showing a positive association between intelligence and alcohol use, but this association is age-dependent and not very strong. One possible explanation may be that alcohol use in Norway changed considerably in the 1990s and early 2000s, when the so-called weekend binge drinking culture was supplemented by more frequent alcohol consumption. If intelligence is more positively associated with alcohol consumption in cultures with more frequent but less intensive drinking, such changes may have led to an emerging positive (or less negative) association between intelligence and drinking frequency. However, to disentangle these relationships, more research into changes in drinking patterns over time in relation to intelligence is necessary.
The positive association between intelligence and drinking frequency in the late 20s, when most men have entered the labor market, is also consistent with the notion that selection into longer educational programs or high-status jobs may be relevant to this association. Our findings also indicated a small, negative association between intelligence and alcohol related problems at around 22 years of age. While one possible interpretation of this finding is that high intelligence may protect against adverse consequences from drinking, additional analyses (not shown) indicate that the negative association is driven solely by higher adverse consequences scores in the two lowest stanines. Finally, our results do not support the notion that intelligence moderates the relationship between drinking frequency and adverse consequences of drinking.
Our study has several limitations. We cannot rule out the importance of selective attrition, measurement error, and similar survey-related issues. If people with greater cognitive abilities are more reflective about, and concerned with, potentially adverse consequences of their drinking, they may be more likely to report alcohol use and related problems accurately in surveys [6], which may result in systematic measurement error. The group of nondrinkers may be highly diverse, possibly including both former heavy drinkers and lifetime abstainers [15]. Several of the included control variables in models 3 and 4 may also be affected by intelligence, drinking habits, or adverse consequences, and controlling for these may have introduced overcontrol bias. Our results may also be affected by reverse causation, since heavy alcohol consumption in adolescence may adversely affect cognitive ability [32]. Moreover, our data did not enable us to study women.
In conclusion, studying young Norwegian men born in the 1970s, our findings suggest that the association between intelligence and alcohol consumption is only positive when they are in their late 20s, not when they are in their early 20s. In other words, the association appears to be age dependent. This finding also contrasts with Swedish findings from older cohorts, suggesting that the relationship may also be context-dependent. Our results also suggest that intelligence does not moderate the relationship between frequent drinking and adverse consequences.
Seasonality of mood and affect in a large general population sample: Only participants higher on neuroticism showed seasonality
Seasonality of mood and affect in a large general population sample. Wim H. Winthorst, Elisabeth H. Bos, Annelieke M. Roest, Peter de Jonge. PLoS ONE, September 14, 2020. https://doi.org/10.1371/journal.pone.0239033
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305747281346519041
Abstract: Mood and behaviour are thought to be under considerable influence of the seasons, but evidence is not unequivocal. The purpose of this study was to investigate whether mood and affect are related to the seasons, and what is the role of neuroticism in this association. In a national internet-based crowdsourcing project in the Dutch general population, individuals were invited to assess themselves on several domains of mental health. ANCOVA was used to test for differences between the seasons in mean scores on the Positive and Negative Affect Schedule (PANAS) and Quick Inventory of Depressive Symptomatology (QIDS). Within-subject seasonal differences were tested as well, in a subgroup that completed the PANAS twice. The role of neuroticism as a potential moderator of seasonality was examined. Participants (n = 5,282) scored significantly higher on positive affect (PANAS) and lower on depressive symptoms (QIDS) in spring compared to summer, autumn and winter. They also scored significantly lower on negative affect in spring compared to autumn. Effect sizes were small or very small. Neuroticism moderated the effect of the seasons, with only participants higher on neuroticism showing seasonality. There was no within-subject seasonal effect for participants who completed the questionnaires twice (n = 503), nor was neuroticism a significant moderator of this within-subjects effect. The findings of this study in a general population sample participating in an online crowdsourcing study do not support the widespread belief that seasons influence mood to a great extent. Insofar as the seasons did influence mood, this applied only to highly neurotic participants and not to low-neurotic participants. The underlying mechanism of cognitive attribution may explain the perceived relation between seasonality and neuroticism.
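For readers unfamiliar with the design: the between-subject test described here is an ANCOVA, i.e., an ordinary least-squares model with season as a categorical factor plus covariates, and the neuroticism result is a season-by-neuroticism interaction. A minimal sketch under assumed column names (this is not the authors' code):

```python
# ANCOVA sketch: positive affect by season, adjusted for covariates,
# plus the neuroticism moderation test. Column names are assumed.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def seasonal_tests(df: pd.DataFrame):
    # Season as a categorical factor; demographics as covariates.
    main = smf.ols("panas_pos ~ C(season) + age + C(gender)", data=df).fit()
    season_table = sm.stats.anova_lm(main, typ=2)  # adjusted season effect
    # Moderation: does the season effect depend on neuroticism?
    mod = smf.ols("panas_pos ~ C(season) * neuroticism + age + C(gender)",
                  data=df).fit()
    return season_table, sm.stats.anova_lm(mod, typ=2)
```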
Discussion
The purpose of this study was to investigate whether mood and affect are related to the seasons. Secondly, we examined the role of neuroticism as a potential moderator of seasonality. The main findings of this study were: on a population level, participants scored higher on positive affect in spring compared to the other seasons, lower on negative affect in spring compared to autumn, and lower on QIDS depressive symptoms in spring compared to the other seasons. The same pattern was visible in the separate “seasonality-related” questions of the QIDS (except for weight change and increased appetite): participants felt less sad, slept less, had more energy, and had more general interest in spring compared to the other seasons, mainly autumn and winter. In summary, this study shows that participants, in general, feel better in spring compared to the other seasons, but effect sizes were small or very small. The personality factor neuroticism moderated the effect of the season in all three outcomes. There were no within-subject seasonal differences in the scores of positive and negative affect, as shown in the repeated measures analysis in participants who filled out the questionnaires twice. The power of these analyses may have been insufficient to detect significant seasonal differences, due to smaller numbers and the fact that effect sizes were already very small or small in the first group. This may also explain why neuroticism did not moderate within-subject seasonal differences.
The finding that seasonal differences were only seen in the group of high-neurotic participants is in line with our previous study, in which we hypothesised that subjects who score high on neuroticism tend to attribute their symptoms and unhappiness to the seasons [26]. This finding is also in line with the findings of Rosellini and Nooteboom that the symptoms of depression are related to the personality trait neuroticism [58, 59].
In the HND crowdsourcing procedure, the general public volunteered to assist in scientific research. In return, participants received feedback on their scores and were able to follow the results of the research on the internet [31]. Brabham described the internet crowdsourcing procedure as a relatively new model for application in public health [60]. Possible advantages mentioned by Bevelander are that this sampling methodology can reproduce already existing hypotheses but also that it can generate ideas that are less well documented or otherwise tend to be overlooked [61]. In previous crowdsourcing studies, the participants recruited were more diverse than with other means of recruitment [62]. Possible disadvantages of this method are selection bias and the impossibility of calculating non-response percentages, as it is not possible to know how many people heard of the project or visited the website but did not enter the study [63, 64]. In order to find a group of participants for HND that could be representative of the general population (and thereby attempting to reduce the limitation of selection bias), publicity for HND was sought through newspaper articles, magazine articles, public lectures, radio interviews, and other media. In order to examine possible selection effects, Van der Krieke et al. [31] compared the HND sample with governmental data on the general Dutch population (Central Bureau of Statistics) and two large population studies: the Netherlands Mental Health Survey (NEMESIS-2) and the Lifelines population study [32, 33]. They confirmed a certain selection bias. Compared to the general Dutch population, the HND participants were more often women (65.2% versus 50.5%; NEMESIS = 55.2%, Lifelines = 57.9%), on average 6 years older (45 versus 39 years; NEMESIS = 44, Lifelines = 42), more often with a partner (74% versus 58%), more often living together (61% versus 47%; NEMESIS = 68%), and had higher education levels (>20 years: 76% versus 35%; NEMESIS = 35%) [31].
This selection bias is clearly a limitation of the present research. Moreover, in our study, a majority of the participants completed the questionnaires in spring. Although we adjusted for the differences between the seasons due to this selective inclusion by using the demographic variables as covariates, we cannot rule out the possibility that the results were still partly due to some unmeasured confounder. Since our sample was a general population sample, another potential limitation is that the proportion of participants suffering from seasonal affective disorder (SAD) can be expected to be low (ranging from 3% to 10%), implying that the contribution of SAD patients to our study results will be limited [11].
Depressive disorders and anxiety disorders show a high comorbidity [65, 66]. For this reason, it would have been interesting to include a measure of anxiety. However, in a previous article, we showed that the administered depression scale (QIDS) and the BAI (Beck Anxiety Inventory) showed a correlation of 0.80 [27]. In the HND study, the Anxiety subscale of the Depression Anxiety Stress Scale (DASS) was used to assess anxiety over the previous week. In our data, there was a correlation of 0.70 between the QIDS and the anxiety scale of the DASS (S1 Table). Since our main objective was to investigate the seasonality of depression and of positive and negative affect, we did not include this measure of anxiety as a confounder because it could have masked the seasonal effect on depression.
A strength of this study is its large sample size for the analyses in the entire group and the spring–winter group in the repeated measures analyses. Other strengths are the use of validated instruments, comparability with other Dutch population studies, the use of questionnaires covering a short period guaranteeing a relative absence of memory bias, and the inclusion of a personality factor in the analyses.
The mechanism of cognitive attribution may underlie the relation between (perceived) seasonality and neuroticism [27, 67, 68]. For future studies on seasonality of mood and behaviour, we recommend including the personality measure neuroticism and a measure to establish the attribution style. Other confounding factors like presence or absence of pre-existing physical or mental health conditions, treatment and stressful life events should be measured as well. The objective then is to further disentangle the relationship between neuroticism, attribution style and (perceived) seasonality of mood and behaviour.
Monday, September 14, 2020
People who hated magic were marked by lower Openness to Experience, lower awe-proneness, & lower creative self-concepts; & by higher socially aversive traits (lower Agreeableness, higher psychopathy, & lower faith in humanity)
Silvia, Paul, Gil Greengross, Maciej Karwowski, Rebekah Rodriguez, and Sara J. Crasson. 2020. “Who Hates Magic? Exploring the Loathing of Legerdemain.” PsyArXiv. September 14. doi:10.31234/osf.io/mzry6
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305498014207946752
Abstract: Magic is an ancient, universal, diverse, and wide-ranging domain of artistic performance. Despite its worldwide popularity, however, any working magician will tell you that some people really hate magic. They seem to see every illusion as a challenge to be solved and every performance as an insult to their intelligence. A distinctive feature of magic is that it seeks to create wonder and amazement through deception—practitioners create the illusion of the impossible, which can provoke intense curiosity, but will not explain the method—so we speculate that disliking magic could stem from (1) low propensity for curiosity, awe, and wonder, and (2) high needs for social status and dominance, which make a person averse to being fooled and manipulated. The present research explored people’s attitudes toward magic with our Loathing of Legerdemain (LOL) scale. In a multinational sample of 1295 adults, we found support for these two broad classes of predictors. People who hated magic were marked by (1) lower Openness to Experience, lower awe-proneness, and lower creative self-concepts; and (2) higher socially aversive traits, such as lower Agreeableness, higher psychopathy, and lower faith in humanity. We suggest that magic is an interesting case for researchers interested in audience and visitor studies and that the psychology of art would benefit from a richer understanding of negative attitudes more generally.
Based on an analysis of all authoritarian regimes between 1900 and 2015, the authors argue that regimes founded in violent social revolution are especially durable
Social Revolution and Authoritarian Durability. Jean Lachapelle, Steven Levitsky, Lucan A. Way and Adam E. Casey. World Politics, September 3, 2020. DOI: https://doi.org/10.1017/S0043887120000106
Abstract: This article explores the causes of authoritarian durability. Why do some authoritarian regimes survive for decades, often despite severe crises, while others collapse quickly, even absent significant challenges? Based on an analysis of all authoritarian regimes between 1900 and 2015, the authors argue that regimes founded in violent social revolution are especially durable. Revolutionary regimes, such as those in Russia, China, Cuba, and Vietnam, endured for more than half a century in the face of strong external pressure, poor economic performance, and large-scale policy failures. The authors develop and test a theory that accounts for such durability using a novel data set of revolutionary regimes since 1900. The authors contend that autocracies that emerge out of violent social revolution tend to confront extraordinary military threats, which lead to the development of cohesive ruling parties and powerful and loyal security apparatuses, as well as to the destruction of alternative power centers. These characteristics account for revolutionary regimes’ unusual longevity.
We generally find extreme runs of success by individuals to be more captivating; people appear to be more moved by individual success than group success
Walker, J., & Gilovich, T. (2020). The streaking star effect: Why people want superior performance by individuals to continue more than identical performance by groups. Journal of Personality and Social Psychology, Sep 2020. https://doi.org/10.1037/pspa0000256
Rolf Degen's take: https://twitter.com/DegenRolf/status/1305478259765923840
Abstract: We present evidence in 9 studies (n = 2,625) for the Streaking Star Effect—people’s greater desire to see runs of successful performance by individuals continue more than identical runs of success by groups. We find this bias in an obscure Italian sport (Study 1), a British trivia competition (Study 2), and a tennis competition in which the number of individual versus team competitors is held constant (Study 3). This effect appears to result from individual streaks of success inspiring more awe than group streaks—and from the fact that people enjoy being awe-inspired. In Studies 4 and 5, we found that the experience of awe inspired by an individual streak drives the effect, a result that is itself driven by the greater dispositional attributions people make for the success of individuals as opposed to groups (Study 6). We demonstrate in Studies 7a and 7b that this effect is not an artifact of identifiability. Finally, Study 8 illustrates how the Streaking Star Effect impacts people’s beliefs about the appropriate market share for companies run by a successful individual versus a successful management team. We close by discussing implications of this effect for consumer behavior, and for how people react to economic inequality reflected in the success of individuals versus groups.
General discussion, from Jesse Taylor Walker's PhD thesis, August 2019. https://core.ac.uk/download/pdf/233840797.pdf
Although past researchers have exerted considerable energy studying streaks of success and failure, very little attention has been paid to the conditions that influence whether or not observers want a given streak to continue. The original aim of this work was to fill that gap. In the first eight studies, we identified a reliable bias such that people desire streaks of success by individuals to continue more than identical streaks by groups. We demonstrated two mechanisms that drive this effect. One key factor is that people experience a greater sense of awe at the prospect of seeing an individual continue a run of dominance than a group. A second is that people take the other competitors into greater consideration when a group is on a streak than when an individual is on a streak.
The remaining studies illustrated how the implications of this work extend far beyond people’s preference for the continuation of streaks of success by individuals. Chapter 3 demonstrated ways in which the Streaking Star Effect can impact consumer behavior. We found that consumers were willing to pay more for products associated with individual runs of dominance than group runs of dominance, presumably because products associated with individual dominance are imbued with greater feelings of awe. As an additional extension, we showed how the psychology underlying the Streaking Star Effect may be used to influence attitudes toward inequality. In Chapter 4, inequality was judged to be more acceptable and fair when people perceived the top rung of the income ladder to be occupied by a successful individual as opposed to a successful group.
The differing attributions that people make for individuals and groups are at the root of many of these findings. Part of the reason that individual dominance is more awe-inspiring may be that people tend to make greater dispositional attributions for the behavior of individuals than groups. Similarly, we found in Chapter 4 that people are more likely to make dispositional attributions for the success of wealthy individuals than wealthy groups. Mechanisms themselves often have their own psychological explanations, and these results raise the question as to why people make more dispositional attributions for individuals as opposed to groups. Although other research has supported this attributional pattern (Critcher & Dunning, 2014), no work has identified why people may follow this pattern when making judgments about individuals and groups. One possible explanation is that groups are more abstract than individuals, which may lead people to focus on different factors when making judgments about each. The concrete nature of an individual target may call to mind specific characteristics like the target’s will and determination. These kinds of characteristics may seem especially difficult to ascribe to an abstract group of people who do not possess a single consciousness. As a result, outside social and environmental forces may be seen as acting more easily on a group of people than on a specific individual. The ultimate reason, though, why people follow this attributional pattern is beyond the empirical goals of this work and would be better addressed by future research. Although we have explored at great length a condition that dictates whether people prefer a streak of success to continue, we have not examined the preferences people may have when the streak in question is one of failure rather than success.
Do people prefer individuals to discontinue losing streaks more than they prefer groups to discontinue identical streaks? While people are often sensitive to the plight of a long-suffering individual (e.g., Small & Loewenstein, 2015), anecdotal evidence suggests that the preference for losing streaks to end may not follow the same kind of pattern as winning streaks. As an example, for over 100 years, the Chicago Cubs had suffered the longest championship drought of any professional team. But in 2016, they made it to the World Series and defeated the Cleveland Indians. The national reaction leading up to the World Series suggested that many people everywhere, regardless of location or prior allegiance, were pulling for the Cubs to end their run of futility (this author included). The number of people jumping on the Cubs’ “bandwagon” was so great that it inspired a series of popular memes in addition to several news articles noting the sudden nationwide popularity of the Cubs (Linder, 2016). It seemed possible that the prospect of witnessing the Cubs put an end to over 100 years of losing may have been awe-inspiring in its own right.
In a more formal test, we asked 200 participants on MTurk to imagine that an individual Calcio player or Calcio team had failed to qualify for the playoffs for 6 consecutive years. We then asked how much people would like to see these streaks come to an end. We suspected that the prospect of a team ending a losing streak might inspire greater awe than an individual ending a losing streak (a reversal of the Streaking Star Effect). But this did not prove to be the case. In fact, there was no difference in how much participants wanted to see the individual end his run of futility and how much they wanted to see the team do the same. It is possible that people do want to see a team turn around a stretch of futility (maybe even more than they would want that team to continue a streak of success), but people appear equally interested in seeing an individual on a run of futility turn his fortunes around.
Sunday, September 13, 2020
Frequency of persuasive bullshitting positively predicts bullshit receptivity (sensitivity) and this association is robust to individual differences in cognitive ability and analytic cognitive style
Littrell, Shane, and Jonathan A. Fugelsang. 2020. “‘You Can’t Bullshit a Bullshitter’ (or Can You?): Bullshitting Frequency Predicts Receptivity to Various Types of Bullshit.” PsyArXiv. September 14. doi:10.31234/osf.io/5c2ej
Abstract: Research into receptivity to bullshit and the propensity to produce it has recently emerged as two active, independent areas of inquiry into the spread of misinformation. However, it remains unclear whether those who frequently produce bullshit are inoculated from its influence. For example, both bullshit receptivity and bullshitting frequency are negatively related to cognitive ability and aspects of analytic thinking style, suggesting that those who frequently engage in bullshitting may be more likely to fall for bullshit. However, separate research suggests that individuals who frequently engage in deception are better at detecting it, thus leading to the possibility that frequent bullshitters may be less likely to fall for bullshit. Here we present 3 studies (N = 826) attempting to distinguish between these competing hypotheses, finding that frequency of persuasive bullshitting positively predicts bullshit receptivity (sensitivity) and that this association is robust to individual differences in cognitive ability and analytic cognitive style.
As a predictor of violence (indexed with attitudinal, intentional, & behavioral measures), autocratic orientation outperformed other variables highlighted until now, including socioeconomic status & group-based injustice
Dominance-Driven Autocratic Political Orientations Predict Political Violence in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) and Non-WEIRD Samples. Henrikas Bartusevičius, Florian van Leeuwen & Michael Bang Petersen. Psychological Science, July 24, 2020. https://journals.sagepub.com/doi/abs/10.1177/0956797620922476
Abstract: Given the costs of political violence, scholars have long sought to identify its causes. We examined individual differences related to participation in political violence, emphasizing the central role of political orientations. We hypothesized that individuals with dominance-driven autocratic political orientations are prone to political violence. Multilevel analysis of survey data from 34 African countries (N = 51,587) indicated that autocracy-oriented individuals, compared with democracy-oriented individuals, are considerably more likely to participate in political violence. As a predictor of violence (indexed with attitudinal, intentional, and behavioral measures), autocratic orientation outperformed other variables highlighted in existing research, including socioeconomic status and group-based injustice. Additional analyses of original data from South Africa (N = 2,170), Denmark (N = 1,012), and the United States (N = 1,539) indicated that the link between autocratic orientations and political violence reflects individual differences in the use of dominance to achieve status and that the findings generalize to societies extensively socialized to democratic values.
Keywords: political violence, political orientation, autocracy, dominance, aggression, open data, open materials, preregistered
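The "multilevel analysis" reflects the data structure: respondents are nested within 34 countries, so country-level variation is absorbed by a random intercept while autocratic orientation, socioeconomic status, and perceived injustice compete as respondent-level predictors. A rough sketch of that structure with a linear mixed model; variable names are hypothetical, and the published models may well use a different estimator (e.g., multilevel logistic regression for binary outcomes):

```python
# Respondents nested in countries: random intercept per country,
# individual-level predictors of a political-violence index.
# Variable and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_multilevel(df: pd.DataFrame):
    model = smf.mixedlm(
        "violence_index ~ autocratic_orientation + ses + group_injustice",
        data=df,
        groups=df["country"],  # one random intercept per country
    )
    return model.fit()
```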
They Know How to Prevent Megafires. Why Won’t Anybody Listen?
They Know How to Prevent Megafires. Why Won’t Anybody Listen? Elizabeth Weil, Aug. 28, 2020. https://www.propublica.org/article/they-know-how-to-prevent-megafires-why-wont-anybody-listen
Academics believe that between 4.4 million and 11.8 million acres burned each year in prehistoric California. Between 1982 and 1998, California’s agency land managers burned, on average, about 30,000 acres a year. Between 1999 and 2017, that number dropped to an annual 13,000 acres. The state passed a few new laws in 2018 designed to facilitate more intentional burning. But few are optimistic this, alone, will lead to significant change. We live with a deathly backlog. In February 2020, Nature Sustainability published this terrifying conclusion: California would need to burn 20 million acres — an area about the size of Maine — to restabilize in terms of fire.
[...]
[...] When I reached Malcolm North, a research ecologist with the U.S. Forest Service who is based in Mammoth, California, and asked if there was any meaningful scientific dissent to the idea that we need to do more controlled burning, he said, “None that I know of.”
[...]
When asked how we were doing on closing the gap between what we need to burn in California and what we actually light, Goulette fell into the familiar fire Cassandra stutter. “Oh gosh. … I don’t know. …” The QFR acknowledged there was no way prescribed burns and other kinds of forest thinning could make a dent in the risk imposed by the backlog of fuels in the next 10 or even 20 years. “We’re at 20,000 acres a year. We need to get to a million. What’s the reasonable path toward a million acres?” Maybe we could get to 40,000 acres, in five years. But that number made Goulette stop speaking again. “Forty thousand acres? Is that meaningful?” That answer, obviously, is no.
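The arithmetic behind that despair is easy to check: dividing the estimated 20-million-acre backlog by the burn rates quoted in the piece shows why even 40,000 acres a year barely registers. A back-of-the-envelope calculation using the article's own figures:

```python
# Years to work through California's estimated burn backlog at the
# burn rates quoted in the article.
BACKLOG_ACRES = 20_000_000  # Nature Sustainability estimate

for rate in (13_000, 20_000, 40_000, 1_000_000):  # acres per year
    print(f"{rate:>9,} acres/yr -> {BACKLOG_ACRES / rate:,.0f} years")
# At 40,000 acres a year the backlog takes 500 years to clear;
# only the million-acre target brings it within about 20 years.
```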
Check also Barriers and enablers for prescribed burns for wildfire management in California. Rebecca K. Miller, Christopher B. Field & Katharine J. Mach. Nature Sustainability, volume 3, pages 101–109 (2020). Jan 20 2020. https://www.nature.com/articles/s41893-019-0451-7
Abstract: Prescribed burns to reduce fuel can mitigate the risk of catastrophic wildfires. However, multiple barriers limit their deployment, resulting in their underutilization, particularly in forests. We evaluate sociopolitical barriers and opportunities for greater deployment in California, an area recurrently affected by catastrophic fires. We use a mixed-methods approach combining expert interviews, state legislative policy analysis and prescribed-burn data from state records. We identify three categories of barriers. Risk-related barriers (fear of liability and negative public perceptions) prevent landowners from beginning the burn planning process. Both resource-related barriers (limited funding, crew availability and experience) and regulations-related barriers (poor weather conditions for burning and environmental regulations) prevent landowners from conducting burns, creating a gap between planning and implementation. Recent policies have sought to address mainly risk-related challenges, although these and regulations-related challenges remain. Fundamental shifts in prescribed-burn policies, beyond those currently under consideration, are needed to address wildfires in California and worldwide.