Ryan, Anastacia Elle (2019) The sanctions of justice: a comparative study of the lived experiences of female sex workers in Scotland and New Zealand. PhD thesis. https://theses.gla.ac.uk/41136
Abstract
Female sex workers have become marked as women in transgression, viewed as bad or fallen depending on the discourse applied to the wider institution of prostitution: bad women if motivated to sell sexual services by self-interest, and fallen women if prey to malicious male violence and abuse. Be they agents or victims, the construction of female sex workers in the public imagination is enmeshed with historical, political and social anxiety related to women’s transgression from appropriate norms of femininity, sexuality and good citizenship. Female sex workers are distinguished from “virtuous women” by the associated stigma of being a “whore”. Furthermore, the political functions of the whore stigma serve to render female sex workers as “othered”; thus they face oppression globally, while governments disavow responsibility for the injustices they face. In advanced capitalist society, a whole range of liberties can be seen to be rendered incompatible with female legitimacy.
This thesis examines how the historical, social and political meanings attached to prostitution/sex work affect the practices of governance in this area, inquiring into their implications for sex work policy and their effects on sex working women’s lives in the comparative legal settings of Scotland and New Zealand. The overarching research aim driving this thesis is to compare the ways in which sex work laws, policies and frameworks in Scotland and New Zealand enable or constrain sex workers’ access to justice. This thesis adopts a participatory feminist methodological approach that centres women’s voices and experiences in addressing the research aims, in the endeavour to engage with social justice as both a task and a process.
Motivated by the desire to facilitate subjective accounts of women’s experiences, an overall qualitative approach to collating and analysing data was taken, with in-depth narrative-style interviews and ethnographically informed fieldwork observations used to seek women’s own understandings of their lived experiences. To gain additional context for these experiences, semi-structured interviews were also conducted with key informants, identified as people whose everyday work involved a translation of policy into practice and, in some sense, an operationalisation of the laws and policies on sex work in each context. Fieldwork took place over a twelve-month period across the comparative contexts of New Zealand (mainly Wellington and Auckland) and Scotland (mainly Edinburgh and Glasgow). No fixed geographical boundaries were put in place in the research, in acknowledgement of the transient and often mobile nature of women’s work in the sex industry. Sampling strategies were tailored to each research context, with the intention of reaching women who were involved or in contact with local services and collectives as well as women who were not in contact with such possible gatekeeping organisations. Thus, sampling through gatekeepers was used alongside a snowball sampling technique in both contexts, with the latter proving more effective in Scotland, where a quasi-criminalised legal framework was found to make female sex workers work in more hidden and clandestine ways.
Over the five months spent in New Zealand, thirteen interviews with sex workers were conducted (with four repeated during follow-up interviews), along with two interviews with managers and three interviews with those involved in the provision of services to sex workers. Additionally, over 150 hours of observation were recorded in field notes, covering time spent in the base of the New Zealand Prostitutes Collective (NZPC), time spent accompanying NZPC workers on outreach, collecting litter, such as used prevention commodities and their packaging, from the street, meetings with other agencies and organisations, and time spent in a sex work premises with sex workers and management. In Scotland, twelve sex workers were interviewed, of whom three were interviewed a second time. Six sex workers were approached through the national charity SCOT-PEP, and three were approached through the social media advertisements of other sex workers who were involved in the formation of Umbrella Lane. Three sex working women were recruited through charity outreach to a sex work premises, and around 150 hours of ethnographically informed observational field notes were recorded from online forums, during face-to-face sex worker meetings, in organising spaces, and in a sex work premises.
The material effects of the comparative laws and policies on sex work on women’s lived experiences in Scotland and New Zealand were explored through thematic analysis of the data collated. Emergent themes of agency, risk, engagement, stigma and violence allowed these experiences to form the basis of a critical analysis of the structural forces at play, which shaped women’s entry into sex work, occupational health and safety, sex workers’ engagement with health and other support services, experiences of stigma, and recourse to justice in cases of violence. Whilst the legal frameworks were not the only structural factor at play, decriminalisation was experienced to enhance women’s agency and autonomy in choosing how to do sex work, with a prioritisation of their safety supported by the occupational health and safety (OHS) frameworks in place. These OHS frameworks also enabled the New Zealand-based participants to minimise perceived risks to their safety, and further provisions in the Prostitution Reform Act allowed sex workers to access justice in cases where their rights were violated, concretely through exploitation, abuse or violence, and more subtly in their development of resilience and resistance to stigma. Furthermore, women in New Zealand felt empowered to access essential support services extending beyond basic harm reduction services. In Scotland, under a somewhat paradoxical setting whereby the legal framework criminalises the way sex workers work, alongside a policy context that pivots on an understanding of prostitution as commercial sexual exploitation, women experienced less agency than their New Zealand counterparts in choosing how and where to work, increasing their risk of exploitative working conditions and violence, as their prioritisation of avoiding state attention and criminalisation was experienced to override their safety concerns. They also felt comparatively more stigmatised and marginalised from services, including health, social and justice-based service provision.
By reading empirical research alongside comparative policy, law and theory, this thesis contributes to the development of an ‘agenda for change’ for sex workers (McGarry and Fitzgerald, 2018). Such an agenda promotes a paradigm shift in rethinking the relationship between knowledge, discourse, and legal and policy governance, to explore women’s lived experiences in relation to social justice. By making visible the injustices enacted and perpetuated towards the women in this study, and by assessing the role of the legal and policy frameworks in supporting, subverting or interrupting the injustice and oppression of female sex workers, a politics of justice is envisaged in conclusions that make three points pertinent. Firstly, there is an urgent need for engagement with the New Zealand model of decriminalisation of sex work as a task of promoting a politics of recognition, to grant female sex workers the institutional rights necessary to minimise their occupational health and safety risks and to enhance their access to formal justice in cases of exploitation, abuse and violence. Secondly, there is a required task of developing a politics of redistribution in both comparative contexts that renders visible the structural injustices faced by female sex workers, by prioritising the facilitation of sex working women’s voices and experiences in such justice claims, to avoid misrepresentation and misframing of the economic injustice concerns that appear to dominate current redistribution justice understandings in the Scottish context. And finally, there is a need for a renewed commitment to status equality for sex working women in policy frameworks, one that allows social justice concerns and the exploration of women’s economic, political and social vulnerability to influence discourse and subsequent policy objectives, so that these do not cause further harm, marginalisation and exclusion of sex workers from vital support services and legal recourse to justice, as evidenced in the Scottish context.
Thursday, November 7, 2019
Processing speed for deservingness-relevant info is greater than for deservingness-irrelevant info; the construct of deservingness is central in human social relations, as if we evolved to have embedded gauges that measure justice
Evidence of a Processing Advantage for Deservingness-Relevant Information. Carolyn L. Hafer et al. Social Psychology, October 7, 2019. https://doi.org/10.1027/1864-9335/a000396
Abstract. We investigated processing speed for deservingness-relevant versus deservingness-irrelevant information. Female students read stories involving deserved, undeserved, or neutral outcomes. We recorded participants’ reaction time (RT) in processing the outcomes. We also measured individual differences in “belief in a just world” as a proxy for deservingness schematicity. RTs for deserved and undeserved outcomes were faster than for neutral outcomes, B = −8.45, p = .011, an effect that increased the stronger the belief in a just world (e.g., B = −3.18, p = .006). These findings provide novel evidence that the construct of deservingness is central in human social relations, and suggest both universal and particularistic schemas for deservingness.
Keywords: deservingness, processing speed, reaction time, schema, belief in a just world
RT = reaction time
BJW = belief in a just world
Hypothesis 1 (H1): Deservingness-relevant information will be processed faster than deservingness-irrelevant information.
Hypothesis 2 (H2): As BJW increases, the greater the processing advantage for deservingness-relevant information.
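The ESM describes mixed-model analyses of these hypotheses. As a rough illustration of how H1 and H2 could be tested together, the sketch below fits a linear mixed model of reaction times with a random intercept per participant; the file name, column names (rt_ms, outcome_type, bjw, participant), and coding choices are assumptions for illustration, not the authors’ actual pipeline.

```python
# Hypothetical sketch of a mixed-model RT analysis in the spirit of the paper's
# ESM; all variable names are assumptions, not the authors' actual data layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("deservingness_rt.csv")  # long format: one row per story outcome

# Code outcomes as deservingness-relevant (deserved/undeserved) vs. neutral (H1),
# and let the relevance effect vary with belief in a just world (H2).
df["relevant"] = (df["outcome_type"] != "neutral").astype(int)

model = smf.mixedlm(
    "rt_ms ~ relevant * bjw",    # H1: main effect of relevance; H2: relevant:bjw
    data=df,
    groups=df["participant"],    # random intercept per participant
)
result = model.fit()
print(result.summary())          # H1 predicts a negative 'relevant' coefficient
                                 # (faster RTs); H2 a negative interaction term
```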
Discussion
We examined processing speed for deservingness-relevant information. Supporting H1, participants processed deservingness-relevant information faster than deservingness-irrelevant information. This novel finding converges with previous research (e.g., Feather, 1999; Lerner, 1977; Price & Brosnan, 2012) to suggest that people have a concern with deservingness that is deeply ingrained. Supporting H2, the stronger participants’ BJW, the greater the processing advantage for deservingness-relevant information. Thus, deservingness appears to be more central for some individuals.
Affective priming (Fazio, Sanbonmatsu, Powell, & Kardes, 1986) cannot account for our findings. The deserved and neutral outcomes followed a stimulus (i.e., the protagonist’s behavior) of the same valence, whereas the undeserved outcomes involved stimuli (behavior and outcome) with mismatching valences. Yet, RTs to deserved versus undeserved outcomes did not differ (contrary to affective priming predictions). Furthermore, if affective priming were the mechanism, RTs for deserved/undeserved outcomes would not be faster than for neutral outcomes.
Our findings have several implications. First, if people rapidly process deservingness-relevant information, they can quickly make judgments and act based on that information. Rapid responses to quickly processed deservingness-relevant information are not always beneficial. For example, people might automatically dismiss a potential human rights abuse based on quick processing of information implying the individual deserves severe treatment (see Drolet et al., 2016), while failing to consider more appropriate information that takes longer to process.
Second, support for H1 suggests that there is a universal schema, as well as a particularistic schema that is characteristic of people with a strong BJW (see Markus, Hamill, & Sentis, 1987). A universal deservingness schema is consistent with the justice motive theory and evolutionary perspectives noted earlier.
We presume our findings would generalize beyond female university students, given evidence that deservingness is central to people in general (see Hafer, 2011). However, researchers should test our hypotheses with other groups of people.
Unfortunately, our memory items were not designed to test schema-driven recall. Researchers should test whether a strong BJW acts as a schema by biasing memory (see Callan, Kay, Davidenko, & Ellard, 2009), as well as the mechanism underlying deservingness schematicity (e.g., construct accessibility or importance, richness of cognitive structures, etc.; see Ruble & Stangor, 1986). Furthermore, aside from people with a strong BJW, researchers should test whether people with a strong belief that people get what is undeserved are also particularly schematic for deservingness.
In conclusion, we found evidence that people, especially those with a strong BJW, show a processing advantage for deservingness-relevant versus deservingness-irrelevant information. These findings add novel support to the idea that the construct of deservingness is central in human social relations.
Electronic Supplementary Material: https://doi.org/10.1027/1864-9335/a000396
ESM 1. Details on Mixed Models Analyses
Our main finding is that height does have a strong positive effect on life satisfaction
Height and life satisfaction: Evidence from 27 nations. Nazim Habibov, Rong Luo, Alena Auchynnikava, Lida Fan. American Journal of Human Biology, November 6, 2019. https://doi.org/10.1002/ajhb.23351
Abstract
Objectives: To evaluate the effect of height on life satisfaction.
Methods: We use data from a recent multi‐country survey that was conducted in 27 nations.
Results: Our main finding is that height does have a strong positive effect on life satisfaction. These findings remain positive and significant when we use a comprehensive set of well‐known covariates of life‐satisfaction at both the individual and country levels. These findings also remain robust to alternative statistical specifications.
Conclusions: From a theoretical standpoint, our findings suggest that height is important in explaining life‐satisfaction independent of other well‐known determinants. From a methodological standpoint, the findings of this study highlight the need to explicitly control for the effect of heights in studies on subjective well‐being, happiness, and life‐satisfaction.
---
5 | CONCLUSION
Recent developments in the literature have highlighted the important role that height plays in explaining life satisfaction. However, despite the strong theoretical underpinnings of a positive height effect on life satisfaction, surprisingly few studies have attempted to confirm the positive influence of height on life satisfaction, especially outside of developed countries. In the light of this evidence, the present study evaluates the effect of height on life satisfaction in 27 post-communist nations.
Our main finding is that height indeed has a positive effect on life satisfaction. This finding remains significant when we use a comprehensive set of well-known covariates of life satisfaction at both the individual and country levels. These findings also remain robust to alternative statistical specifications. We also found that greater height is associated with higher satisfaction with one's financial situation, although the effect of height on job satisfaction is not significant. However, we found that the magnitude of the height effect is smaller than that of well-known individual-level covariates of life satisfaction such as health, age, income, education, and marital status. Likewise, the magnitude of the height effect is smaller than that of well-known country-level covariates such as GDP growth, poverty level, economic freedom, social transfers to the population, and democracy level.
The empirical findings discussed above have theoretical and methodological implications. From the theoretical standpoint, our findings suggest that a lack of investment in high-quality nutrition, hygiene, health, education, and positive environmental conditions prevents some children from attaining their full potential height. Not reaching full height then translates into poorer health, emotional, reproductive, educational, and labor-market outcomes. In turn, these factors are responsible for the lower levels of life evaluation reported by shorter people. From the methodological standpoint, the findings of this study highlight the need to explicitly control for the effect of height in studies on subjective well-being, happiness, and life satisfaction. However, a number of limitations should be mentioned. Small country samples prevent us from conducting country-by-country analysis to uncover variations in the height effect across countries. In addition, even though we used a comprehensive set of control variables, we were not able to control for abilities due to limitations in our data set. Hence, we were not able to fully overcome the omitted-variable problem, inasmuch as cognitive and non-cognitive abilities could be correlated with both height and life satisfaction. Future studies should focus on country differences in the height-to-life-satisfaction effect by controlling for abilities. In addition, comparing the height-to-life-satisfaction effect across developed, post-communist, and developing countries could provide another promising line of enquiry.
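To make the reported specification concrete, here is a minimal sketch of a life-satisfaction regression with individual- and country-level covariates and country-clustered standard errors. The file name, column names, and the exact covariate set are assumptions for illustration; the authors’ actual model and data (a 27-nation survey) may differ.

```python
# Illustrative life-satisfaction regression in the spirit of the paper's analysis;
# all column names and the exact specification are assumptions, not the authors' own.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("life_in_transition.csv")  # hypothetical pooled 27-country file

model = smf.ols(
    "life_satisfaction ~ height_cm + health + age + income + education"
    " + married + gdp_growth + poverty_rate + economic_freedom",
    data=df,
)
# Cluster standard errors by country, since country-level covariates are shared
# by all respondents within a nation.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(result.params["height_cm"])  # expected positive under the paper's finding
```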
Associations of "positive" temperament and personality traits with frequency of physical activity in adulthood: The nicer they were, the more they exercised
Associations of temperament and personality traits with frequency of physical activity in adulthood. Jenni Karvonen et al. Journal of Research in Personality, November 7, 2019, 103887. https://doi.org/10.1016/j.jrp.2019.103887
Highlights
• Both temperament and personality traits are associated with adult physical activity.
• Different traits are related to physical activity engagement among women and men.
• Temperament and personality traits relate to rambling in nature and watching sports.
• Combinations of temperament and personality characteristics need further research.
Abstract: Temperament and physical activity (PA) have been examined in children and adolescents, but little is known about these associations in adulthood. Personality traits, however, are known to contribute to PA in adults. This study, which examined both temperament and personality characteristics at age 42 in relation to frequency of PA at age 50 (JYLS, n = 214-261), also found associations with temperament traits. Positive associations were found between Orienting sensitivity and overall PA and between Extraversion and vigorous PA among women and between low Negative affectivity and overall and vigorous PA among men. Furthermore, Orienting sensitivity and Agreeableness were associated with vigorous PA among men. Temperament and personality characteristics also showed gender-specific associations with rambling in nature and watching sports.
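As a concrete reading of the gender-specific design, the following sketch runs separate trait-on-activity regressions for women and men. It is a plain OLS illustration under assumed variable names, not the authors’ actual JYLS analysis.

```python
# Minimal sketch of gender-stratified trait-activity associations, mirroring the
# abstract's design (traits at 42, PA frequency at 50); variable names are
# hypothetical and OLS is a simplification, not necessarily the authors' model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("jyls_subset.csv")

for sex, sub in df.groupby("gender"):
    # Regress vigorous-PA frequency on temperament and personality traits jointly.
    res = smf.ols(
        "vigorous_pa_freq ~ orienting_sensitivity + negative_affectivity"
        " + surgency + extraversion + agreeableness",
        data=sub,
    ).fit()
    print(sex, res.params.round(3))
```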
3.4 Discussion
The present study investigated the associations of temperament and personality characteristics, assessed at age 42, with frequency of engagement in physical activity, assessed at age 50, among Finnish women and men. Our main findings for the temperament traits were as follows: (1) Orienting sensitivity showed a positive association with overall physical activity engagement among women, and with vigorous physical activity engagement and an increased likelihood of frequent rambling in nature among men. As expected, (2) Negative affectivity was negatively associated with both overall and vigorous physical activity engagement, but only among men, meaning that men who scored lower in Negative affectivity were more likely to engage in both overall and vigorous physical activity. In support of this, men who scored high in Negative affectivity were more likely to watch sports frequently, which, in this study, was taken to reflect physical inactivity. However, alternative interpretations, such as the view that watching sports reflects an interest in other people’s physical activity, are possible, and thus our results suggest that Negative affectivity is associated with greater physical inactivity but also with interest in sports. Lastly, (3) Surgency was negatively associated with rambling in nature among women.
Due to the novelty of the present observations for adult temperament, we can only speculate as to their reasons. It is possible that awareness of extraneous low-intensity stimulation, which characterizes individuals who score high in Orienting sensitivity (Evans & Rothbart, 2007), leads these individuals to experience the physical responses related to physical activity as particularly pleasant or satisfying, which in turn encourages them to exercise more frequently. Similarly, the positive association observed here between Orienting sensitivity and rambling in nature among men may stem from their conscious awareness of their surroundings and their visual features. Evans and Rothbart (2007) characterize individuals with high levels of Surgency as needing high levels of strength, complexity or novelty of arousal. This may help to explain why, among the present women, those with high Surgency scores showed reduced willingness to ramble in nature, as this type of physical activity does not provide them with adequate stimuli. Our observation of the negative association between Negative affectivity and overall and vigorous engagement in physical activity is in line with our study hypothesis. However, as sports can be watched either individually or in the company of a larger group of people, we are unable to confirm our hypothesis that Negative affectivity is positively associated with, in particular, individually performed exercise types. Some explanation for the negative association between Negative affectivity and physical activity engagement may be gained from the general characterization of individuals who score high in Negative affectivity: these individuals are prone to negative emotional states, such as anxiousness and self-consciousness (Watson & Clark, 1984), which, understandably, may decrease, restrict or altogether erode their willingness to participate in situations involving physical activity, as could have been the case among the present sample of men. As our results on adult temperament and physical activity are the first of their kind, additional research to confirm them is clearly needed.
Our results regarding personality traits support earlier findings (Allen & Vella, 2015; Courneya & Hellsten, 1998; Hagan & Hausenblas, 2005) in that high scores in Extraversion were positively associated with engagement in vigorous physical activity, and individuals with higher levels of Openness were more likely to engage in outdoor exercise. Our results did not confirm the previously found positive association between Agreeableness and recreational exercise (Courneya & Hellsten, 1998). Interestingly, we also found Extraversion to have a positive association with watching sports. Individuals scoring high in Extraversion have been characterized by gregariousness and a need for intense sensory stimuli (Costa & McCrae, 1992b; McCrae & John, 1992), needs which, as argued by Wilson and Dishman (2015), may be met by physical activity. This is not to say that the same needs could not also be satisfied by more sedentary activities, like watching a football game with friends. It is probable, therefore, that the present association between Extraversion and watching sports relates to the social rather than the sedentary aspect of watching sports. It may also be that different lower-order facets within the Extraversion trait relate differently to different types of exercise. According to Artese, Ehley, Sutin and Terracciano (2017), the Activity facet in particular is associated with more frequent engagement in physical activity and less sedentary time when measured via an accelerometer. The same phenomenon has been noted by Vo and Bogg (2015) for self-reported physical activity. However, more extensive research on the lower-order facets of personality traits in relation to physical activity is called for.
Our finding of a positive association between Agreeableness and vigorous physical activity, despite being surprising and in contradiction to previous findings (Aşçı et al., 2015; Sutin et al., 2016; Wilson & Dishman, 2015), is supported by Artese et al. (2017), who reported Agreeableness to be positively associated with moderate-to-vigorous physical activity and step counts. Our results suggest that Agreeableness might be a significant factor in physical activity, particularly among men. In the same JYLS data, Hietalahti, Rantanen and Kokko (2016) found Agreeableness to be positively correlated with leisure and physical fitness goals among men. Our results may, therefore, also be coincidental and reflective only of the present study population. However, considering that most of the previous studies on personality traits and physical activity have not taken gender differences into account, our results are hypothesis-generating and merit replication in a larger sample.
Our analyses of trait combinations shed light on both the relationship between adult temperament and personality traits and their simultaneous association with physical activity. Our results suggest that the women in the present study sample may be seeking something other than high intensity or strong stimulus from their physical activity. The present results also imply that these women may be looking for novel experiences when engaging in physical activity and that men with high levels of negative emotionality are at particular risk of being physically inactive. On the other hand, our results indicate that self-regulative processes are related to the ability of men to follow up on high-intensity training and perhaps to inhibit the urge to cease exercise despite the unpleasant sensations possibly induced by intense physical stimulation. Although Agreeableness is generally described by attributes such as altruism, ingenuousness and kindness (McCrae & John, 1992b), our observation of the association of the trait combination of Effortful control and Agreeableness with vigorous physical activity engagement may in fact support the findings of Jensen-Campbell et al. (2002), who suggested that Agreeableness has a developmental basis in inhibition and self-control rather than social conformity. As our analyses of individual traits and trait combinations also produced slightly different results, more emphasis on examining the inter-relationships between temperament and personality characteristics is needed. Similarly, while the trait combinations presented here gained theoretical support from existing correlational evidence between temperament and personality characteristics (Evans & Rothbart, 2007; Pulkkinen et al., 2012; Wiltink et al., 2006), the novel analytic approach used merits further research.
The present findings add to the extensive line of personality research already conducted on the JYLS study population, unique in its representativeness and length of follow-up. Previous studies on the same data have linked personality traits to various meaningful aspects of adult life, including parenting (Metsäpelto & Pulkkinen, 2003; Rantanen, Tillemann, Metsäpelto, Kokko, & Pulkkinen, 2015), working life (Viinikainen, Kokko, Pulkkinen, & Pehkonen, 2010; Viinikainen & Kokko, 2012) and well-being (e.g. Kokko et al., 2013; Mäkikangas et al., 2015). Our findings extend this knowledge by indicating yet another domain of these individuals’ daily lives, habitual physical activity, to which individual differences in both temperament and personality traits contribute. Following Kinnunen et al. (2012), our findings also point to the utility of assessing larger groups of (temperament and) personality characteristics instead of focusing on individual traits alone.
Humans can have normal olfaction without apparent olfactory bulbs (seen in 0.6% of women, but not in men); this is associated with left-handedness
Human Olfaction without Apparent Olfactory Bulbs. Tali Weiss et al. Neuron, November 6, 2019. https://doi.org/10.1016/j.neuron.2019.10.006
Highlights
• Humans can have normal olfaction without apparent olfactory bulbs
• Olfaction without apparent bulbs is seen in 0.6% of women, but not in men
• Olfaction without apparent bulbs is associated with left-handedness
Summary: The olfactory bulbs (OBs) are the first site of odor representation in the mammalian brain, and their unique ultrastructure is considered a necessary substrate for spatiotemporal coding of smell. Given this, we were struck by the serendipitous observation at MRI of two otherwise healthy young left-handed women who had no apparent OBs. Standardized tests revealed normal odor awareness, detection, discrimination, identification, and representation. Functional MRI of these women’s brains revealed that odorant-induced activity in piriform cortex, the primary OB target, was similar in its extent to that of intact controls. Finally, review of a public brain-MRI database of 1,113 participants (606 women), all also tested for olfactory performance, uncovered olfaction without anatomically defined OBs in ∼0.6% of women and ∼4.25% of left-handed women. Thus, humans can perform the basic facets of olfaction without canonical OBs, implying extreme plasticity in the functional neuroanatomy of this sensory system.
After controlling for car length, brand status, and car price, driver seat space remained a positive predictor of illegal parking
Does size matter? Spacious car cockpits may increase the probability of parking violations. Felix C. Meier, Markus Schöbel & Markus A. Feufel. Ergonomics, Volume 61 (2018), Issue 12, Pages 1613-1618, October 26, 2018. https://doi.org/10.1080/00140139.2018.1503727
Abstract: Cockpit design is a core area of human factors and ergonomics (HF/E). Ideally, good design compensates for human capacity limitations by distributing task requirements over human and interface to improve safety and performance. Recent empirical findings suggest that the mere spatial layout of car cockpits may influence driver behaviour, expanding current views on HF/E in cockpit design. To assess the reliability of findings showing that an expansive driver seat space predicts parking violations, we replicated an original field study in a geographically and socio-culturally different location and included an additional covariate. After controlling for car length, brand status, and car price, driver seat space remained a positive predictor of illegal parking. This suggests that the spatial design of vehicle cockpits may indeed have an influence on driver behaviour and may therefore be a relevant dimension to be included in research and applications of HF/E in cockpit design.
Practitioner summary: In car cockpit design, ergonomists typically focus on optimising human–machine interfaces to improve traffic safety. We replicate evidence showing that increasing physical space surrounding the driver relates to an increased probability of parking violations. This suggests that spatial design should be added to the ergonomist's toolbox for reducing traffic violations.
Keywords: Embodiment, expansive body postures, traffic safety, cockpit design, parking violations
4 Discussion
Similar to the findings of Yap and colleagues (2013), our study shows that driver seat space predicts the likelihood of parking violations. This effect was replicated in a different cultural (Germany vs. US) and urban setting (the rural town of Offenburg vs. the metropolis of New York City), focusing on a broad variety of parking violations identified by professional inspectors. The effect persisted statistically even when controlling for car brand status, car length, and car price, the last of which is also a significant predictor of parking violations.
These findings suggest that driving behaviour and traffic safety may be influenced not only by interactions between the person behind the wheel and interface design, but also by the spatial dimension of the driver's car cockpit. Further research into the effect of driver seat space on behavioural processes (e.g., body postures, risk taking, and violations) might inform future HF/E research on cockpit design. Relatedly, our results imply that safe cockpit design should move beyond the standard error categories of slips, lapses and mistakes, and should also pay attention to violations. Although ample studies investigate the relationship between psychological factors and traffic violations (e.g., Ba et al. 2016), there are to date only a few HF/E studies on the effect of cockpit design on traffic violations (e.g., Aliane et al. 2014). The present study suggests a new avenue for HF/E to systematically investigate traffic violations in relation to the spatial dimension of cockpit design. More such studies may have the power to advance the current understanding of traffic violations by complementing psychological sources of violations with those that are located in the environment (Reason 1990).
We are aware that the behavioural effects of body postures are fiercely debated in the literature. Given that this debate is ongoing, there is no clear-cut explanatory account for our results. But even if we cannot explain the effect of body postures on parking violations with our observational design, our results may help trigger additional research for a better understanding of the relationship between driver seat space and traffic violations.
Our study included additional control variables (i.e., car price) compared with the original study by Yap and colleagues. However, there are also other variables which should be considered in future studies. For instance, tall or heavy drivers will have different individual seat spaces compared with short and slender drivers. Also, individual seat configuration, that is, whether a seat is adjusted closer to or further away from the steering wheel, influences individual seat space and, therefore, body postures. Moreover, Carney, Cuddy, and Yap (2015) discuss that the length of time a person remains in a certain posture may change its effects. Whereas experimental manipulations of body postures required participants to hold a posture for one minute (Carney, Cuddy, and Yap 2010) or three minutes (Ranehill et al. 2015), it can be assumed that participants in our study did not “hold” but selected a posture that felt comfortable or natural, potentially for an extended period of time. Clearly, more research is needed to work out both the magnitude and the causal explanations of body posture effects as well as their relevance for cockpit design. Our results imply that it is worthwhile investigating the thus far under-researched impact of driver seat space on traffic behaviour. HF/E is well equipped to follow up on these findings.
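The core test replicated here reduces to a logistic regression of violation status on driver seat space plus the three covariates. The sketch below shows that structure under assumed column names; it illustrates the shape of the analysis, not the authors’ code.

```python
# Hedged sketch of the replication's core test: does driver seat space predict
# a parking violation after controlling for car length, brand status, and price?
# Column names are assumptions; brand_status is treated here as a numeric rating.
import pandas as pd
import statsmodels.formula.api as smf

cars = pd.read_csv("parked_cars.csv")  # one row per observed parked car

logit = smf.logit(
    "violation ~ seat_space + car_length + brand_status + price",
    data=cars,
).fit()
print(logit.summary())  # the paper reports seat space remaining a positive predictor
```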
Wednesday, November 6, 2019
Our results suggest that individuals in a more positive mood are less likely to cooperate, and play less efficiently in a repeated Prisoner’s Dilemma
Happiness, cooperation and language. Eugenio Proto, Daniel Sgroi, Mahnaz Nazneen. Journal of Economic Behavior & Organization, November 6, 2019. https://doi.org/10.1016/j.jebo.2019.10.006
Abstract: According to existing research across several disciplines (management, psychology, economics and neuroscience), positive mood can have positive effects, engendering more altruistic, open and helpful behaviour, but can also work through a more negative channel by inducing inward-orientation, assertiveness, and reduced use of information. This leaves the impact on cooperation in interactive and strategic situations unclear. We find evidence from 490 participants in a laboratory experiment suggesting that participants in an induced positive mood cooperate less in a repeated Prisoner’s Dilemma than participants in a neutral setting. This is robust to the number of repetitions or the inclusion of pre-play communication. In order to understand why positive mood might damage the propensity to cooperate, we conduct a language analysis of the pre-play communication between players. This analysis indicates that subjects in a more positive mood use more inward-oriented and more negative language.
Keywords: Positive mood; Affect; Happiness; Mood induction procedures; Cooperation; Repeated Prisoner’s Dilemma; Social preferences; Social dilemmas; Cognitive skills; Productivity; Inward-orientation; Language analysis
JEL classification: C72 (Cooperative games); C91 (Laboratory experiments); D91 (Role and effects of psychological, emotional, social, and cognitive factors on decision making); J24 (Productivity); J28 (Life satisfaction)
---
In a previous version:
5 Concluding Remarks
Our results suggest that individuals in a more positive mood are less likely to cooperate, and play less efficiently, in a repeated Prisoner’s Dilemma. This supports what we described as the “negative channel” in the introduction, and suggests that this channel dominates the “positive channel” in a situation involving repeated play and strategic interaction. This is true for the repeated Prisoner’s Dilemma with both known and unknown end dates, and for sessions both with and without pre-play communication. We also show that the result is not specific to a particular form of mood induction. The result holds right through to the final round of play, though it does not hold if we analyse only the very first round of each supergame.
A novel analysis of the text used in pre-play communication, to our knowledge the first of its kind in an economics laboratory experiment, suggests that those in a more positive mood use more negative language and display greater inward-orientation (through greater use of the “I” pronoun) than those in a neutral mood, which also supports the “negative channel”. We confirm that inward orientation is not specific to any one form of mood induction (it applies equally well to the use of movie clips or Velten statements and music), and nor is our language analysis. Our findings also support the concept of “mood maintenance”, which explains why those with a higher level of happiness might shy away from the risks involved in cooperation: they have more to lose and less to gain compared to those at lower levels of happiness. This is most apparent when looking at the choice to defect, where positive mood is associated with a 7.2 percentage point reduction (p-value 0.0232) in cooperation. These findings are very different from results in the literature, which are typically obtained in one-shot games or games that do not involve strategic interaction. A simple explanation (supported by Proto et al. (2017)) is that repeated-interaction games involve more complex tasks where cognitive ability plays a crucial role.
Taken together with one of the key findings in the “negative channel” described earlier, that cognitive ability may be negatively related to positive mood, this might explain why subjects in a neutral mood are better equipped for more complex strategic settings. Finally, we should note that in our study we were specifically interested in the impact of general positive or neutral mood shocks, and so elected to have everyone within a session face the same shock. Randomization then occurred across sessions, not within sessions. This works well if we wish to consider a situation where everyone faces the same shock. Our work is not well-placed to study situations where individuals face different shocks, or to judge how such shocks might interact, for instance if one player has recently become happier while another has not. This is a potential topic for future study.
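A note on the language-analysis method: the inward-orientation measure rests on counting first-person-singular pronouns in the pre-play chat. The snippet below is a minimal sketch of that general idea, not the authors' actual pipeline (which is not published here); the example messages are invented for illustration.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def inward_orientation(text):
    """Share of tokens that are first-person-singular pronouns."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    return sum(t in FIRST_PERSON_SINGULAR for t in tokens) / len(tokens)

print(inward_orientation("I think I will cooperate if you do"))  # 0.25, more inward
print(inward_orientation("let's both cooperate every round"))    # 0.0, less inward

On such a metric, each session's chat can be scored message by message and averaged by mood condition, which is roughly the shape of the comparison the authors report (positive-mood subjects using “I” more).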
The rise in political polarization in recent decades is not accounted for by the dramatic rise in internet use; claims that partisans inhabit wildly segregated echo chambers/filter bubbles are largely overstated
Deri, Sebastian. 2019. “Internet Use and Political Polarization: A Review.” PsyArXiv. November 6. doi:10.31234/osf.io/u3xyb
Abstract
In this paper, I attempt to provide a comprehensive review of the evidence regarding the relationship between political polarization in the US and internet use. In the first part, I examine whether there has indeed been a rise in political polarization in the US in the last several decades. The remaining second and third parts deal with the relationship between polarization and internet use. I begin, in the second part, by reviewing evidence pertaining to the question of whether internet use plays a causal role in bringing about polarization. I then move, in the third part, to exploring the possible means by which internet use might bring about polarization. By analogy to cigarettes and cancer, the second part examines whether cigarette smoking causes cancer, while the third part examines how cigarette smoking causes (or might cause) cancer. One focus, in the third section, is on the most often discussed mechanism of internet-caused polarization: segregated information exposure, which corresponds to claims that polarization is being driven by an internet ecosystem characterized by “echo chambers”, “filter bubbles”, and otherwise partisan information consumption and dissemination.
The brief summary for each of the three parts is this. First, there is evidence that polarization has been on the rise in the U.S. in recent decades—but it depends what you measure. When comparing Republicans and Democrats, the evidence is strongest for increases in affective polarization and policy-based polarization. Second, most analyses marshal against a version of reality where the rise in political polarization in recent decades is mostly accounted for by the dramatic rise in internet use over this same time period. However, one notable, well-conducted, large-scale randomized direct intervention study found that deactivating a social media account (Facebook) resulted in significant and non-trivially sized decreases in polarization, specifically related to political opinions and policy preferences (Allcott, Braghieri, Eichmeyer, & Gentzkow, 2019). Finally, the evidence is murkiest regarding how internet use might drive polarization. With regard to polarization via segregated information exposure, claims that partisans inhabit wildly segregated “echo chambers” or “filter bubbles” are largely overstated. Nevertheless, there are significant and meaningful differences in the political content that partisans of different political orientations consume online, comparable to the degree of segregation in national print newspaper readership. Causal evidence linking this differential exposure to political polarization is not as strong as evidence that differential exposure exists. Evidence for other mechanisms of polarization is suggestive but awaits strong empirical confirmation.
Do exonerees face employment discrimination similar to actual offenders?
Do exonerees face employment discrimination similar to actual offenders? Jeff Kukucka, Heather K. Applegarth, Abby L. Mello. Legal and Criminological Psychology, November 6 2019. https://doi.org/10.1111/lcrp.12159
Abstract
Purpose: Given that criminal offenders face employment discrimination (Ahmed & Lang, 2017, IZA Journal of Labor Policy, 6) and wrongly convicted individuals are stereotyped similarly to offenders (Clow & Leach, 2015, Legal and Criminological Psychology, 20, 147), we tested the hypothesis that exonerees – despite their innocence – face employment discrimination comparable to actual offenders.
Methods: Experienced hiring professionals (N = 82) evaluated a job application that was identical apart from the applicant's criminal history (i.e., offender, exoneree, or none).
Results: As predicted, professionals formed more negative impressions of both the exoneree and offender – but unexpectedly, they stereotyped exonerees and offenders somewhat differently. Compared to the control applicant, professionals desired to contact more of the exoneree's references, and they offered the exoneree a lower wage.
Conclusions: Paradoxically, exonerees may be worse off than offenders to the extent that exonerees also face employment discrimination but have access to fewer resources. As the exoneree population continues to grow, research can and should inform policies and legislation in ways that will facilitate exonerees’ reintegration.
Discussion
Our findings suggest that exonerees – despite their innocence – may face hiring discrimination similar to actual offenders. Compared to an applicant with no criminal history, hiring professionals formed less favourable impressions of exoneree and offender applicants, desired to contact more of the exoneree’s references, and were more likely to offer the exoneree a low wage – all despite their applications being otherwise identical. Notably, the observed effects were consistent in magnitude with those seen in meta-analyses of race-based (Quillian, Pager, Hexel, & Midtbøen, 2017) and gender-based (Koch, D’Mello, & Sackett, 2015) hiring discrimination. For offenders, employment is an important predictor of post-release adjustment (Bahr, Harris, Fisher, & Armstrong, 2010; Uggen, Wakefield, & Western, 2005), including lower recidivism. Similarly for exonerees, studies have found a positive relationship between employment and mental health (Wildeman et al., 2011) and a negative relationship between financial compensation and post-release criminality (Mandery, Shlosberg, West, & Callaghan, 2013). Our findings thus carry potentially broad implications for exonerees’ post-release well-being.
Like Clow and Leach (2015), we found that hiring professionals negatively stereotyped both offenders and exonerees – but we also unexpectedly found some evidence that they were stereotyped differently. While both were seen as less trustworthy than the control applicant, exonerees were generally seen as intellectually deficient (i.e., less intelligent, competent, and articulate), whereas offenders were generally seen as motivationally deficient (i.e., less conscientious and responsible). If that is the case, then discrimination against these populations may depend on the requirements of the job in question. In our study, applicants sought a job that required both intellect and leadership, which may have made exonerees and offenders equally undesirable candidates. Still, this finding is rather tentative; future research should more carefully explore the possibility that these populations are stereotyped differently and therefore face discrimination under different circumstances.
The tendency to stereotype exonerees as unintelligent suggests that professionals may have attributed the exoneree’s conviction to dispositional rather than situational factors (Gilbert & Malone, 1995; Ross, 1977). Just world theory – which posits that people have a fundamental need to view the world as fair (Hafer & Bègue, 2005; Lerner & Miller, 1978) – may shed light on why exonerees would be blamed for their own plight: When faced with injustice, people preserve their belief in a just world by blaming the victim. In turn, people are less helpful to those who appear responsible for their own plight (Farwell & Weiner, 2000; Weiner, Perry, & Magnusson, 1988) – and indeed, recent work has found that blaming exonerees for their own conviction predicted lower support for post-exoneration services (Kukucka & Evelo, 2019; Scherr, Normile, & Putney, 2018). This literature may thus explain why professionals stereotyped exonerees as unintelligent and why they more often offered exonerees a low wage. Perhaps educating employers about the systemic causes of wrongful conviction would reduce discrimination against exonerees. Consistent with this possibility, Ricciardelli and Clow (2012) found that students’ attitudes towards exonerees became more positive after hearing a lecture on the causes of wrongful conviction.
Our professionals also wanted to contact more of the exoneree’s references, and they were equally likely to cite criminal history as a negative quality of the exoneree and offender. These findings may indicate that professionals doubted the exoneree’s innocence. Qualitative studies abound with examples of exonerees who the public presumed guilty even after their exoneration (Scott, 2010; Westervelt & Cook, 2010), and other findings suggest that laypeople are often unconvinced of exonerees’ innocence (Scherr, Normile, & Sarmiento, 2018). If our professionals felt similarly, then it is unsurprising that they were equally apprehensive about the exoneree’s and offender’s criminal history. Alternatively, professionals may have accepted the exoneree’s innocence but feared that incarceration had tainted them. This possibility is consistent with research on stigma by association as well as the ‘magical law of contagion’ – that is, the belief that people take on the properties of others with whom they have contact (e.g., Rozin & Royzman, 2001). In other words, people may believe that exonerees take on the same traits as the offenders with whom they cohabitated in prison (Clow et al., 2012). Future research should explore whether exonerees are stigmatized because they are mistakenly thought to be offenders or because they are known to have cohabitated with offenders.
Amazing how much we may hate the others — The Harmful Side of Thanks: Thankful Responses to High-Power Group Help Undermine Low-Power Groups’ Protest, Pacifying Them
Amazing how much we may hate the others — The Harmful Side of Thanks: Thankful Responses to High-Power Group Help Undermine Low-Power Groups’ Protest. Inna Ksenofontov, Julia C. Becker. Personality and Social Psychology Bulletin, October 9, 2019. https://doi.org/10.1177/0146167219879125
Abstract: Giving thanks has multiple psychological benefits. However, within intergroup contexts, thankful responses from low-power to high-power group members could solidify the power hierarchy. The other-oriented nature of grateful expressions could mask power differences and discourage low-power group members from advocating for their ingroup interests. In five studies (N = 825), we examine the novel idea of a potentially harmful side of “thanks,” using correlational and experimental designs and a follow-up. Across different contexts, expressing thanks to a high-power group member who transgressed and then helped undermined low-power group members’ protest intentions and actual protest. Thus, the expression of thanks can pacify members of low-power groups. We offer insights into the underlying process by showing that forgiveness of the high-power benefactor and system justification mediate this effect. Our findings provide evidence for a problematic side of gratitude within intergroup relations. We discuss social implications.
Keywords: expressions of thanks, protest, intergroup helping, system justification, forgiveness
---
How can you thank a man for giving you what’s already yours? How then can you thank him for giving you only part of what’s already yours?
—Malcolm X, “The Ballot or the Bullet,” 1964
Mild alcohol use is shown to improve bargaining efficiency in labs; the effect does not arise from changes in mood, altruism, or risk aversion; may be caused by impairment in information processing ability, diminishing self-interest
From 2016... Deal or no deal? The effect of alcohol drinking on bargaining. Pak Hung Au, Jipeng Zhang. Journal of Economic Behavior & Organization, Volume 127, July 2016, Pages 70-86. https://doi.org/10.1016/j.jebo.2016.04.011
Highlights
• Mild alcohol use is shown to improve bargaining efficiency in experiments.
• The effect does not arise from changes in mood, altruism, or risk aversion.
• The effect can be caused by impairment in information processing ability.
Abstract: Alcohol drinking during business negotiation is a very common practice, particularly in some East Asian countries. Does alcohol consumption affect negotiator's strategy and consequently the outcome of the negotiation? If so, what is the mechanism through which alcohol impacts negotiator's behavior? We investigate the effect of a moderate amount of alcohol on negotiation using controlled experiments. Subjects are randomly matched into pairs to play a bargaining game with adverse selection. In the game, each subject is given a private endowment. The total endowment is scaled up and shared equally between the pair provided that they agree to collaborate. It is found that a moderate amount of alcohol consumption increases subjects’ willingness to collaborate, thus improving their average payoff. We find that alcohol consumption increases neither subjects’ preference for risk nor altruism. A possible explanation for the increase in the likelihood of collaboration is that subjects under the influence of alcohol are more “cursed” in the sense of Eyster and Rabin (2005), which is supported by the estimation results of a structural model of quantal response equilibrium.
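To make the adverse-selection structure concrete, consider a hypothetical parameterization (the paper's actual numbers may differ). Endowments are private; collaboration pools the two endowments, scales the total by 1.2, and splits it equally. With endowments of 4 and 8, collaborating yields (4 + 8) × 1.2 / 2 = 7.2 each: the low-endowment player gains 2.2, but the high-endowment player loses 0.8 and should decline. A rational partner anticipates that whoever agrees is disproportionately likely to hold a low endowment, and this skepticism can unravel mutually beneficial deals; the “cursed” players described in the excerpt below underweight exactly this selection effect, agree more often, and so realize more of the available surplus.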
---
5. Concluding remarks
Given that the harmful effects of excessive alcohol consumption on health are well known, it is not clear, and therefore interesting to investigate, why aggressive business drinking has become routine, and even an accepted culture, in many countries. In this study, we make the first attempt to study the effect of a mild amount of alcohol consumption on bargaining under incomplete information. We find a positive effect of alcohol consumption on the efficiency of bargaining in a specific experimental setting. Our finding suggests that consuming a mild to moderate amount of alcoholic drink in business meetings can potentially help smooth the negotiation process.
Out of concern for health risk, the alcohol consumption of subjects in our experiment is mild relative to business drinking in the real world. Our results can still shed useful light on understanding the effect of business drinking. First, the alcohol intoxication effects, especially on information processing and working memory, have been shown to be present even at a mild dose of alcohol similar to that used in our experiment (Dry et al., 2012). Moreover, the intoxication effect is increasing in BAC up to a moderate level. We thus conjecture that a slight increase in dosage would strengthen our results. Second, the medical literature has well documented that chronic alcohol consumption makes the drinker develop tolerance to some of alcohol’s effects. Consequently, the amount of alcohol needed to achieve a certain level of intoxication for a graduate student (who typically does not drink much) can be much smaller than the amount for a businessman (who drinks more heavily and frequently).
Despite the aforementioned positive effect for a mild dose of alcohol, caution must be exercised in extrapolating the results too far. It is well known that an excessive dose of alcohol can lead to a range of harmful effects, including aggressive and violent behaviors (Dougherty et al., 1999), as well as impairment in problem solving ability (Streufert et al., 1993). Therefore, it is almost certain that excessive drinking would hamper efficiency in bargaining.
What are the channels through which alcohol use affects bargaining strategies and outcomes in our setting? It is commonly accepted that alcohol use lowers one’s ability to reason appropriately and draw inferences from available information. Therefore, in settings in which skepticism can lead to a breakdown in negotiation, alcohol consumption can make people drop their guard about each other’s actions, thus facilitating reaching an agreement. Our QRE estimation of a cursed equilibrium provides some support for this channel.
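For readers unfamiliar with cursedness: in Eyster and Rabin's (2005) formulation (standard notation below, which may differ from the exact estimation specification in the paper), a χ-cursed player i best-responds to a distorted belief about how the opponent's action depends on the opponent's type:

\[ \tilde{\sigma}_{-i}(a_{-i} \mid t_{-i}) \;=\; \chi\,\bar{\sigma}_{-i}(a_{-i}) \;+\; (1-\chi)\,\sigma_{-i}(a_{-i} \mid t_{-i}), \]

where \( \bar{\sigma}_{-i} \) is the opponent's strategy averaged over types. At χ = 0 this reduces to Bayesian Nash equilibrium; at χ = 1 the player entirely ignores the link between a partner's endowment and the partner's willingness to collaborate, which is precisely the “dropped guard” described above.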
Other conceivable channels can be ruled out as follows. First, in line with the existing literature on the effect of alcohol use, we find that a mild dose of alcohol has little (if not zero) effect on our subjects’ risk aversion and altruism. Therefore, the increase in willingness to collaborate does not arise from a decrease in risk aversion and/or an increase in altruism.
Second, the positive effect of alcohol in social settings has often been attributed to its creating a more comforting and relaxing atmosphere. Our experiment was conducted in a laboratory, and each subject consumed the given beer individually. As such, the socializing effect of alcohol is clearly absent in our setting. Third, alcohol consumption has been suggested to have a signaling value that one is trustworthy and ready to commit to a relationship (see, for example, Haucap and Herr (2014) and Schweitzer and Kerr (2000)). In our study, treatments are randomized and enforced by the experimenters: subjects do not get to choose whether and what type of drink to consume, so they cannot signal their private information. Whereas our experimental design abstracts away from the second and third channels discussed above, future research can consider alcohol’s effects on relieving tension and building trust in a social setting.
Thank God for My Successes (Not My Failures): Feeling God’s Presence Explains a God Attribution Bias
Thank God for My Successes (Not My Failures): Feeling God’s Presence Explains a God Attribution Bias. Amber DeBono, Dennis Poepsel, Natarshia Corley. Psychological Reports, November 4, 2019. https://doi.org/10.1177/0033294119885842
Abstract: Little research has investigated attributional biases to God for positive and negative personal events. Consistent with past work, we predicted that people who believe in God will attribute successes more to God than failures, particularly for highly religious people. We also predicted that believing that God is a part of the self would increase how much people felt God’s presence which would result in giving God more credit for successes. Our study (N = 133) was a two-factor, between-subject experimental design in which participants either won or lost a game and were asked to attribute the cause of this outcome to themselves, God, or other factors. Furthermore, participants either completed the game before or after responding to questions about their religious beliefs. Overall, there was support for our predictions. Our results have important implications for attribution research and the practical psychological experiences for religious people making attributions for their successes and failures.
Keywords: Religion, God, attribution, self
---
Discussion
The results of this study provided substantial evidence for our two primary goals. First, we demonstrated that people who believe in God attributed successes more to God than their failures. Furthermore, we showed that this effect was stronger for people who identified as more religious. We therefore conceptually replicated previous research (Spilka & Schmidt, 1983), demonstrating that this God attribution style is a reliable effect, not limited to hypothetical scenarios.
Moreover, the percentage attributed to God for a win appeared to be best predicted by believing God is a part of them. This relationship was best explained by simply feeling God’s presence during the experimental task. These findings are consistent with previous research that showed the importance of the overlap between God and self in addition to feeling God’s presence (e.g., Hodges et al., 2013; Sharp et al., 2017). In contrast with Spilka and Schmidt’s (1983) findings, our results indicated that the overlap between God and the self may provide a better explanation than religious commitment for how people attribute successes to God, by feeling God’s presence. Our review of the literature suggests this may be the first study to investigate these concepts as explanations for differing attribution styles for failures and successes.
Strengths and implications
Until now, little research has investigated why and how God-believers attribute their successes to God more than their failures. We replicated the results of a set of over 30-year-old studies (Spilka & Schmidt, 1983). Contrary to these original studies, our research did not use hypothetical events; our participants experienced real-life successes and failures. Despite this seemingly stable effect, little research has explained why people who believe in God experienced a God attribution bias instead of a self-serving bias. We again showed that a God attribution bias may be a result of religious commitment. In addition to conceptually replicating these over 30-year-old findings (which is important in and of itself), we also found evidence for some possible mechanisms to explain this God attribution bias. That is, believing God is a part of them, a variable potentially more important than religiosity, appeared to increase feeling God’s presence, which resulted in greater attributions to God for successes. This is the first set of studies to show these beliefs may play an important role in the God attribution bias.
These results also indicate that a more nuanced approach is needed to understand why people attribute successes more to God than failures and how this impacts people’s thinking and behavior. Although vignettes are better than simple survey measures (Alexander & Becker, 1978), they are problematic as people might believe they would make attributions one way when in reality they may do another (Barter & Renold, 1999). Our study is the first to examine God attributional styles for actual experiences of failures and successes by the participants. Nevertheless, the results of our study were consistent with the vignette research: while religious individuals were more likely to use this God-serving attributional style, we saw that people generally tended to give God more credit for successes than failures. We also found support for the idea that feeling God as a part of the self, which resulted in feeling God’s presence, also predicted giving God greater credit, but only for successes. Religious commitment did not explain this effect as well as feeling God is a part of the self.
Although the Battleship game held little consequence for participants (whether they won or lost resulted in no benefit or penalty), even with this inconsequential task, we saw that people will attribute successes more to God and failures to themselves. Yet the successes and failures in life often result in real consequences. Our study showed that even inconsequential failures and successes can lead to the God attributional biases seen in previous research. Thus, we would predict a similar God attributional pattern between both consequential and inconsequential tasks, inside and outside of the laboratory.
Our results also suggest that people who are especially religious may be more likely to attribute their successes more to God than their failures. People who use this attributional style should be more mindful of these attribution tendencies, as giving God credit for successes and taking credit for failures could result in depression. Potentially, this could explain slumps we see in highly religious athletes. If athletes give credit to God for successes on the field, this may appear as humility to some, but this type of thinking could quickly lead to the same downward-spiral thinking that we see in people suffering from depression (Alloy & Abramson, 1988). It would be prudent for all of us, especially for people who believe in God, to be aware of how much credit we are taking for our successes and failures. As such, sports psychologists may consider heeding this line of thinking in their religious athletes, so that they can break out of their “slumps.”
Our findings also further our understanding of SIT, by showing that religious identity may be less important for explaining attributions to God for successes than experiencing God as part of the self. Although religiosity, an aspect of one’s collective identity, moderated the effect of wins on attributions to God, experiencing God as part of the self predicted feeling God’s presence, which then predicted attributing the win to God. Religious commitment did not explain this effect as well. Future research should continue to examine these two variables, religiosity and experiencing God as part of the self, when attempting to explain attributional styles.
On the Mathematics of the Fraternal Birth Order Effect and the Genetics of Homosexuality
On the Mathematics of the Fraternal Birth Order Effect and the Genetics of Homosexuality. Tanya Khovanova. Archives of Sexual Behavior, November 5 2019. https://link.springer.com/article/10.1007/s10508-019-01573-1
Abstract: Mathematicians have always been attracted to the field of genetics. The mathematical aspects of research on homosexuality are especially interesting. Certain studies show that male homosexuality may have a genetic component that is correlated with female fertility. Other studies show the existence of the fraternal birth order effect, that is, the correlation of homosexuality with the number of older brothers. This article is devoted to the mathematical aspects of how these two phenomena are interconnected. In particular, we show that the fraternal birth order effect implies a correlation between homosexuality and maternal fecundity. Vice versa, we show that the correlation between homosexuality and female fecundity implies the increase in the probability of the younger brothers being homosexual.
Keywords: Fraternal birth order effect; Male homosexuality; Fecundity; Genetics; Sexual orientation
---
Introduction
According to the study by Blanchard and Bogaert (1996): “[E]ach additional older brother increased the odds of [male] homosexuality by 34%” (see also Blanchard [2004], Bogaert [2006], Bogaert et al. [2018], and a recent survey by Blanchard [2018]). The current explanation is that carrying a boy to term changes the mother’s uterine environment. Male fetuses produce H–Y antigens which may be responsible for this environmental change for future fetuses.
The research into a genetic component of male gayness shows that there might be some genes in the X chromosome that influence male homosexuality. It also shows that the same genes might be responsible for increased fertility in females (see Ciani, Cermelli, & Zanzotto [2008] and Iemmola & Ciani [2009]).
In this article, we compare two mathematical models; in both, we disregard girls for the sake of clarity and simplicity.
The first mathematical model of the Fraternal Birth Order Effect (FBOE), which we denote FBOE-model, assumes that each next-born son becomes homosexual with increased probability. This probability is independent of any other factor.
The second mathematical model of Female Fecundity (FF), which we denote the FF-model, assumes that a son becomes homosexual with a probability that depends on the total number of children and nothing else.
We show mathematically how the FBOE-model implies a correlation with family size and the FF-model implies a correlation with birth order. That means these two models are mathematically intertwined. We also propose the Brother Effect. Brothers share a lot of the same genes, so it is not surprising that brothers are more likely to share traits. With respect to homosexuality, we call the correlation whereby homosexuals are more likely than non-homosexuals to have a homosexual brother the Brother Effect. The existence of genes that increase predisposition to homosexuality implies the Brother Effect. The connection between the FBOE-model and the Brother Effect is more complicated.
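This intertwining is easy to see in simulation. The following minimal Monte-Carlo sketch is not from the article: the uniform family-size mix and the base rates are illustrative assumptions, and only the 34% odds multiplier is borrowed from the Blanchard and Bogaert quote above. It shows that a pure FBOE-model yields higher rates of homosexuality in larger families, while a pure FF-model yields higher rates among later-born sons.

```python
# Illustrative Monte-Carlo sketch (assumptions: uniform mix of 1-4 sons per
# family, base rates of ~4% odds / 2% per child; the 1.34 odds multiplier
# per older brother is the figure quoted from Blanchard & Bogaert, 1996).
import random
import statistics

random.seed(0)
N_FAMILIES = 200_000

def simulate(model):
    """Return (birth_order, family_size, is_homosexual) for every simulated son."""
    sons = []
    for _ in range(N_FAMILIES):
        size = random.choice([1, 2, 3, 4])
        for k in range(1, size + 1):            # the k-th son has k-1 older brothers
            if model == "FBOE":
                odds = 0.04 * 1.34 ** (k - 1)   # odds rise 34% per older brother
                p = odds / (1 + odds)           # depends on birth order only
            else:                               # "FF"
                p = 0.02 * size                 # depends on family size only
            sons.append((k, size, random.random() < p))
    return sons

for model in ("FBOE", "FF"):
    sons = simulate(model)
    by_size = [statistics.mean(hs for _, sz, hs in sons if sz == s) for s in (1, 4)]
    by_order = [statistics.mean(hs for k, _, hs in sons if k == o) for o in (1, 4)]
    print(f"{model}: rate in families of 1 vs 4 sons: {by_size[0]:.3f} vs {by_size[1]:.3f}; "
          f"rate for 1st- vs 4th-born sons: {by_order[0]:.3f} vs {by_order[1]:.3f}")
```

Running it shows a higher rate in four-son families than one-son families under the FBOE-model, and a higher rate for fourth-born than first-born sons under the FF-model, even though neither dependence was built in directly.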
We also discuss how to separate FBOE and FF in the data.
The “Extreme Examples” section contains extreme mathematical examples that amplify the results of this article. The “FBOE-model and the family size” section shows how FBOE-model implies the correlation with family size. The “FF-model implies birth order correlation” section shows how FF-model implies the correlation with birth order. In the “Brothers” section, we discuss the connection between FBOE-model and the Brother Effect. In the “Separating Birth Order and Female Fecundity” section, we discuss how to separate the birth order from the family size.
Tuesday, November 5, 2019
Moderate drinking: Enhanced cognition and lower dementia risk; substantive reductions in risk for cardiovascular and diabetes events are reported (but robust conclusions remain elusive)
Clarifying the neurobehavioral sequelae of moderate drinking lifestyles and acute alcohol effects with aging. Sara Jo Nixon, Ben Lewis. International Review of Neurobiology, November 5 2019. https://doi.org/10.1016/bs.irn.2019.10.016
Abstract: Epidemiological estimates indicate not only an increase in the proportion of older adults, but also an increase in those who continue moderate alcohol consumption. Substantial literatures have attempted to characterize health benefits/risks of moderate drinking lifestyles. Not uncommonly, reports address a single outcome, such as cardiovascular function or cognitive decline, rather than providing a broader overview of systems. In this narrative review, retaining focus on neurobiological considerations, we summarize key findings regarding moderate drinking and three health domains: cardiovascular health, Type 2 diabetes (T2D), and cognition. Interestingly, few investigators have studied bouts of low/moderate doses of alcohol consumption, a pattern consistent with moderate drinking lifestyles. Here, we address both moderate drinking as a lifestyle and as an acute event.
Review of health-related correlates illustrates continuing inconsistencies. Although substantive reductions in risk for cardiovascular and T2D events are reported, robust conclusions remain elusive. Similarly, whereas moderate drinking is often associated with enhanced cognition and lower dementia risk, few benefits are noted in rates of decline or alterations in brain structure. The effect of sex/gender varies across health domains and by consumption levels. For example, women appear to differentially benefit from alcohol use in terms of T2D, but experience greater risk when considering aspects of cardiovascular function. Finally, we observe that socially relevant alcohol doses do not consistently impair performance in older adults. Rather, older drinkers demonstrate divergent, but not necessarily detrimental, patterns in neural activation and some behavioral measures relative to younger drinkers. Taken together, the epidemiological and laboratory studies reinforce the need for greater attention to key individual differences and for the conduct of systematic studies sensitive to age-related shifts in neurobiological systems.
Keywords: Alcohol, Moderate drinking, Health, Cognition, Behavior, Aging, Older adults, Acute administration, Neurophysiology
In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly
The Eighty Five Percent Rule for optimal learning. Robert C. Wilson, Amitai Shenhav, Mark Straccia & Jonathan D. Cohen. Nature Communications, volume 10, Article number: 4646 (2019). November 5 2019. https://www.nature.com/articles/s41467-019-12552-4
Abstract: Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this ‘Eighty Five Percent Rule’ for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
Discussion
In this article we considered the effect of training accuracy on learning in the case of binary classification tasks and stochastic gradient-descent-based learning rules. We found that the rate of learning is maximized when the difficulty of training is adjusted to keep the training accuracy at around 85%. We showed that training at the optimal accuracy proceeds exponentially faster than training at a fixed difficulty. Finally we demonstrated the efficacy of the Eighty Five Percent Rule in the case of artificial and biologically plausible neural networks.
Our results have implications for a number of fields. Perhaps most directly, our findings move towards a theory for identifying the optimal environmental settings in order to maximize the rate of gradient-based learning. Thus the Eighty Five Percent Rule should hold for a wide range of machine learning algorithms including multilayered feedforward and recurrent neural networks (e.g. including ‘deep learning’ networks using backpropagation9, reservoir computing networks21,22, as well as Perceptrons). Of course, in these more complex situations, our assumptions may not always be met. For example, as shown in the Methods, relaxing the assumption that the noise is Gaussian leads to changes in the optimal training accuracy: from 85% for Gaussian, to 82% for Laplacian noise, to 75% for Cauchy noise (Eq. (31) in the “Methods”).
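The three optima quoted above follow a simple pattern: each matches 1 − F(−1), where F is the CDF of the standardized noise distribution. The snippet below is a numerical check of that observation (an inference from the reported figures, not a transcription of the paper's Eq. (31)):

```python
# Check that the quoted optimal accuracies match 1 - F(-1) for each noise CDF F.
# This pattern is inferred from the numbers reported in the paper; see Eq. (31)
# there for the exact derivation.
from scipy.stats import norm, laplace, cauchy

for name, dist in [("Gaussian", norm), ("Laplacian", laplace), ("Cauchy", cauchy)]:
    err = dist.cdf(-1.0)  # candidate optimal training error rate
    print(f"{name:9s} optimal error {err:.4f} -> training accuracy {1 - err:.1%}")

# Gaussian  optimal error 0.1587 -> training accuracy 84.1%  (~85%)
# Laplacian optimal error 0.1839 -> training accuracy 81.6%  (~82%)
# Cauchy    optimal error 0.2500 -> training accuracy 75.0%
```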
More generally, extensions to this work should consider how batch-based training changes the optimal accuracy, and how the Eighty Five Percent Rule changes when there are more than two categories. In batch learning, the optimal difficulty to select for the examples in each batch will likely depend on the rate of learning relative to the size of the batch. If learning is slow, then selecting examples in a batch that satisfy the 85% rule may work, but if learning is fast, then mixing in more difficult examples may be best. For multiple categories, it is likely possible to perform similar analyses, although the mapping between decision variable and categories will be more complex as will be the error rates which could be category specific (e.g., misclassifying category 1 as category 2 instead of category 3).
In Psychology and Cognitive Science, the Eighty Five Percent Rule accords with the informal intuition of many experimentalists that participant engagement is often maximized when tasks are neither too easy nor too hard. Indeed, it is notable that staircasing procedures (that aim to titrate task difficulty so that error rate is fixed during learning) are commonly designed to produce about 80–85% accuracy17. Similarly, when given a free choice about the difficulty of the task they perform, participants will spontaneously choose tasks of intermediate difficulty levels as they learn23. Despite the prevalence of this intuition, to the best of our knowledge no formal theoretical work has addressed the effect of training accuracy on learning, a test of which is an important direction for future work.
More generally, our work closely relates to the Region of Proximal Learning and Desirable Difficulty frameworks in education24,25,26 and Curriculum Learning and Self-Paced Learning7,8 in computer science. These related, but distinct, frameworks propose that people and machines should learn best when training tasks involve just the right amount of difficulty. In the Desirable Difficulties framework, the difficulty in the task must be of a ‘desirable’ kind, such as spacing practice over time, that promotes learning as opposed to an undesirable kind that does not. In the Region of Proximal Learning framework, which builds on early work by Piaget27 and Vygotsky28, this optimal difficulty is in a region of difficulty just beyond the person’s current ability. Curriculum and Self-Paced Learning in computer science build on similar intuitions, that machines should learn best when training examples are presented in order from easy to hard. In practice, the optimal difficulty in all of these domains is determined empirically and is often dependent on many factors29. In this context, our work offers a way of deriving the desired difficulty and the region of proximal learning in the special case of binary classification tasks for which stochastic gradient-descent learning rules apply. As such our work represents the first step towards a more mathematical instantiation of these theories, although it remains to be generalized to a broader class of circumstances, such as multi-choice tasks and different learning algorithms.
[...] our work points to a mathematical theory of the state of ‘Flow’34. This state, ‘in which an individual is completely immersed in an activity without reflective self-consciousness but with a deep sense of control’ [ref. 35, p. 1], is thought to occur most often when the demands of the task are well matched to the skills of the participant. This idea of balance between skill and challenge was captured originally with a simple conceptual diagram (Fig. 5) with two other states: ‘anxiety’ when challenge exceeds skill and ‘boredom’ when skill exceeds challenge. These three qualitatively different regions (flow, anxiety, and boredom) arise naturally in our model. Identifying the precision, β, with the level of skill and the level of challenge with the inverse of the true decision variable, 1/Δ, we see that when challenge equals skill, flow is associated with a high learning rate and accuracy, anxiety with a low learning rate and accuracy, and boredom with high accuracy but a low learning rate (Fig. 5b, c). Intriguingly, recent work by Vuorre and Metcalfe has found that subjective feelings of Flow peak on tasks that are subjectively rated as being of intermediate difficulty36. In addition, work on learning to control brain-computer interfaces finds that subjective, self-reported measures of ‘optimal difficulty’ peak at a difficulty associated with maximal learning, and not at a difficulty associated with optimal decoding of neural activity37. Going forward, it will be interesting to test whether these subjective measures of engagement peak at the point of maximal learning gradient, which for binary classification tasks is 85%.
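The skill-equals-challenge reading of flow connects neatly to the 85% optimum. Assuming accuracy takes the Gaussian form P(correct) = Φ(βΔ) (an assumption consistent with the figures above, not a formula quoted from the paper), matching skill β to challenge 1/Δ gives βΔ = 1:

```python
# When skill beta equals challenge 1/Delta, beta * Delta = 1, so under the
# assumed Gaussian accuracy model P(correct) = Phi(beta * Delta) the operating
# accuracy lands at Phi(1) -- essentially the optimal-learning accuracy.
from scipy.stats import norm
print(norm.cdf(1.0))  # 0.8413..., i.e. ~85%: "flow" coincides with maximal learning
```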
What We Know And Don't Know About Stressful Life Events and Disease Risk
Ten Surprising Facts About Stressful Life Events and Disease Risk. Sheldon Cohen, Michael L.M. Murphy, and Aric A. Prather. Annual Review of Psychology, Vol. 70:577-597. https://doi.org/10.1146/annurev-psych-010418-102857
Abstract: After over 70 years of research on the association between stressful life events and health, it is generally accepted that we have a good understanding of the role of stressors in disease risk. In this review, we highlight that knowledge but also emphasize misunderstandings and weaknesses in this literature with the hope of triggering further theoretical and empirical development. We organize this review in a somewhat provocative manner, with each section focusing on an important issue in the literature where we feel that there has been some misunderstanding of the evidence and its implications. Issues that we address include the definition of a stressful event, characteristics of diseases that are impacted by events, differences in the effects of chronic and acute events, the cumulative effects of events, differences in events across the life course, differences in events for men and women, resilience to events, and methodological challenges in the literature.
Keywords: stressors, life events, health, disease
---
REFLECTIONS AND CONCLUSIONS
What We Know About Stressful Life Events and Disease Risk
What we can be sure of is that stressful life events predict increases in severity and progression of multiple diseases, including depression, cardiovascular diseases, HIV/AIDS, asthma, and autoimmune diseases. Although there is also evidence for stressful events predicting disease onset, challenges in obtaining sensitive assessments of premorbid states at baseline (for example, in cancer and heart disease) make interpretation of much of these data as evidence for onset less compelling. In general, stressful life events are thought to influence disease risk through their effects on affect, behavior, and physiology. These effects include affective dysregulation such as increases in anxiety, fear, and depression. Additionally, behavioral changes occurring as adaptations or coping responses to stressors, such as increased smoking, decreased exercise and sleep, poorer diets, and poorer adherence to medical regimens, provide important pathways through which stressors can influence disease risk. Two endocrine response systems, the hypothalamic-pituitary-adrenocortical (HPA) axis and the sympathetic-adrenal-medullary (SAM) system, are particularly reactive to psychological stress and are also thought to play a major role in linking stressor exposure to disease. Prolonged or repeated activation of the HPA axis and SAM system can interfere with their control of other physiological systems (e.g., cardiovascular, metabolic, immune), resulting in increased risk for physical and psychiatric disorders (Cohen et al. 1995b, McEwen 1998).
Chronic stressor exposure is considered to be the most toxic form of stressor exposure because chronic events are the most likely to result in long-term or permanent changes in the emotional, physiological, and behavioral responses that influence susceptibility to and course of disease. These exposures include those to stressful events that persist over an extended duration (e.g., caring for a spouse with dementia) and to brief focal events that continue to be experienced as overwhelming long after they have ended (e.g., experiencing a sexual assault). Even so, acute stressors seem to play a special role in triggering disease events among those with underlying pathology (whether premorbid or morbid), such as asthma and heart attacks.
One of the most provocative aspects of the evidence linking stressful events to disease is the broad range of diseases that are presumed to be affected. As discussed above, the range of effects may be attributable to the fact that many behavioral and physiological responses to stressors are risk factors for a wide range of diseases. The more of these responses to stressful events are associated with risk for a specific disease, the greater is the chance that stressful events will increase the risk for the onset and progression of that disease. For example, risk factors for CVD include many of the behavioral effects of stressors (poor diet, smoking, inadequate physical activity). In addition, stressor effects on CVD (Kaplan et al. 1987, Skantze et al. 1998) and HIV (Capitanio et al. 1998, Cole et al. 2003) are mediated by physiological effects of stressors (e.g., sympathetic activation, glucocorticoid regulation, and inflammation).
It is unlikely that all diseases are modulated by stressful life event exposure. Rare conditions, such as those that are genetic and of high penetrance, leave little room for stressful life events to play a role in disease onset. For example, Tay-Sachs disease is an autosomal recessive disorder expressed in infancy that results in destruction of neurons in both the spinal cord and brain. This disease is fully penetrant, meaning that, if an individual carries two copies of the mutation in the HEXA gene, then they will be affected. Other inherited disorders, such as Huntington’s disease, show high penetrance but are not fully penetrant, leaving room for environmental exposures, behavioral processes, and interactions among these factors to influence disease onset. Note that, upon disease onset, it is unlikely that any disease is immune to the impact of stressor exposure if pathways elicited by the stressor are implicated in the pathogenesis or symptom course of the disease.
What We Do Not Know About Stressful Life Events and Disease Risk
There are still a number of key issues in understanding how stressful events might alter disease pathogenesis where the data are still insufficient to provide clear answers. These include the lack of a clear conceptual definition of what constitutes a stressful event. Alternative approaches (adaptation, threat, goal interruption, demand versus control) overlap in their predictions, providing little leverage for empirically establishing the unique nature of major stressful events. The lack of understanding of the primary nature of stressful events also obscures the reasons for certain events (e.g., interpersonal, economic) being more potent.
Two other important questions for which we lack consistent evidence are whether the stress load accumulates with each additional stressor and whether previous or ongoing chronic stressors moderate responses to current ones. The nature of the cumulative effects of stressors is key to obtaining sensitive assessments of the effects of stressful events on disease and for planning environmental (stressor-reduction) interventions to reduce the impact of events on our health. Evidence that single events may be sufficient to trigger risk for disease has raised two important questions. First, are some types of events more potent than others? We address this question above (in the section titled Fact 6: Certain Types of Stressful Events Are Particularly Potent) using the existing evidence, but it is important to emphasize the relative lack of studies comparing the impact of different stressors on the same outcomes (for some exceptions, see Cohen et al. 1998, Kendler et al. 2003, Murphy et al. 2015). Second, are specific types of events linked to specific diseases? This question derives from scattered evidence of stressors that are potent predictors of specific diseases [e.g., social loss for depression (Kendler et al. 2003), work stress for CHD (Kivimäki et al. 2006)] and of specific stress biomarkers [e.g., threats to social status leading to cortisol responses (Denson et al. 2009, Dickerson & Kemeny 2004)]. While it is provocative, there are no direct tests of the stressor-disease specificity hypothesis. A proper examination of this theory would require studies that not only conduct broad assessments of different types of stressful life events, but also measure multiple unique diseases to draw comparisons. Such studies may not be feasible due to the high costs of properly assessing multiple disease outcomes and the need for large numbers of participants to obtain sufficient numbers of persons developing (incidence) or initially having each disease so as to measure progression. Comparisons of limited numbers of diseases proposed to have different predictors (e.g., cancer and heart disease) are more efficient and may be a good initial approach to this issue.
Another area of weakness is the lack of understanding of the types of stressful events that are most salient at different points in development. For example, although traumatic events are the type of events studied most often in children, the relative lack of focus on more normative events leaves us with an incomplete understanding of how different events influence the current and later health of young people. Overall, the relative lack of comparisons of the impact of the same events (or equivalents) across the life course further muddies our understanding of event salience as we age.
It is noteworthy that the newest generation of instruments designed to assess major stressful life events has the potential to provide some of the fine-grained information required to address many of the issues raised in this review (for a review, see Anderson et al. 2010; see also Epel et al. 2018). For example, the Life Events Assessment Profile (LEAP) (Anderson et al. 2010) is a computer-assisted, interviewer-administered measure designed to mimic the LEDS. Like the LEDS, the LEAP assesses events occurring within the past 6–12 months, uses probing questions to better define events, assesses exposure duration, and assigns objective levels of contextual threat based on LEDS dictionaries. Another instrument, the Stress and Adversity Inventory (STRAIN) (Slavich & Shields 2018), is a participant-completed computer assessment of lifetime cumulative exposure to stressors. The STRAIN assesses a range of event domains and timing of events (e.g., early life, distant, recent) and uses probing follow-up questions. Both the LEAP and the STRAIN are less expensive and time consuming than the LEDS and other interview techniques and are thus more amenable to use in large-scale studies.
The fundamental question of whether stressful events cause disease can only be rigorously evaluated by experimental studies. Ethical considerations prohibit conducting experimental studies in humans of the effects of enduring stressful events on the pathogenesis of serious disease. A major limitation of the correlational studies is insufficient evidence of (and control for) selection in who gets exposed to events, resulting in the possibility that selection factors such as environments, personalities, or genetics are the real causal agents. The concern is that the social and psychological characteristics that shape what types of stressful events people are exposed to may be directly responsible for modulating disease risk. Because it is not possible to randomly assign people to stressful life events, being able to infer that exposure to stressful events causally modulates disease will require the inclusion of covariates representing obvious individual and environmental confounders, as well as controls for stressor dependency—the extent to which individuals are responsible for generating the stressful events that they report.
Even with these methodological limitations, there is evidence from natural experiments that capitalize on real-life stressors occurring outside of a person’s control, such as natural disasters, economic downsizing, or bereavement (Cohen et al. 2007). There have also been attempts to reduce progression and recurrence of disease using experimental studies of psychosocial interventions. However, clinical trials in this area tend to be small, methodologically weak, and not specifically focused on determining whether stress reduction accounts for intervention-induced reduction in disease risk. Moreover, trials that do assess stress reduction as a mediator generally focus on the reduction of nonspecific perceptions of stress and negative affect instead of on the elimination or reduction of the stressful event itself. In contrast, evidence from prospective cohort studies and natural experiments is informative. These studies typically control for a set of accepted potentially confounding demographic and environmental factors such as age, sex, race or ethnicity, and SES. It is also informative that the results of these studies are consistent with those of laboratory experiments showing that stress modifies disease-relevant biological processes in humans and with those of animal studies that investigate stressors as causative factors in disease onset and progression (Cohen et al. 2007).
Despite many years of investigation, our understanding of resilience to stressful life events is incomplete and even seemingly contradictory (e.g., Brody et al. 2013). Resilience generally refers to the ability of an individual to maintain healthy psychological and physical functioning in the face of exposure to adverse experiences (Bonanno 2004). This definition suggests that when a healthy individual is exposed to a stressful event but does not get sick and continues to be able to function relatively normally, this person has shown resilience. What is less clear is whether there are certain types of stressful events for which people tend to show greater resilience than for others. It seems likely that factors that increase stressor severity, such as imminence of harm, uncontrollability, and unpredictability, also decrease an event’s potential to be met with resilience. Additionally, it may be possible that stressful events that are more commonly experienced are easier to adapt to due to shared cultural experiences that provide individuals with expectations for how to manage events. Conversely, less common events (e.g., combat exposure) or experiences that carry significant sociocultural stigma (e.g., rape) might be less likely to elicit resilience. As efforts to test interventions to promote resilience continue to be carried out, careful characterizations of stress exposures, including the complexities discussed in this review, will be critical to understanding the heterogeneity in physical and mental health outcomes associated with stressful life events.
Check also, from 2018...Salutogenic effects of adversity and the role of adversity for successful aging. Jan Höltge. Fall 2018, University of Zurich, Faculty of Arts, PhD Thesis. https://www.bipartisanalliance.com/2019/10/from-2018salutogenic-effects-of.html
And Research has predominantly focused on the negative effects of adversity on health and well-being; but under certain circumstances, adversity may have the potential for positive outcomes, such as increased resilience and thriving (steeling effect):
A Salutogenic Perspective on Adverse Experiences. The Curvilinear Relationship of Adversity and Well-Being. Jan Höltge et al. European Journal of Health Psychology (2018), 25, pp. 53-69. https://www.bipartisanalliance.com/2018/08/research-has-predominantly-focused-on.html
Monday, November 4, 2019
Are Daylight Saving Time Changes Bad for the Brain?
Are Daylight Saving Time Changes Bad for the Brain? Beth A. Malow et al. JAMA Neurol., November 4, 2019. doi:10.1001/jamaneurol.2019.3780
Excerpts (full text, map, references, etc., in the DOI above):
Daylight saving time (DST) begins on the second Sunday in March and ends on the first Sunday in November. During this period, clocks in most parts of the United States are set 1 hour ahead of standard time. First introduced in the United States in 1918 to mimic policies already being used in several European countries during World War I, DST was unpopular and abolished as a federal policy shortly after World War I ended.1 It was reinstated in 1942 during World War II but covered the entire year and was called “war time.” After World War II ended, it became a local policy. Varying DST policies across cities and states led to the Uniform Time Act of 1966, which mandated DST starting on the last Sunday in April until the last Sunday in October. States were allowed to exempt themselves from observing DST (including parts of the state that were within a different time zone [eg, Michigan and Indiana]).
The US Department of Transportation (DOT) is responsible for enforcing and evaluating DST. In 1974, DOT reported that the potential benefits to energy conservation, traffic safety, and reductions in violent crime were minimal.2 In 2008, the US Department of Energy assessed the potential effects on national energy consumption of an extended DST and found a reduction in total primary energy consumption of 0.02%. The DOT is currently reviewing the literature associated with DST in response to a request from the US House Committee on Energy and Commerce.
Since 2015, multiple states have proposed legislation to change their observance of DST (Figure).1,2 These efforts include proposals to exempt a state from DST observance, which is allowable under existing law, and proposals that would establish permanent DST, which would require Congress to amend the Uniform Time Act of 1966.
[Figure]
State Legislation Related to Daylight Saving Time
Map of the United States depicting current practices or legislation pending as of August 2019.1,2 Note the exception of the Navajo Nation in Arizona, which participates in the daylight saving time (DST) transition. Most states have either adopted permanent DST or standard time (ST) or have legislation being considered. While Indiana does not have DST legislation being considered, it is considering legislation in which the entire state would be located within the central time zone.
This Viewpoint reviews data associated with the DST transition. The effects of permanent DST have received less attention and are beyond the scope of this review.
Clinical Implications
The transition to DST has been associated with health consequences, including effects on cerebrovascular and cardiovascular function. The rate of ischemic stroke was significantly higher during the first 2 days after DST transition, with women, older age, and malignancy showing increased susceptibility.3 A meta-analysis based on several studies including more than 100 000 participants documented a modest increased risk of acute myocardial infarction in the week after the DST spring transition (about 5%).4 This increased risk may be associated with the effect of acute partial sleep deprivation, changes in sympathetic activity with increased heart rate and blood pressure, and the release of proinflammatory cytokines.
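To put the ~5% relative increase in perspective, here is a back-of-envelope conversion into absolute terms; the baseline weekly incidence below is a hypothetical placeholder chosen for illustration, not a figure from the meta-analysis.

```python
# Hypothetical baseline used only for illustration; the meta-analysis reports
# the ~5% relative increase, not this absolute incidence.
baseline_weekly_mi_per_100k = 4.0   # assumed weekly MI incidence per 100,000
relative_increase = 0.05            # ~5% excess risk in the post-DST week

excess = baseline_weekly_mi_per_100k * relative_increase
print(f"excess MIs in the post-transition week: {excess:.2f} per 100,000")
# ~0.2 extra events per 100,000: small for any individual, but non-trivial
# when summed over a national population.
```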
Using mixed-effect models, DST was found to be negatively associated with individual well-being, as measured by self-reported life satisfaction scores.5 The effect of DST was more significant for men and those with full-time employment. In a survey of sleep patterns in 55 000 participants, adjustment to the autumn time change was easier, but adjustment was more difficult in the spring; participants reported lower quality of sleep for up to 2 weeks after the spring transition.6
Time use data (eg, how individuals spent their time) from the weeks before and after the transition to DST show an average of 15 to 20 fewer minutes of sleep after the transition.1 High school students studied during the DST transition showed reduced weeknight sleep duration (approximately 30 minutes), as measured by actigraphy.7 The average sleep duration was 7 hours and 51 minutes on pre-DST weeknights and 7 hours and 19 minutes on post-DST weeknights. In addition, longer reaction times, increased lapses in vigilance, and increased daytime sleepiness were documented. While it is important to recognize that this study involved only 40 students and was limited to the week following the DST transition, an American Academy of Sleep Medicine consensus statement has recommended 8 to 10 hours of sleep for adolescents on a regular basis.8 These recommendations were based on a detailed literature review that documented adverse effects of chronic sleep loss on attention, behavior, learning problems, depression, and self-harm. Additional studies will be needed to document whether transitions to DST have more long-term associations with adolescent sleep and contribute to adverse effects.
Genetics of Circadian Disruption
The negative health outcomes associated with the DST transition may be associated with disruptions in the underlying genetic mechanisms that contribute to the expression of the circadian clock and its behavioral manifestations in neurology (ie, chronotype).9 It is well established that genetic factors help to regulate the sleep-wake cycle in humans by encoding the circadian clock, which is an autoregulatory feedback loop. When sleep time shifts there is global disruption in peripheral gene expression, and even the short-term sleep deprivation that occurs following the transition to DST may alter the epigenetic and transcriptional profile of core circadian clock genes.10 While it is unclear how disruptive a 1-hour time change is to otherwise healthy individuals, it is possible that individuals with extreme manifestations of chronotype or circadian rhythm sleep-wake disorders, neurological disorders, or children and adolescents whose brains are still developing are more susceptible to the adverse health effects that occur following the DST transition.
Conclusions
Transitions to DST have documented detrimental associations with the brain, specifically ischemic stroke, with the risk of myocardial infarction and well-being also affected. A lower quality of sleep, shorter sleep duration, and decreased psychomotor vigilance have also been reported. Additional studies are needed to understand the causes of these detrimental effects and the role of sleep deprivation and circadian disruption. Based on these data, we advocate for the elimination of transitions to DST.
Excerpts (full text, map, references, etc., in the DOI above):
Daylight saving time (DST) begins on the second Sunday in March and ends on the first Sunday in November. During this period, clocks in most parts of the United States are set 1 hour ahead of standard time. First introduced in the United States in 1918 to mimic policies already in use in several European countries during World War I, DST was unpopular and was abolished as a federal policy shortly after the war ended.1 It was reinstated in 1942 during World War II, but it covered the entire year and was called "war time." After World War II ended, it became a local policy. Varying DST policies across cities and states led to the Uniform Time Act of 1966, which mandated DST from the last Sunday in April until the last Sunday in October. States were allowed to exempt themselves from observing DST (as were parts of a state lying within a different time zone, as in Michigan and Indiana).
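The statutory rule is mechanical enough to compute directly. As a minimal illustration (the function names below are our own, not from any statute or standard library), the current US transition dates for any year can be derived as follows:

```python
import datetime

def nth_weekday(year, month, weekday, n):
    """Date of the nth given weekday (Monday=0 ... Sunday=6) in a month."""
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7   # days to the first such weekday
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

def us_dst_span(year):
    """Current US rule: DST runs from the second Sunday in March
    through the first Sunday in November."""
    return nth_weekday(year, 3, 6, 2), nth_weekday(year, 11, 6, 1)

print(us_dst_span(2019))   # (datetime.date(2019, 3, 10), datetime.date(2019, 11, 3))
```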
The US Department of Transportation (DOT) is responsible for enforcing and evaluating DST. In 1974, the DOT reported that the potential benefits of DST for energy conservation, traffic safety, and reductions in violent crime were minimal.2 In 2008, the US Department of Energy assessed the potential effects of an extended DST on national energy consumption and found a reduction in total primary energy consumption of 0.02%. The DOT is currently reviewing the literature associated with DST in response to a request from the US House Committee on Energy and Commerce.
Since 2015, multiple states have proposed legislation to change their observance of DST (Figure).1,2 These efforts include proposals to exempt a state from DST observance, which is allowable under existing law, and proposals that would establish permanent DST, which would require Congress to amend the Uniform Time Act of 1966.
[Figure]
State Legislation Related to Daylight Saving Time
Map of the United States depicting current practices and pending legislation as of August 2019.1,2 Note the exception of the Navajo Nation in Arizona, which observes the DST transition even though the rest of the state does not. Most states have either adopted permanent DST or permanent standard time (ST) or have such legislation under consideration. Indiana has no pending DST legislation, but it is considering legislation that would place the entire state within the central time zone.
This Viewpoint reviews data associated with the DST transition. The effects of permanent DST have received less attention and are beyond the scope of this review.
Clinical Implications
The transition to DST has been associated with health consequences, including effects on cerebrovascular and cardiovascular function. The rate of ischemic stroke was significantly higher during the first 2 days after the DST transition, with women, older individuals, and those with malignancy showing increased susceptibility.3 A meta-analysis of several studies including more than 100 000 participants documented a modestly increased risk (about 5%) of acute myocardial infarction in the week after the spring DST transition.4 This increased risk may be associated with acute partial sleep deprivation, changes in sympathetic activity with increased heart rate and blood pressure, and the release of proinflammatory cytokines.
The association of DST with self-reported life satisfaction scores was assessed using mixed-effects models; the transition was found to be negatively associated with individual well-being.5 The effect was more pronounced for men and for those in full-time employment. In a survey of sleep patterns in 55 000 participants, adjustment to the autumn clock change was easier, whereas adjustment in the spring was more difficult: participants reported lower sleep quality for up to 2 weeks after the spring transition.6
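For readers unfamiliar with the method, the following is a minimal sketch of how a mixed-effects model of this general shape could be specified in Python with statsmodels; the data and variable names are hypothetical, not those of the cited study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: repeated life-satisfaction reports per respondent,
# with an indicator for reports made in the week after the DST switch.
df = pd.DataFrame({
    "respondent":        [1, 1, 2, 2, 3, 3, 4, 4],
    "life_satisfaction": [7.2, 6.8, 8.1, 7.9, 6.5, 6.0, 7.7, 7.4],
    "post_dst_week":     [0, 1, 0, 1, 0, 1, 0, 1],
})

# Random intercept per respondent; the fixed effect of post_dst_week
# estimates the within-person shift in reported satisfaction.
model = smf.mixedlm("life_satisfaction ~ post_dst_week", df,
                    groups=df["respondent"])
print(model.fit().summary())
```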
An analysis of time use data (ie, how individuals spent their time) in the week before and after the transition to DST found that the transition resulted in an average of 15 to 20 fewer minutes of sleep.1 High school students studied during the DST transition showed weeknight sleep duration reduced by approximately 30 minutes, as measured by actigraphy.7 The average sleep duration was 7 hours 51 minutes on pre-transition weeknights and 7 hours 19 minutes on post-transition weeknights. In addition, longer reaction times, increased lapses in vigilance, and increased daytime sleepiness were documented. Although this study involved only 40 students and was limited to the week following the DST transition, the finding is notable because an American Academy of Sleep Medicine consensus statement recommends 8 to 10 hours of sleep for adolescents on a regular basis.8 These recommendations were based on a detailed literature review that documented adverse effects of chronic sleep loss on attention, behavior, learning, depression, and self-harm. Additional studies will be needed to determine whether transitions to DST have longer-term associations with adolescent sleep and contribute to these adverse effects.
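As a trivial check on the reported figures (a sketch only; the durations are those from the actigraphy study cited above):

```python
from datetime import timedelta

pre = timedelta(hours=7, minutes=51)    # pre-DST weeknight average
post = timedelta(hours=7, minutes=19)   # post-DST weeknight average
print(pre - post)                       # 0:32:00 -- roughly the reported 30-minute loss
print(post >= timedelta(hours=8))       # False: below the AASM 8-10 hour recommendation
```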
Genetics of Circadian Disruption
The negative health outcomes associated with the DST transition may reflect disruptions in the underlying genetic mechanisms that contribute to the expression of the circadian clock and its behavioral manifestation (ie, chronotype).9 It is well established that genetic factors help to regulate the sleep-wake cycle in humans by encoding the circadian clock, an autoregulatory feedback loop. When sleep time shifts, there is global disruption of peripheral gene expression, and even the short-term sleep deprivation that follows the transition to DST may alter the epigenetic and transcriptional profile of core circadian clock genes.10 Although it is unclear how disruptive a 1-hour time change is to otherwise healthy individuals, individuals with extreme chronotypes, circadian rhythm sleep-wake disorders, or neurological disorders, as well as children and adolescents whose brains are still developing, may be more susceptible to the adverse health effects that follow the DST transition.
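The "autoregulatory feedback loop" can be made concrete with a generic Goodwin-type oscillator, a textbook minimal model of transcription-translation negative feedback. The sketch below is purely illustrative: the parameter values are arbitrary, not measured clock-gene kinetics.

```python
import numpy as np
from scipy.integrate import solve_ivp

def clock_loop(t, y, n=10, k=1.0):
    """Goodwin-type negative feedback: mRNA (m) -> protein (p) ->
    repressor (r), which inhibits transcription of m."""
    m, p, r = y
    dm = k / (1.0 + r**n) - 0.1 * m   # transcription repressed by r, plus decay
    dp = 0.5 * m - 0.1 * p            # translation and decay
    dr = 0.5 * p - 0.1 * r            # repressor activation and decay
    return [dm, dp, dr]

# With sufficiently steep repression (large n), the loop settles into
# sustained oscillation, mimicking a free-running rhythm; abruptly
# perturbing the state mid-run is analogous to a 1-hour schedule shift.
sol = solve_ivp(clock_loop, (0, 300), [0.1, 0.1, 0.1], max_step=0.5)
print(sol.y[0][-5:])   # mRNA level keeps cycling with a stable period
```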
Conclusions
Transitions to DST have documented detrimental associations with brain health, specifically ischemic stroke, with the risk of myocardial infarction and individual well-being also affected. Lower sleep quality, shorter sleep duration, and decreased psychomotor vigilance have also been reported. Additional studies are needed to understand the causes of these detrimental effects and the respective roles of sleep deprivation and circadian disruption. Based on these data, we advocate the elimination of transitions to DST.
From 2018... A Simple Combinatorial Model of Technological Change that Explains the Industrial Revolution
A Simple Combinatorial Model of World Economic History. Roger Koppl, Abigail Devereaux, Jim Herriot, Stuart Kauffman. Nov 2018. arXiv:1811.04502
Abstract: We use a simple combinatorial model of technological change to explain the Industrial Revolution. The Industrial Revolution was a sudden large improvement in technology, which resulted in significant increases in human wealth and life spans. In our model, technological change is combining or modifying earlier goods to produce new goods. The underlying process, which has been the same for at least 200,000 years, was sure to produce a very long period of relatively slow change followed with probability one by a combinatorial explosion and sudden takeoff. Thus, in our model, after many millennia of relative quiescence in wealth and technology, a combinatorial explosion created the sudden takeoff of the Industrial Revolution.
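The qualitative claim here, near-flat growth for a very long time followed by a sudden explosion, falls out of the fact that the number of possible combinations grows far faster than the number of goods (roughly, dm/dt ∝ m², which blows up in finite time). The toy simulation below is our own illustration of that mechanism under simplified assumptions, not the authors' exact model:

```python
import numpy as np

def simulate(m0=20, p=1e-3, cap=1_000_000, max_periods=300, seed=1):
    """Each period, every unordered pair of existing goods yields one new
    good with small probability p. The number of pairs grows ~ m^2 / 2,
    so m stays nearly flat for a long time and then explodes; we stop at
    `cap` to keep the pair count within integer-sampling range."""
    rng = np.random.default_rng(seed)
    m, history = m0, [m0]
    for _ in range(max_periods):
        pairs = m * (m - 1) // 2
        m += int(rng.binomial(pairs, p))   # new goods created this period
        history.append(m)
        if m > cap:
            break
    return history

h = simulate()
print(f"{len(h) - 1} periods: {h[:5]} ... {h[-3:]}")
```

Early periods add almost nothing; once the stock of goods crosses a threshold, each period roughly multiplies it, which is the sudden takeoff the abstract describes.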