Nadelhoffer, Thomas, Jason Shepard, Damien Crone, Jim A. C. Everett, Brian D. Earp, and Neil Levy. 2019. “Does Encouraging a Belief in Determinism Increase Cheating? Reconsidering the Value of Believing in Free Will.” OSF Preprints. May 3. doi:10.31219/osf.io/bhpe5
Abstract: A key source of support for the view that challenging people’s beliefs about free will may undermine moral behavior is two classic studies by Vohs and Schooler (2008). These authors reported that exposure to certain prompts suggesting that free will is an illusion increased cheating behavior. In the present paper, we report several attempts to replicate this influential and widely cited work. Over a series of four high-powered studies (N = 162, N = 283, N = 268, N = 804; three preregistered), we tested the relationship between (1) anti-free-will prompts and free will beliefs and (2) free will beliefs and immoral behavior. Our primary task was to closely replicate the findings from Vohs and Schooler (2008) using the same or similar manipulations and measurements as the ones used in their original studies. Our efforts were largely unsuccessful. We suggest that manipulating free will beliefs in a robust way is more difficult than has been implied by prior work, and that the proposed link with immoral behavior may not be as consistent as previous work suggests.
4. General Discussion
The free will debate has gone mainstream in recent years in the wake of scientific advances that on some accounts seem to undermine free will. Given the traditional associations between free will and moral responsibility, a great deal may hang on this debate. In a high-profile paper on the relationship between free will beliefs and moral behavior, Vohs and Schooler (2008) cautioned against public pronouncements disputing the existence of free will, based on their findings concerning the relationship between free will beliefs and cheating. Our goal in this paper was to replicate their landmark findings. Across four studies, we had mixed results. While we were able to influence people’s beliefs in free will in one of the four studies, we failed in our efforts to find a relationship between free will beliefs and behavior. When coupled with the work of other researchers who have had difficulty replicating the original findings by Vohs and Schooler, we think this should give us further pause.
That said, there are four primary limitations of our studies. First, in light of the results from Study 4, it is possible that there is a link between free will belief and moral behavior—we just failed to detect it because our two behavioral studies were not sufficiently high-powered. Perhaps a very high-powered (800+ participants) behavioral experiment would replicate Vohs and Schooler’s original findings. That is certainly possible, but we are doubtful that simply running another high-powered experiment would yield the desired effect. After all, our pooled data analyses have 1,089 and 551 pooled participants, respectively. Moreover, Monroe, Brady, and Malle (2016) had mixed results manipulating free will beliefs in very high-powered studies. And even when they did manage to decrease free will beliefs, they did not find any behavioral differences. So, we are not convinced that insufficient power explains our failures to replicate—especially given that Vohs and Schooler’s original studies were underpowered (N = 15-30 per cell) and yet they found very large effects both with respect to manipulating free will beliefs (d = 1.20) and influencing cheating behavior (d = 0.88). By our lights, we have done enough in this paper—when coupled with the other mixed results from attempts to replicate Vohs and Schooler (2008)—to weaken our collective confidence in the proposed relationship between free will beliefs and moral behaviors. That is not to say there is no relationship; rather, it suggests that if there is one, it is likely not a relationship we should be especially worried about from the dual standpoints of morality and public policy.
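To make the power arithmetic above concrete, here is a minimal sketch (not from the paper) using statsmodels. The effect sizes d = 1.20 and d = 0.88 are the values reported for Vohs and Schooler (2008); the alpha = 0.05 and 80% power targets, and the "typical" d = 0.40 comparison, are conventional assumptions added for illustration.

```python
# Power arithmetic for a two-sample t-test (sketch; the alpha/power
# targets and d = 0.40 are conventional assumptions, not paper values).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (1.20, 0.88, 0.40):
    # Per-cell n needed for 80% power at alpha = .05, two-sided
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    # Power actually achieved with ~15 participants per cell
    achieved = analysis.power(effect_size=d, nobs1=15, alpha=0.05)
    print(f"d = {d:.2f}: ~{n:.0f} per cell for 80% power; "
          f"power at n = 15 per cell = {achieved:.2f}")
```

The sketch illustrates the point in the text: cells of 15-30 are adequate only for very large effects, and badly underpowered for effects of more typical size.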
The second potential problem with our studies is that we ran them online rather than in the lab with a convenience sample, as Vohs and Schooler did. While we tried to ensure that we mimicked their original work as much as possible, follow-up studies with a convenience sample would certainly be valuable. However, the differences in sample should not deflate the importance of our replication attempts. After all, the effect (and its societal implications) is claimed to be pervasive. If directly communicating skepticism about free will barely undermined people's beliefs and (going beyond our own data) at most resulted in only a trivial increase in bad behavior (or affected behavior in a very limited range of contexts), then the effect is arguably unimportant and unworthy of the substantial attention it has received so far. A third limitation is that we only used American participants. However, this limitation is an artifact of our goal of trying to replicate the work by Vohs and Schooler. Because they used an American sample, we used an American sample. Figuring out whether their work replicates in a non-American sample is a task for another day. That said, we would obviously welcome cross-cultural studies that implemented our paradigms to see whether our findings are cross-culturally stable.
The fourth and final limitation of our experimental design is the possibility that MTurk participants may not be as attentive as in-lab participants. To guard against this, we used an attention check and excluded any participants who failed it. We also used two items designed to encourage participants to pay attention by reminding them that they would be asked to write about the content of the vignette they read. While these measures obviously cannot guarantee that participants are paying attention, we’d like to think that they reduce the likelihood of inattention. Additionally, many lab tasks that are particularly susceptible to lapses in attention have been replicated using MTurk populations, including tasks that depend on differences in reaction times on the scale of milliseconds (e.g., Eriksen flanker tasks) and memory tasks that are heavily attention-dependent (see Woods et al., 2015 for a review).
Setting these limitations aside, we nevertheless think we have made a valuable contribution to the literature on the relationship between free will beliefs and moral behavior. Minimally, our findings serve as a cautionary tale for those who fret that challenging free will beliefs might undermine public morality. Future research on this front will have to take into consideration the difficulty of replicating both standard manipulations of belief in free will and the purported link between free will skepticism and morality. Contrary to our initial expectations, the association between free will beliefs and moral behavior appears to be elusive. As such, worries about the purported erosion of societal mores in the wake of recent advances in neuroscience are likely to be misplaced. The belief in free will appears to be more stable, robust, and resistant to challenge than earlier work suggests. While some scientists may think that their research undermines the traditional picture of agency and responsibility, public beliefs on this front are likely to be relatively slow to change. Even if beliefs about free will were to incrementally change, given the lack of association between dispositional free will beliefs and moral behavior reported by Crone and Levy (2018), it is unclear that people would have difficulty integrating such beliefs into a coherent worldview that permits the same level of moral behavior.
Penfield’s homunculus in the primary somatosensory cortex, extended to higher levels of somatosensory processing, suggests a major role for somatosensation in human cognition
The "creatures" of the human cortical somatosensory system: Multiple somatosensory homunculi. Noam Saadon-Grosman, Yonatan Loewenstein, Shahar Arzy Author Notes. Brain Communications, fcaa003, January 17 2020, https://doi.org/10.1093/braincomms/fcaa003
Abstract: Penfield’s description of the “homunculus”, a “grotesque creature” with large lips and hands and small trunk and legs depicting the representation of body-parts within the primary somatosensory cortex (S1), is one of the most prominent contributions to the neurosciences. Since then, numerous studies have identified additional body-parts representations outside of S1. Nevertheless, it has been implicitly assumed that S1’s homunculus is representative of the entire somatosensory cortex. Therefore, the distribution of body-parts representations in other brain regions, the property that gave Penfield’s homunculus its famous “grotesque” appearance, has been overlooked. We used whole-body somatosensory stimulation, functional MRI and a new cortical parcellation to quantify the organization of the cortical somatosensory representation. Our analysis showed first, an extensive somatosensory response over the cortex; and second, that the proportional representation of body-parts differs substantially between major neuroanatomical regions and from S1, with, for instance, much larger trunk representation at higher brain regions, potentially in relation to the regions’ functional specialization. These results extend Penfield’s initial findings to the higher level of somatosensory processing and suggest a major role for somatosensation in human cognition.
Division of labor (DOL) may result in a modular and assortative social network of strong associations among those performing the same task: DOL and political polarization may share a common mechanism
Social influence and interaction bias can drive emergent behavioural specialization and modular social networks across systems. Christopher K. Tokita and Corina E. Tarnita. Journal of The Royal Society Interface, January 8, 2020. https://doi.org/10.1098/rsif.2019.0564
Abstract: In social systems ranging from ant colonies to human society, behavioural specialization—consistent individual differences in behaviour—is commonplace: individuals can specialize in the tasks they perform (division of labour (DOL)), the political behaviour they exhibit (political polarization) or the non-task behaviours they exhibit (personalities). Across these contexts, behavioural specialization often co-occurs with modular and assortative social networks, such that individuals tend to associate with others that have the same behavioural specialization. This raises the question of whether a common mechanism could drive co-emergent behavioural specialization and social network structure across contexts. To investigate this question, here we extend a model of self-organized DOL to account for social influence and interaction bias among individuals—social dynamics that have been shown to drive political polarization. We find that these same social dynamics can also drive emergent DOL by forming a feedback loop that reinforces behavioural differences between individuals, a feedback loop that is impacted by group size. Moreover, this feedback loop also results in modular and assortative social network structure, whereby individuals associate strongly with those performing the same task. Our findings suggest that DOL and political polarization—two social phenomena not typically considered together—may actually share a common social mechanism. This mechanism may result in social organization in many contexts beyond task performance and political behaviour.
---
4. Discussion
Our main result demonstrates that, in the presence of homophily with positive influence, the feedback between social influence and interaction bias could result in the co-emergence of DOL and modular social network structure. These results reveal that self-organized specialization could give rise to modular social networks without direct selection for modularity, filling a gap in our knowledge of social organization [55] and mirroring findings in gene regulatory networks, which can become modular as genes specialize [56]. The co-emergence requires both social influence and interaction bias but, if the level of social influence is too high, its pressure leads to conformity, which homogenizes the society. Because this feedback between social influence and interaction bias has also been shown to drive political polarization [22–25], our results suggest a shared mechanism between two social phenomena—polarization and DOL—that have not traditionally been considered together and raise the possibility that this mechanism may structure social systems in other contexts as well, such as in the case of emergent personalities [11,29–31]. Furthermore, the ubiquity of this mechanism may help explain why social systems often have a common feature—modular network structure—that is shared with a range of other biological and physical complex systems [57].
Intriguingly, although our results suggest that diverse forms of behavioural specialization—and the associated modular, assortative social networks—might arise from a common mechanism, depending on their manifestation, they can be either beneficial or detrimental for the group. For example, DOL and personality differences have long been associated with beneficial group outcomes in both animal [5,58–60] and human societies [61] (although it can sometimes come at the expense of group flexibility [62]). Moreover, the modularity that co-occurs in these systems is also often framed as beneficial, since it can limit the spread of disease [63] and make the social system more robust to perturbation [55]. On the contrary, political polarization is typically deemed harmful to democratic societies [64]. Thus, an interesting question for future research arises: if a common mechanism underlies the emergence of behavioural specialization and the co-emergence of a modular social network structure in multiple contexts, why would group outcomes differ so dramatically? Insights may come from studying the frequency of co-occurrence among various forms of behavioural specialization. If the same mechanism underlies behavioural specialization broadly, then one would expect multiple types of behavioural specialization (e.g. in task performance, personality, decision-making) to simultaneously arise and co-occur in the same group or society, as is the case in some social systems, where certain personalities consistently specialize on particular tasks [9,10] or in human society, where personality type and political ideology appear correlated [65]. Then, the true outcome of behavioural specialization for the group is the net across the different types co-originating from the same mechanism and cannot be inferred by investigating any one specific instantiation of behavioural specialization.
While DOL emerged when homophily was combined with positive influence, other combinations of social influence and interaction bias may nevertheless be employed in societies to elicit other group-level phenomena. For instance, under certain conditions, a society might benefit from uniform rather than divergent, specialized behaviour. This is the case when social insect colonies must relocate to a new nest, a collective decision that requires consensus-building [66]. To produce consensus, interactions should cause individuals to weaken their commitment to an option until a large majority agrees on one location. Heterophily with positive influence—preferential interactions between dissimilar individuals that reduce dissimilarity—achieves this dynamic and is consistent with the cross-inhibitory interactions observed in nest-searching honeybee swarms [67]: scouts interact with scouts favouring other sites and release a signal that causes them to stop reporting that site to others. One could imagine that similar dynamics might also reduce political polarization.
Recent work has shown that built environments—physical or digital—can greatly influence collective behaviour [16,18,68–70], but the mechanisms underlying this influence have remained elusive. By demonstrating the critical role of interaction bias for behavioural outcomes, our results provide a candidate mechanism: structures can enhance interaction bias among individuals and thereby amplify the behavioural specialization of individuals. For example, nest architecture in social insect colonies alter collective behaviour [68] and social organization [18] possibly because the nest chambers and tunnels force proximity to individuals performing the same behaviour and limit interactions with individuals performing other behaviours. Similarly, the Internet and social media platforms have changed the way individuals interact according to interest or ideology [16,69,70]: selective exposure to certain individuals or viewpoints creates a form of interaction bias that our results predict would increase behavioural specialization, i.e. political bias. Thus, our model predicts that built environments should increase behavioural specialization beyond what would be expected in more ‘open’, well-mixed environments. This prediction has evolutionary consequences: a nest can increase behavioural specialization without any underlying genetic or otherwise inherent, diversity. Such consequences would further consolidate the importance of built environments—specifically, nests—for the evolution of complex societies. It has been previously argued that the construction of a nest may have been a critical step in the evolution of stable, highly cooperative social groups [71]. Subsequent spatial structuring of the nest would then, according to our findings, bring further benefits to nascent social groups in the form of increased behavioural specialization, e.g. DOL, even in the absence of initial behavioural and/or trait heterogeneity.
Finally, our results shed light on how plastic traits can result in scaling effects of social organization with group size, a finding that tightens theoretical links between the biological and social sciences. The founding sociological theorist Émile Durkheim posited that the size of a society would shape its fundamental organization [3]: small societies would have relatively homogeneous behaviour among individuals, but DOL would naturally emerge as societies grew in size and individuals differentiated in behaviour due to social interactions. Similar to Durkheim's theoretical framing, John Bonner famously posited that complexity, as measured by the differentiated types of individuals (in societies) or cells (in multicellular aggregations), would increase as groups grew in size [72]. Bonner argued that the differentiation among individuals was not due to direct genetic determinism but was instead the result of plasticity that allowed individuals to differ as groups increased in size. Our model supports these qualitative predictions and even predicts a rapid transition in organization as a function of group size that results from socially influenced plasticity at the level of the individual. Previous theoretical work showed that DOL could exhibit group size scaling effects even with fixed traits, but these increases in DOL quickly plateaued past relatively small group sizes [5,39]. Our model, along with models of self-reinforced traits [38], demonstrates how DOL could continue to increase at larger group sizes, a pattern observed empirically in both animal [49,73] and human societies [74,75]. For other forms of behavioural specialization, such as emergent personalities or political polarization, the effect of group size is understudied; however, our results suggest similar patterns. Our model further demonstrated that group size can affect social network structure, a dynamic that has only been preliminarily investigated empirically so far [76]. Leveraging new technologies—such as camera-tracking algorithms and social media—that can simultaneously monitor thousands of individuals and their interactions to investigate the effect of group size on societal dynamics could have significant implications as globalization, urbanization and technology increase the size of our social groups and the frequency of our interactions.
---
Modularity is a form of community structure within a group in which there are clusters of strongly connected nodes that are weakly connected to nodes in other clusters. Using each simulation's time-aggregated interaction matrix A, we calculated modularity with the metric developed by Clauset et al. [77]. A modularity value of 0 indicates that the network is a random graph and, therefore, lacks modularity; positive values indicate deviations from randomness and the presence of some degree of modularity in the network.
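For readers who want to see this computation in practice, here is a hedged sketch: networkx's implementation of the Clauset-Newman-Moore greedy algorithm finds a partition of a weighted graph built from an interaction matrix, and that partition's modularity is then scored. The random matrix A below is a stand-in for a simulation's time-aggregated interaction matrix, not data from the paper.

```python
# Sketch: modularity of a weighted interaction network (A is simulated).
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
A = rng.random((20, 20))
A = (A + A.T) / 2                 # interactions are symmetric
np.fill_diagonal(A, 0.0)          # no self-interactions

G = nx.from_numpy_array(A)        # edge weights stored in 'weight'
parts = community.greedy_modularity_communities(G, weight="weight")
Q = community.modularity(G, parts, weight="weight")
print(f"modularity Q = {Q:.3f}")  # ~0 for a random graph; positive = modular
```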
Frequency of non-random interactions reveals the degree to which individuals are biasing their interactions towards or away from certain other individuals. For a random, well-mixed population, the expected frequency of interactions between any two individuals is p_interact = 1 − (1 − 1/(n − 1))². For our resulting social networks, we compared this expected well-mixed frequency to the value of each entry a_ik in the average interaction matrix resulting from the 100 replicate simulations per group size. To determine whether the deviation from random was statistically significant, we calculated the 95% confidence interval for the value of each entry a_ik in the average interaction matrix. If the 95% confidence interval for a given interaction did not cross the value p_interact, that interaction was considered significantly different from random.
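A small numeric illustration of this test follows (the replicate values are simulated stand-ins, not data from the paper). The formula arises because each of the two individuals independently picks one partner uniformly from the other n − 1, so the chance that neither picks the other is (1 − 1/(n − 1))².

```python
# Expected well-mixed interaction frequency and a 95% CI comparison
# (the a_ik values below are fake stand-ins for 100 replicate simulations).
import numpy as np

n = 20
p_interact = 1 - (1 - 1 / (n - 1)) ** 2
print(f"expected frequency for n = {n}: {p_interact:.4f}")

rng = np.random.default_rng(1)
a_ik = rng.normal(loc=0.15, scale=0.03, size=100)    # one pair's entries

mean = a_ik.mean()
half = 1.96 * a_ik.std(ddof=1) / np.sqrt(a_ik.size)  # 95% CI half-width
nonrandom = p_interact < mean - half or p_interact > mean + half
print(f"95% CI [{mean - half:.4f}, {mean + half:.4f}]; "
      f"significantly non-random: {nonrandom}")
```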
Assortativity is the tendency of nodes to attach to other nodes that are similar in some trait (e.g., here, threshold bias). We measured assortativity using the weighted assortment coefficient [78]. This metric takes values in the range [−1, 1], with positive values indicating a tendency to interact with individuals that are similar in traits and negative values indicating a tendency to interact with individuals that are different. A value of 0 means random trait-based mixing among individuals.
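In the same spirit, here is a minimal sketch of a weighted trait assortativity computation: the weighted Pearson correlation of trait values across the two ends of each weighted edge, echoing the weighted assortment coefficient the authors cite. The matrix and traits are simulated, and this is an illustrative implementation, not the authors' code.

```python
# Sketch: weighted trait assortativity on a simulated interaction matrix.
import numpy as np

rng = np.random.default_rng(2)
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
trait = rng.normal(size=n)        # e.g., each individual's threshold bias

i, k = np.triu_indices(n, k=1)
# Symmetrize: each undirected pair contributes both (i, k) and (k, i)
x = np.concatenate([trait[i], trait[k]])
y = np.concatenate([trait[k], trait[i]])
w = np.concatenate([A[i, k], A[i, k]])

def wmean(v):
    """Interaction-weight-weighted mean."""
    return np.sum(w * v) / np.sum(w)

cov = wmean((x - wmean(x)) * (y - wmean(y)))
r = cov / np.sqrt(wmean((x - wmean(x)) ** 2) * wmean((y - wmean(y)) ** 2))
print(f"weighted assortativity r = {r:.3f}")  # in [-1, 1]; ~0 = random mixing
```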
US: The share of job vacancies requiring a bachelor’s degree increased by more than 60 percent between 2007 and 2019, with faster growth in professional occupations and high-wage cities
Structural Increases in Skill Demand after the Great Recession. Peter Q. Blair, David J. Deming. NBER Working Paper No. 26680. January 2020. https://www.nber.org/papers/w26680
Abstract: In this paper we use detailed job vacancy data to estimate changes in skill demand in the years since the Great Recession. The share of job vacancies requiring a bachelor’s degree increased by more than 60 percent between 2007 and 2019, with faster growth in professional occupations and high-wage cities. Since the labor market was becoming tighter over this period, cyclical “upskilling” is unlikely to explain our findings.
1 Introduction
The yearly wage premium for U.S. workers with a college degree has grown rapidly in recent decades: from 40 percent in 1980 to nearly 70 percent in 2017 (Autor, Goldin, and Katz 2020). Over the same period, the share of adults with at least a four-year college degree doubled, from 17 to 34 percent (Snyder, de Brey, and Dillow 2019). In the “education race” model of Tinbergen (1974), these two facts are explained by rapidly growing relative demand for college-level skills. If the college premium grows despite a rapid increase in the supply of skills, this must mean that the demand for skills is growing even faster.
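As a back-of-the-envelope check on this logic (our illustration, not a calculation from the paper), the canonical Katz-Murphy formulation implies Δln(demand) = σ·Δln(premium) + Δln(supply). Plugging in the figures above, with an assumed elasticity of substitution σ = 1.6 (roughly the Katz-Murphy estimate) and a crude head-count supply proxy:

```python
# Back-of-the-envelope "education race" calculation from the figures above.
# sigma = 1.6 is an assumption (approximately the Katz-Murphy 1992
# estimate); the supply proxy is a crude head count of degree holders.
import math

sigma = 1.6                                      # assumed elasticity

d_ln_premium = math.log(1.70) - math.log(1.40)   # premium: 40% -> 70%
# Relative supply of college vs. non-college adults: share 17% -> 34%
d_ln_supply = math.log(0.34 / 0.66) - math.log(0.17 / 0.83)

# Katz-Murphy: d ln(w_c/w_n) = (1/sigma) * (d ln D - d ln S)
d_ln_demand = sigma * d_ln_premium + d_ln_supply
print(f"implied relative demand growth: {d_ln_demand:.2f} log points "
      f"(factor of ~{math.exp(d_ln_demand):.1f})")
```

Under these assumptions, relative demand more than triples, which is the "growing even faster" claim in quantitative form.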
The education race model provides a parsimonious and powerful explanation of US educational wage differentials over the last two centuries (Katz and Murphy 1992; Goldin and Katz 2008; Autor, Goldin, and Katz 2020). Yet one key limitation of the model is that skill demand is not directly observed, but rather inferred as a residual that fits the facts above. How do we know that the results from the education race model are driven by rising employer skill demand, as opposed to some other unobserved explanation?
We study this question by using detailed job vacancy data to estimate the change in employer skill demands in the years since the Great Recession. Our data come from the labor market analytics firm Burning Glass Technologies (BGT), which has collected data on the near-universe of online job vacancy postings since 2007.
Our main finding is that skill demand has increased substantially in the decade following the Great Recession. The share of online job vacancies requiring a bachelor’s degree increased from 23 percent in 2007 to 37 percent in 2019, an increase of more than 60 percent. Most of this increase occurred between 2007 and 2010, consistent with the finding that the Great Recession provided an opportunity for firms to upgrade skill requirements in response to new technologies (Hershbein and Kahn 2018).
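Note that the headline "more than 60 percent" is a relative increase over the 2007 base, not a percentage-point change; a one-line check (ours, not the paper's):

```python
# 23% -> 37% of vacancies requiring a BA: 14 percentage points,
# i.e. a relative increase of (37 - 23) / 23, about 61 percent.
print(f"{(0.37 - 0.23) / 0.23:.1%}")   # 60.9%
```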
We present several pieces of evidence suggesting that the increase in skill demand is structural, rather than cyclical. We replicate the findings of Hershbein and Kahn (2018) and Modestino, Shoag, and Ballance (2019), who show that skill demands increased more in labor markets that were harder hit by the Great Recession. However, when we extend the sample forward and adjust for differences in the composition of online vacancies, we find that this cyclical “upskilling” fades within a few years. In its place, we find long-run structural increases in skill demand across all labor markets. In fact, we show that the increase in skill demand post-2010 is larger in higher-wage cities. We also find much larger increases in the demand for education in professional, high-wage occupations such as management, business, science and engineering.
Previous work using the BGT data has found that it is disproportionately comprised of high-wage professional occupations, mostly because these types of jobs were more likely to be posted online (e.g., Deming and Kahn 2018). As online job advertising has become more common, the BGT sample has become more representative, and the firms and jobs that are added later in the sample period are less likely to require bachelor’s degrees and other advanced skills.
We adjust for the changing composition of the sample in two ways. First, we weight all of our results by the employment share of each occupation as well as the size of the labor force in each city in 2006. This ensures that our sample of vacancies is roughly representative of the national job distribution in the pre-sample period. Second, our preferred empirical specification controls for occupation-by-MSA-by-firm fixed effects. This approach accounts for compositional changes over time in the BGT data.
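A hedged sketch of what such a specification can look like follows; the column names, input file, and clustering choice are assumptions for illustration, not the authors' code. The occupation-by-MSA-by-firm cell effect is absorbed by demeaning both sides within cells (Frisch-Waugh), so the year coefficients trace composition-adjusted changes in BA requirements.

```python
# Sketch: year effects on BA requirements with an absorbed
# occupation-by-MSA-by-firm fixed effect (column names hypothetical).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("bgt_vacancies.csv")          # hypothetical input file

# One cell per occupation x MSA x firm combination
df["cell"] = df["soc"].astype(str) + "_" + df["msa"].astype(str) \
             + "_" + df["firm"].astype(str)

y = df["ba_req"].astype(float)                 # 1 if vacancy requires a BA
X = pd.get_dummies(df["year"], prefix="yr", drop_first=True).astype(float)

# Frisch-Waugh: demean outcome and regressors within cells; this is
# numerically identical to including a dummy for every cell.
y_w = y - y.groupby(df["cell"]).transform("mean")
X_w = X.sub(X.groupby(df["cell"]).transform("mean"))

res = sm.OLS(y_w, X_w).fit(cov_type="cluster",
                           cov_kwds={"groups": df["msa"]})
print(res.params)   # year coefficients: within-cell change in BA requirement
```

The paper's employment-share weights would enter via weighted least squares; they are omitted here for brevity.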
Our results suggest that increasing demand for educated workers is likely a persistent feature of the U.S. economy post-recession. Recent work has documented a slowdown in the growth of the college wage premium since 2005 (Beaudry, Green, and Sand 2016; Valletta 2018; Autor, Goldin, and Katz 2020). Yet this slowdown has occurred during a period of rapid expansion in the supply of skills. We find rapid expansion in the demand for skills, suggesting that education and technology are “racing” together to hold the college wage premium steady.
Non-conscious prioritization speed is not explained by variation in conscious cognitive speed, decision thresholds, short-term visual memory, or the three networks of attention (alerting, orienting and executive)
Sklar, Asael, Ariel Goldstein, Yaniv Abir, Ron Dotsch, Alexander Todorov, and Ran Hassin. 2020. “Did You See It? Robust Individual Variance in the Prioritization of Contents to Conscious Awareness.” PsyArXiv. January 20. doi:10.31234/osf.io/hp7we
Abstract: Perceptual conscious experiences result from non-conscious processes that precede them. We document a new characteristic of the human cognitive system: the speed with which the non-conscious processes prioritize percepts to conscious experiences. In eight experiments (N=375) we find that an individual’s non-conscious prioritization speed (NPS) is ubiquitous across a wide variety of stimuli, and generalizes across tasks and time. We also find that variation in NPS is unique, in that it is not explained by variation in conscious cognitive speed, decision thresholds, short-term visual memory, and by the three networks of attention (alerting, orienting and executive). Finally, we find that NPS is correlated with self-reported differences in perceptual experience. We conclude by discussing the implications of variance in NPS for understanding individual variance in behavior and the neural substrates of consciousness.
NPS=non-conscious prioritization speed
---
And then, you suddenly become aware: it might be of a child running into the road in front of your car, your friend walking on the other side of the street, or a large spider in your shoe. On the timeline that stretches between non-conscious processes and the conscious experiences that emerge from them, this paper focuses on the moment in which your conscious experiences begin: just when you become aware of the child, your friend or the spider. Before this point in time processing is strictly non-conscious, after this moment conscious processing unfolds.
For many, the idea that non-conscious processing generates visual awareness is unintuitive. Imagine suddenly finding yourself in Times Square. You may imagine opening your eyes and immediately experiencing busy streets, flashing ads and moving people, all at once. Intuitively, we feel our experience of the world is immediate and detailed. Yet this intuition is wrong; the literature strongly suggests that conscious experiences are both limited in scope (e.g., Cohen, Dennett, & Kanwisher, 2016; Elliott, Baird, & Giesbrecht, 2013; Wu & Wolfe, 2018) and delayed (e.g., Dehaene, Changeux, Naccache, & Sergent, 2006; Libet, 2009; Sergent, Baillet, & Dehaene, 2005). The feeling that we consciously experience more than we actually do is perhaps the most prevalent psychological illusion, as it is omnipresent in our every waking moment (e.g., Cohen et al., 2016; Kouider, De Gardelle, Sackur, & Dupoux, 2010).
Measurements of the “size” or “scope” of conscious experience indicate a rather limited number of objects can be experienced at any given time (e.g., Cohen, Dennett, & Kanwisher, 2016; Elliott, Baird, & Giesbrecht, 2013; Wu & Wolfe, 2018). Other objects, the ones not consciously experienced, are not necessarily entirely discarded. Such objects may be partially experienced (Kouider et al., 2010) or integrated into a perceptual ensemble (Cohen et al., 2016) yet neither constitutes fully conscious processing.
Considerable research effort has identified what determines which visual stimuli are prioritized for conscious experience. This work found that both low-level features (e.g. movement, high contrast) and higher-level features (e.g., expectations, Stein, Sterzer, & Peelen, 2012; emotional value, Zeelenberg, Wagenmakers, & Rotteveel, 2006) influence the prioritization of stimuli for awareness.
Evidently, the process that begins with activation patterns in the retina and ends with a conscious percept has a duration (e.g., Dehaene, Changeux, Naccache, & Sergent, 2006; Libet, 2009; Sergent, Baillet, & Dehaene, 2005). Considering the above examples, clearly this duration may have important consequences. If you become aware quickly enough, you are more likely to slam the brakes to avoid running over the child, call out to your friend, or avoid a painful spider bite.
Here, we focus on this aspect of how conscious experiences come about using a novel perspective. Specifically, we examine individual variability in the speed with which our non-conscious processes prioritize information for conscious awareness (i.e., do some individuals become aware of stimuli more quickly than others?). Examination of individual differences provides rich data for psychological theories (for a recent example see de Haas, Iakovidis, Schwarzkopf, & Gegenfurtner, 2019), an acknowledgement that has recently gained renewed interest (e.g., Bolger, Zee, Rossignac-Milon, & Hassin, 2019). We report 8 experiments documenting robust differences, and examine possible mechanisms that may bring these differences about.
To examine non-conscious prioritization speed (NPS) we use two long-duration masking paradigms. The main paradigm we employ is breaking continuous flash suppression (bCFS; Tsuchiya & Koch, 2005). In bCFS, a stimulus is presented to one eye while a dynamic mask is presented to the other eye (see Figure 1). This setup results in long masking periods, which may last seconds. Participants are asked to respond when they become conscious of any part of the target stimulus. This reaction time, the duration between the initiation of stimulus presentation and its conscious experience, is our measure of participants' NPS.
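Purely as an illustration of how such reaction times become an individual-difference score (simulated data; not the authors' pipeline), one can average log suppression times per participant and check split-half stability:

```python
# Sketch: per-participant NPS as mean log suppression time, plus a
# split-half correlation; all data below are simulated stand-ins.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Fake trial data: 50 participants x 40 bCFS trials, RTs in seconds
trials = pd.DataFrame({
    "subject": np.repeat(np.arange(50), 40),
    "trial": np.tile(np.arange(40), 50),
    "rt": np.exp(rng.normal(np.repeat(rng.normal(0.5, 0.3, 50), 40), 0.4)),
})

trials["log_rt"] = np.log(trials["rt"])
nps = trials.groupby("subject")["log_rt"].mean()     # one NPS score each

even = trials[trials["trial"] % 2 == 0].groupby("subject")["log_rt"].mean()
odd = trials[trials["trial"] % 2 == 1].groupby("subject")["log_rt"].mean()
print(f"split-half r = {np.corrcoef(even, odd)[0, 1]:.2f}")
```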
Like many others (e.g., Macrae, Visokomogilski, Golubickis, Cunningham, & Sahraie, 2017; Salomon, Lim, Herbelin, Hesselmann, & Blanke, 2013; Yang, Zald, & Blake, 2007), we hold that bCFS is particularly suited for assessing differences in access to awareness for two reasons. First, CFS allows for subliminal presentations that can last seconds. Thus, unlike previous masking techniques, it allows for lengthy non-conscious processing. Second, bCFS allows one to measure spontaneous emergence into awareness, focusing on the moment in which a previously non-conscious stimulus suddenly becomes conscious.
Crucially, to overcome the limitations associated with using just one paradigm, we use another long-duration masking technique that has the same advantage, Repeated Mask Suppression (RMS; Abir & Hassin, in preparation). Using two different paradigms allows us to generalize our conclusions beyond the specific characteristics and limitations of each of the paradigms.
In eight experiments, we document large differences between individuals in NPS. Across experiments, we show that some people are consistently faster than others in becoming aware of a wide variety of stimuli, including words, numbers, faces, and emotional expressions.
Moreover, this individual variance is general across paradigms: Participants who are fast prioritizers in one paradigm (CFS; Tsuchiya & Koch, 2005) are also fast when tested using a different suppression method (RMS; Abir & Hassin, in preparation; see Experiment 3), a difference which is stable over time (Experiment 7). We extensively examined possible sources of this individual trait. Our experiments establish that NPS cannot be explained by variation in conscious cognitive speed (Experiment 4), detection threshold (Experiment 5), visual short-term memory (Experiment 6), and alerting, orienting and executive attention (Experiment 7). Finally, we find that differences in NPS are associated with self-reported differences in the richness of experience (Experiment 8). Based on these results we conclude that NPS is a robust trait and has subjectively noticeable ramifications in everyday life. We discuss possible implications of this trait in the General Discussion.
Discussion
Overall, the current findings paint a clear picture. In eight experiments we discovered a highly consistent, stable and strong cognitive characteristic: NPS. NPS manifested across a large variety of stimuli – from faces and emotional expressions, through language, to numbers. It was stable over time (20 minutes) as well as across measurement paradigms (bCFS vs. bRMS). We additionally found NPS to be independent of conscious speed, short-term visual memory, visual acuity and three different attentional functions, and largely independent of conscious detection thresholds.
In previous research differences in suppression time between stimuli (e.g. upright faces, Stein et al., 2012; primed stimuli, Lupyan & Ward, 2013) have been used as a measure of stimuli’s priority in access to awareness. In such research, individual variance in participants' overall speed of becoming aware of stimuli is treated, if it is considered at all, as nuisance variance during analysis (e.g., Gayet & Stein, 2017). A notable exception to this trend is a recent article (Blake, Goodman, Tomarken, & Kim, 2019) that documented a relationship between individual differences in the masking potency of CFS and subsequent binocular rivalry performance. Here, we greatly extend this recent result as we show that individual variance in NPS is highly consistent across stimuli and time, generalizes beyond bCFS, and is not explained by established individual differences in cognition.
Because of its effect on conscious experience, it is easy to see how NPS may be crucial for tasks such as driving or sports, and in professions such as law enforcement and piloting, where the duration required before conscious processing initiates can have crucial and predictable implications. In fact, NPS may be an important factor in any task that requires both conscious processing and speeded reaction. Understanding NPS, its underlying processes and downstream consequences, is therefore a promising avenue for further research.
Another promising direction would be to examine NPS using neuroscience tools, especially with respect to the underpinnings of conscious experience. First, understanding what neural substrates underpin individual differences in NPS may shed new light on the age-old puzzle of what determines our conscious stream. Second, understanding NPS may shed new light on some of the currently intractable problems in the field of consciousness research, such as separating neural activity that underlies consciousness per se, from neural activity that underlies the non-conscious processes that precede or follow it (De Graaf, Hsieh, & Sack, 2012). Thus, understanding NPS may provide missing pieces for many puzzles both in relation to how conscious experience arises and in relation to how it may differ between individuals, and what the consequences of such differences might be.
Another nation in which the Flynn effect seems to reverse: IQ in Romania has been increasing by approximately 3 IQ points per decade, but the continuous positive outlook is in question as modern generations show signs of IQ “fatigue”
Time and generational changes in cognitive performance in Romania. George Gunnesch-Luca, Dragoș Iliescu. Intelligence, Volume 79, March–April 2020, 101430. https://doi.org/10.1016/j.intell.2020.101430
Highlights
• The Flynn effect can be also observed in Romanian samples.
• IQ in Romania is increasing by approximately 3 IQ points per decade.
• Both period and generational effects contribute to the overall effect.
• The continuous positive outlook is in question as modern generations show signs of IQ “fatigue”.
Abstract: The Flynn effect describes sustained gains in cognitive performance that have been observed in the past century. These improvements are not evenly distributed, with strong variations across regions or groups. To this effect, we report time and generational trends in IQ development in Romania. Using pooled repeated cross-sectional data ranging from 2003 to 2018 (N = 12,034), we used Hierarchical Age-Period-Cohort Models (HAPC) on data measured with the Multidimensional Aptitude Battery II. The results show an increase in measured performance of about one third of an IQ point per year, mainly driven by individual level effects and with additional variance attributable to generational (cohort) and period effects.
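A minimal sketch of an HAPC-style specification follows (not the authors' code; the column names and data file are assumptions): age enters as a fixed effect while period and cohort enter as crossed random intercepts, implemented in statsmodels by using a single dummy group with two variance components.

```python
# Sketch: Hierarchical Age-Period-Cohort model with crossed random
# intercepts for period and cohort (column names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("romania_iq.csv")     # hypothetical pooled cross-sections
df["one"] = 1                          # single group -> crossed components

model = smf.mixedlm(
    "iq ~ age + I(age ** 2)",          # individual-level age fixed effects
    data=df,
    groups="one",
    re_formula="0",                    # no random effect for the dummy group
    vc_formula={"period": "0 + C(period)",    # random period intercepts
                "cohort": "0 + C(cohort)"},   # random cohort intercepts
)
result = model.fit()
print(result.summary())
```

The estimated variance components for period and cohort correspond to the period and generational contributions the abstract describes, alongside the individual-level (age) effects.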
Check also Cohort differences on the CVLT-II and CVLT3: evidence of a negative Flynn effect on the attention/working memory and learning trials. Lisa V. Graves, Lisa Drozdick, Troy Courville, Thomas J. Farrer, Paul E. Gilbert & Dean C. Delis. The Clinical Neuropsychologist, Dec 12 2019. https://www.bipartisanalliance.com/2019/12/usa-evidence-of-negative-flynn-effect.html
Associations between cognitive ability and education, from middle childhood to old age, as well as their links with wealth, morbidity and mortality: the strong genetic basis of the association is amplified by environmental experiences
Cognitive ability and education: how behavioural genetic research has advanced our knowledge and understanding of their association. Margherita Malanchini et al. Neuroscience & Biobehavioral Reviews, January 20, 2020. https://doi.org/10.1016/j.neubiorev.2020.01.016
Highlights
• The evidence reviewed points to a strong genetic basis in the association between cognitive ability and academic performance, observed from middle childhood to old age.
• Over development, genetic influences are amplified by environmental experiences through gene-environment interplay.
• The strong stability and heritability of academic performance is not driven entirely by cognitive ability.
• Other educationally-relevant noncognitive characteristics contribute to accounting for the genetic variation in academic performance beyond cognitive ability.
• Overall, genetic research has provided compelling evidence that has resulted in greatly advancing our knowledge and understanding of the association between cognitive ability and learning.
• Considering both cognitive and noncognitive skills as well as their biological and environmental underpinnings will be fundamental in moving towards a comprehensive, evidence-based model of education.
Abstract: Cognitive ability and educational success predict positive outcomes across the lifespan, from higher earnings to better health and longevity. The shared positive outcomes associated with cognitive ability and education are emblematic of the strong interconnections between them. Part of the observed associations between cognitive ability and education, as well as their links with wealth, morbidity and mortality, are rooted in genetic variation. The current review evaluates the contribution of decades of behavioural genetic research to our knowledge and understanding of the biological and environmental basis of the association between cognitive ability and education. The evidence reviewed points to a strong genetic basis in their association, observed from middle childhood to old age, which is amplified by environmental experiences. In addition, the strong stability and heritability of educational success are not driven entirely by cognitive ability. This highlights the contribution of other educationally relevant noncognitive characteristics. Considering both cognitive and noncognitive skills as well as their biological and environmental underpinnings will be fundamental in moving towards a comprehensive, evidence-based model of education.
High intellect/imagination predicted digital aggression in lab and Twitter; low conscientiousness predicted digital aggression on Twitter and self-reports
A Multi-Method Investigation of the Personality Correlates of Digital Aggression. M. Kim, S. L. Clark, M. B. Donnellan, S. A. Burt. Journal of Research in Personality, January 20 2020, 103923. https://doi.org/10.1016/j.jrp.2020.103923
Highlights
• A multi-method investigation of the personality correlates of digital aggression.
• The ‘Big 5’ differentially predicted all three digital aggression measures.
• High intellect/imagination predicted digital aggression in lab and Twitter.
• Low conscientiousness predicted digital aggression on Twitter and self-reports.
• Personality predictors of digital aggression may be context-specific.
Abstract: Digital aggression (DA) refers to the use of computer-mediated technologies to harm others. A handful of previous studies have provided mixed results regarding the personality correlates of DA. We clarified these findings by analyzing the associations between three measures of DA (behavioral, Twitter, and self-report) and the Big Five traits using data from 1167 undergraduate participants. Big Five personality trait measures predicted all three DA measures, but results varied across particular assessments of DA. These results point to possible moderators and potentially important differences within the broader construct of DA.
Ideological orientations are substantially heritable, but the public is largely non-ideological; ideological orientations turn out to be extraordinarily heritable for the most informed citizens, and much less so for the others
Genes, Ideology, & Sophistication. Nathan P. Kalmoe. Jan 2020. https://www.dropbox.com/s/4q14j1qwx94ub7d/Kalmoe%20-%20Genes%2C%20Ideology%2C%20%26%20Sophistication.pdf?dl=0
Abstract: Twin studies show that ideological orientations are substantially heritable, but how does that comport with evidence showing a largely non-ideological public? This study integrates these two important literatures and tests whether political sophistication is necessary for genetic predispositions to find expression in political attitudes and their organization. Data from the Minnesota Twin Study show that ideological orientations are extraordinarily heritable for the most informed citizens—far more so than full-sample averages in past tests show—but barely heritable among the rest. This holds true for the Wilson-Patterson ideological index scores and a related measure of ideological consistency, and somewhat less so for individual W-P items. Heritability for ideological identification is non-monotonic across knowledge; partisanship is most heritable for the least knowledgeable. The results resolve the tension between the two fields by showing that political knowledge is required to link genetic predispositions with specific attitudes.
DISCUSSION
I set out to test whether average heritability estimates differ by levels of political knowledge, as the prodigious literature on the limits of mass belief systems suggests they might. The results grandly support these expectations: high-knowledge twin pairs (top 21%) show heritability estimates ranging from 49% to 82% (average 65%) across a variety of ideology estimates. In contrast, the least knowledgeable half of the sample showed comparable estimates of 0-40% (average 18%). To sum it up: ideological orientations appear extraordinarily heritable for the most sophisticated citizens—far more so than full-sample averages in past tests show—but hardly heritable at all among the rest.
How well does this twin sample reflect the national population? Arceneaux and colleagues (2012) show Minnesota Twin Study respondents are older and more educated than the American public, on average, but they are similarly interested in politics and similarly unconstrained in attitudes, like national samples. That suggests these tests are a reasonable base from which to infer general population dynamics, at least as they relate to political sophistication.
Converse (2000) argued that ideological tests must always account for the public’s huge variance in political knowledge—and that doing otherwise risked concealing more than it revealed. Simply put, average ideological estimates ignore qualitative differences in the nature of belief systems. The tests here show the utility of extending Converse’s exhortation to estimates of genetic influence. Low-knowledge citizens may also carry heritable ideological predispositions, but those proto-orientations lie dormant without the sophistication and engagement to connect them to concrete sociopolitical attitudes and broader liberal-conservative belief systems. Political knowledge is necessary for that political development. Merging two important and related but isolated fields adds insight into the origins of ideological beliefs and the conditions for genetic influence in politics.
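As background on how twin designs produce heritability figures like these, the simplest estimator is Falconer's formula, which contrasts monozygotic (MZ) and dizygotic (DZ) twin correlations. The paper presumably fits formal ACE models; the sketch below, with made-up correlations chosen only to echo the reported averages, illustrates the logic:

```python
# Falconer's formula: h2 = 2*(r_MZ - r_DZ); shared environment
# c2 = 2*r_DZ - r_MZ; non-shared environment e2 = 1 - r_MZ.
def falconer(r_mz: float, r_dz: float) -> dict:
    """Crude twin-based variance decomposition from MZ/DZ correlations."""
    h2 = 2 * (r_mz - r_dz)
    return {"h2": round(h2, 2),
            "c2": round(2 * r_dz - r_mz, 2),
            "e2": round(1 - r_mz, 2)}

# Hypothetical correlations picked to echo the averages reported above
print(falconer(r_mz=0.65, r_dz=0.33))  # high-knowledge pairs: h2 ~ 0.64
print(falconer(r_mz=0.20, r_dz=0.11))  # low-knowledge half:  h2 ~ 0.18
```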
Real ideological coherence, stability of political beliefs: Results show polar, coherent, stable, and potent ideological orientations only among the most knowledgeable 20-30% of citizens
Uses and abuses of ideology in political psychology. Nathan P. Kalmoe. Political Psychology, forthcoming. Jan 2020. https://www.dropbox.com/s/owa710fc1fy081n/Kalmoe%20-%20PP%20-%20Uses%20%26%20Abuses%20of%20Ideology.pdf?dl=0
Abstract: Ideology is a central construct in political psychology. Even so, the field’s strong claims about an ideological public rarely engage evidence of enormous individual differences: a minority with real ideological coherence, and weak to non-existent political belief organization for everyone else. Here, I bridge disciplinary gaps by showing the limits of mass political ideology with several popular measures and components—self-identification, core political values (egalitarianism and traditionalism’s resistance to change), and policy indices—in representative U.S. surveys across four decades (Ns~13k-37k), plus panel data testing stability. Results show polar, coherent, stable, and potent ideological orientations only among the most knowledgeable 20-30% of citizens. That heterogeneity means full-sample tests overstate ideology for most people but understate it for knowledgeable citizens. Whether through top-down opinion leadership or bottom-up ideological reasoning, organized political belief systems require political attention and understanding to form. Finally, I show that convenience samples make trouble for ideology generalizations. I conclude by proposing analytic best practices to help avoid over-claiming ideology in the public. Taken together, what first looks like strong and broad ideology is actually ideological innocence for most and meaningful ideology for a few.
Keywords: ideology, polarization, knowledge, values, attitudes, methods
Immigrants to the US live disproportionately in metropolitan areas where nominal wages are high & real wages are low; they accept such wages to locate in cities that are coastal, larger, & offer deeper immigrant networks
Immigration and the pursuit of amenities. David Albouy, Heepyung Cho, Mariya Shappo. Journal of Regional Science, November 23 2019. https://doi.org/10.1111/jors.12475
Abstract: Immigrants to the United States live disproportionately in metropolitan areas where nominal wages are high, but real wages are low. This sorting behavior may be due to preferences toward certain quality‐of‐life amenities. Relative to U.S.‐born inter‐state migrants, immigrants accept lower real wages to locate in cities that are coastal, larger, and offer deeper immigrant networks. They sort toward cities that are hillier and also larger and networked. Immigrants come more from coastal, cloudy, and safer countries—conditional on income and distance. They choose cities that resemble their origin in terms of winter temperature, safety, and coastal proximity.
7. CONCLUSION
Given that economists generally model immigrants as pursuing greater market consumption, it seems surprising that they live in places that are so expensive. Yet theories of spatial equilibrium imply that the lower real wages immigrants receive from picking such expensive cities are compensated for by quality-of-life amenities.
In particular, immigrants seem to gravitate towards natural amenities such as sunshine and hilly geography. Most of all, immigrants seem to care for large, often coastal cities, known for their diversity. Native migrants, on the other hand, move to smaller cities, albeit ones that are relatively expensive and highly educated. This supports an interesting, if ancient, pattern whereby migrants land initially on coasts but, over time, eventually move inland. Natives do seem to be choosy in where they move, as they too move to higher-amenity areas.
Our results highlight that the pursuit of amenities may play as much of a role in determining where immigrants locate as jobs do. In other words, factors that affect labor supply may be as important as those that affect labor demand. This may reflect the fact that many immigrants already see enormous income gains by moving to the U.S., and so care not only about market goods but about non-market goods as well. As our push regressions suggest, some may indeed pursue better amenities than in their origin country. Nevertheless, immigrants also seek out amenities, as well as people, that resemble those of their origin countries. Indeed, the amenities that remind someone of home may be the kind most worth paying for.
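The spatial-equilibrium accounting behind this conclusion can be made concrete with a back-of-envelope sketch. The function and all numbers below are hypothetical illustrations of the compensating-differential idea, not estimates from the paper:

```python
# If a worker accepts a lower real wage in an expensive city, the shortfall
# can be read as an implicit annual payment for that city's amenities.
def implied_amenity_value(nominal_wage: float,
                          cost_of_living_index: float,
                          reference_real_wage: float) -> float:
    """Real wage = nominal wage deflated by local prices; the gap to the
    reference real wage is the implied amenity payment (illustrative)."""
    real_wage = nominal_wage / cost_of_living_index
    return reference_real_wage - real_wage

# Hypothetical coastal metro: high nominal wage, much higher price level
print(implied_amenity_value(nominal_wage=70_000,
                            cost_of_living_index=1.35,
                            reference_real_wage=55_000))  # ~ $3,148 per year
```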
CEO Pay in Perspective: A typical employee of the S&P500 firms implicitly “contributes” on average one half of one percent of his/her individual salary to the CEO's pay
CEO Pay in Perspective. Marcel Boyer. Centre interuniversitaire de recherche en analyse des organisations, Dec 2019. https://cirano.qc.ca/files/publications/2019s-33.pdf
Abstract: The CEO pay ratio, measured as the ratio of CEO pay over the median salary of a firm’s employees, is the most often quoted number in the popular press. This ratio reached 281 this last year for S&P500 firms, the largest US firms by capitalization (as of November 21 2019). But the B-ratio I propose here, measured as the CEO pay over the total payroll of the firm, relates CEO pay to the salary of each employee and may be the most relevant and informative figure on CEO pay as perceived by the firm’s employees themselves. How much does a typical employee of the S&P500 firms implicitly “contribute” to the salary of his/her CEO? An amount of $273 on average, or 0.5% of one’s salary, that is, one half of one percent on an individual salary basis. To assess whether such a contribution is worthwhile, one must determine the value of the CEO for the organization and its workers and stakeholders. The Appendix provides the data for all 500 firms regrouped in 10 industries (Bloomberg classification).
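The arithmetic behind the B-ratio is simple enough to check directly. The sketch below uses a hypothetical firm whose figures are chosen to land near the paper's weighted averages; none of the numbers come from the Appendix data:

```python
# Three views of CEO pay for one (hypothetical) firm: the popular pay
# ratio, Boyer's B-ratio, and the implicit per-employee contribution.
def ceo_pay_ratios(ceo_pay: float, median_salary: float,
                   total_payroll: float, n_employees: int) -> tuple:
    pay_ratio = ceo_pay / median_salary       # CEO pay vs. median employee
    b_ratio = ceo_pay / total_payroll         # share of the total wage bill
    per_employee = ceo_pay / n_employees      # implicit $ per employee
    return pay_ratio, b_ratio, per_employee

ratio, b, dollars = ceo_pay_ratios(ceo_pay=10_920_000, median_salary=54_600,
                                   total_payroll=2_184_000_000,
                                   n_employees=40_000)
print(ratio, f"{b:.2%}", dollars)  # 200.0  0.50%  273.0
```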
IV. Conclusion
The CEO pay ratio, defined as the CEO pay (not the total compensation of a CEO, since it typically excludes different forms of incentive bonuses) over the median salary of the firm’s employees, is one of the most discussed topics in society today. I showed that the CEO pay ratio for the S&P500 firms (the largest US-traded firms by capitalization) reached an average value of 281 this last year (as of November 21 2019), a median value of 170 and a weighted average value of 185, the last two ratios being more representative of the overall distribution of relative CEO pay. Other ratios, clearly more informative and revealing for stakeholders (employees, citizens, shareholders, suppliers and clients), are the CEO pay per employee (average of $1,961, median of $564, weighted average of $273) and the B-ratio, defined as the CEO pay over the total payroll of the firm, hence the implicit contribution of each employee (as a % of his/her salary) to the CEO pay (average of 2.30%, median of 0.88%, weighted average of 0.50%).
I discussed above the value of management (the CEO) from a real-options approach, which is arguably the proper methodology to use. Whether a given CEO is worth the pay she/he is getting remains an open question. But the difference between a good one and a bad one for employees and other stakeholders is potentially huge.
The CEO pay debate raises two additional, crucially important and related questions. First, the question of inequalities in society, their determining factors, and their evolution over time. I discuss that question in my forthcoming paper “Inequalities: Income, Wealth, Consumption”, where I show that the levels of inequality in income and wealth decreased between 1920 and 1980 but have increased between 1980 and today, while inequality in consumption, arguably the most important form of inequality, has been decreasing over the whole period and in particular over the last two decades. I attempt in that paper to identify and explain the determinants of those movements. Second, the question of the social role of inequalities in income and wealth. I discuss that question in my forthcoming paper “The Social Role of Inequalities: Why Significant Inequality Levels in Income and Wealth Are Important for Our Prosperity and Collective Well-Being”, where I show that inequalities in income and wealth develop from two related social needs, namely the need to ensure a proper level of savings and investments and the need to induce the proper but individually costly acquisition of new competencies, both to favor increased levels of productivity and prosperity.
A Museum Is a Terrible Place for a Date
A Museum Is a Terrible Place for a Date. Sophia Benoit. GQ, January 16, 2020. https://www.gq.com/story/museum-dates-are-bad
Strictly-enforced quiet, bright lighting, and scarce opportunities for eye contact? No thanks.
As the internet is all too eager to inform me, a lot of my opinions are wrong. I think lavender, the ubiquitous scent of relaxation products, smells both too cloying and too medical to soothe. I don’t like sourdough bread, a blasphemous thing to say in an upscale sandwich shop or around Food People in general. I’m not a fan of Succession, unlike the rest of my media comrades and many awards show voters. I’m putting this all out here because I want to be transparent that a lot of my “takes” are… simply not good. One that’s absolutely, inarguably, 100 percent correct though? The next time someone you're chatting with on Tinder suggests you two check out the new MoMA exhibit instead of grabbing drinks, playing mini-golf, or really doing just about any activity under the sun, pivot immediately. Why? Because museums are garbage date spots.
Here is where four to eight people are going to begin mentally composing aggrieved emails that say, “The best date of my entire life took place at The Met; how dare you?” And I will immediately mark them as spam, because museum dates, especially early on in relationships, are second only to poetry when it comes to being a deceitful enemy of love and horniness.
There are many reasons why people who are considering boning should not elect to spend their time before said boning in a spacious, brightly-lit room, but the main one is this: timing. No two people alive move through museums at the same rate. Personally, unless I have purchased a guided audio tour that’s giving me juicy gossip on every artist, like which relatives they slept with and which pope they sent into apoplexy, I’m the kind of person who needs Heelys to get through a museum fast enough. I have no fine art education, which is admittedly my fault, and I don’t get much enjoyment out of paintings without knowing some background. Even then, I get bored easily in quiet, deferential spaces.
Meanwhile, my boyfriend makes a whole production of it. He can look at one singular painting for longer than it takes me to make a whole lap around the room. (What is he looking at? I’ve spent less time analyzing my best friend’s ex’s engagement photos than he spends on a little statue). He and I are extremely mismatched, yes, but it’s not just the two of us. Everyone has different museum speeds—who is supposed to adjust, and how much? Is the brisk museum enjoyer supposed to linger with the slow peruser, growing bored out of their mind, feigning interest? Or does the lackadaisical browser need to toot toot hurry up? There’s no correct etiquette, just awkwardness.
The idea, of course, is that art will provoke stimulating conversation—great in theory, but unlikely in practice. In this fantasy, you go agog at all the same pieces like the beginning of act two of an indie rom com where you both connect over an abiding love of Artemisia Gentileschi. But that simply doesn’t happen. Either you’re the kind of person who gazes at a painting, speed-reads the plaque, and decides how much you like it, or you stalk around the place with your arms joined behind your back like a Serious Art Person while you form deep, emotional connections with the artwork. This means that when two people enter a museum, one might exit feeling somewhat relaxed (museums are pretty soothing) but otherwise unaffected, while the other is reeling and trying to not think about the painting that reminded them of their estranged relationship with their father. Now what? You two are emotionally out of sync and you’ve just walked two miles, so you have to find a place to sit down pretty soon.
The problem isn’t just that museums—well, the art within them—inspire discordant emotional responses. Lots of, perhaps even most, dates involve two people feeling very differently. The issue is that museums don’t offer a particularly conducive environment to actually talk about these feelings. They’re meant for quiet reflection. There are intimidating security guards stationed at every corner, ready to yell at you the moment your face gets too close to a piece. Plus, everything you say echoes, so your uneducated critique of a classic piece of modern art is earning you glares from an art student who has spent the last two months studying the work of Wassily Kandinsky.
I will also reiterate that these mismatches are especially difficult to reconcile early on. Expressing heavy emotions to someone and getting nothing in return can quickly snuff out a romantic spark. “I loved the painting of the woman on the chair. It reminded me of all the years I felt totally isolated and alone,” being met with, “Yeah. It was nice,” is not a recipe for “happily ever after” or even, “Do you want to come back to my place?” Please reserve deep emotional trauma for date seven and beyond.
Reciprocal enthusiasm and physical touch are two of the best ways to scoot romance along, and you aren’t going to find those in museums. In fact, you don’t have to be near your date, or even make any eye contact at all, during the entire ordeal.
I will absolutely concede that the museum date sounds romantic. I do not think anyone is a fool for being seduced by the idea of a joint edifying pursuit, of ogling masterpieces with a person you’d like to ogle naked. But unless you two were both art history majors, the reality is likely to fall far short of the expectation.
Generalized trust in others lowers mortality, mainly mortality caused by CVD; trust seems to act as a buffer reducing the anxiety stemming from others' behavior
Trust, happiness and mortality: Findings from a prospective US population-based survey. Alexander Miething, Jan Mewes, Giuseppe N. Giordano. Social Science & Medicine, January 18 2020, 112809. https://doi.org/10.1016/j.socscimed.2020.112809
Highlights
• Happiness and generalised trust both touted as independent predictors of mortality.
• Trust but not happiness predicts all-cause mortality.
• Trust predicts mortality caused by CVD but not by neoplasia.
• Psychosocial mechanisms might drive the association between trust and health.
Abstract: There has been an abundance of research discussing the health implications of generalised trust and happiness over the past two decades. Both attitudes have been touted as independent predictors of morbidity and mortality, with strikingly similar trajectories and biological pathways being hypothesised. To date, however, trust and happiness have not been considered simultaneously as predictors of mortality. This study, therefore, aims to investigate the effects of generalised trust and happiness on all-cause and cause-specific mortality. The distinction between different causes of death (i.e. cardiovascular vs. cancer-related mortality) allowed us to assess if psychosocial mechanisms could account for associations between generalised trust, happiness and mortality. The study sample was derived from US General Social Survey data from 1978 to 2010 (response rates ranged from 70 to 82 per cent), and combined with death records from the National Death Index. The analytical sample comprised 23,933 individuals with 5382 validated deaths from all-cause mortality by 2014. Analyses were performed with Cox regression models and competing-risk models. In final models, generalised trust, but not happiness, showed robust and independent associations with all-cause mortality. Regarding cause-specific mortality, trust only showed a significant relationship with cardiovascular mortality. The distinct patterns of association between generalised trust and all-cause/cause-specific mortality suggest that the trust-mortality relationship could be driven by cardiovascular mortality. In turn, this supports the feasibility of psychosocial pathways as possible biological mechanisms from distrust to mortality.
Keywords: Trust; Happiness; All-cause mortality; Cause-specific mortality; Psychosocial pathway; Cox regression; Competing-risk regression; United States
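As a rough illustration of the modelling strategy (Cox proportional-hazards regression with trust and happiness entered together, so each is adjusted for the other), here is a sketch on simulated data using the lifelines library. Variable names, coding, and effect sizes are all assumptions, not the authors' code or results:

```python
# Simulated stand-in for the GSS-NDI design: binary trust and happiness,
# survival time in years, a death indicator, and age as a confounder.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "trust": rng.integers(0, 2, size=n),  # 1 = "most people can be trusted"
    "happy": rng.integers(0, 2, size=n),  # 1 = very happy
    "age": rng.integers(18, 90, size=n),
})
# Illustrative hazard: distrust and age raise mortality risk
hazard = 0.01 * np.exp(0.3 * (1 - df["trust"]) + 0.4 * (df["age"] - 50) / 40)
df["years"] = rng.exponential(1 / hazard)
df["died"] = (df["years"] < 36).astype(int)  # administrative censoring
df["years"] = df["years"].clip(upper=36)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="died")
cph.print_summary()  # expect a hazard ratio below 1 for trust here
```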
Discussion
This US population-based study is the first to investigate whether individual-level generalised trust and happiness independently predicted all-cause mortality. By using cause-specific mortality outcomes, we further sought to corroborate the hypothesis that psychosocial mechanisms could provide a feasible pathway from low trust and unhappiness to mortality. Our findings confirmed that individual-level trust maintained independent and robust associations with all-cause and cardiovascular-specific mortality, even after socio-economic and other demographic factors were considered. Results presented here thus mirror previous empirical findings of associations between generalised trust and longevity (Islam et al., 2006; Kawachi et al., 1997; Murayama et al., 2012).
Conversely, associations between happiness and all-cause mortality were fully attenuated once adjusting for confounders. Several mechanisms have been proposed as to why generalised trust may lead to better health and longevity. Some argue that trust mobilises social support and enables greater collective action, providing greater access to those resources needed to cope better with any potential health hazard (Elgar, 2010; Moore & Kawachi, 2017). Others hint at the genetic aetiology of trust (Oskarsson et al., 2012; Wootton et al., 2016), though there are currently no studies that investigate whether the genetic underpinnings of distrust/trust are also linked with known disease-risk and/or protective gene variants.
That generalised trust is robustly associated with cardiovascular mortality in this study lends further weight to the idea that psychosocial pathways are a plausible biological mechanism from trust to health (Abbott & Freeth, 2008). To clarify further, if individual-level trust reflects social trustworthiness, then lower levels of trust could be indicative of higher social stressors (Giordano & Lindström, 2010; Wilkinson, 1996). From this perspective, generalised trust acts as a buffer reducing the anxiety stemming from the behaviour of others (Abbott & Freeth, 2008). If high trust facilitates collective action (Coleman, 1988), then it is reasonable to assume that low trust hinders this process, creating greater concern during everyday transactions compared to those conducted within a ‘high-trust’ milieu. It has been argued that exposure to high levels of social stressors could have a deleterious impact on the cardiovascular system. The biological pathway through which this acts is the hypothalamic-pituitary-adrenal (HPA) axis, the overstimulation of which leads to increased levels of blood cortisol (Rosmond & Björntorp, 2000). Prolonged and/or repeatedly high blood cortisol levels released under conditions of perceived stress have been shown to increase one’s risk of atherosclerosis (Dekker et al., 2008) and coronary artery calcification over the life-course (Hamer et al., 2012).
In this study, individuals who distrusted others had, in comparison to the trusting group, a 13 per cent elevated risk of death caused by cardiovascular disease (Table 3, Models 7 & 8). However, from the analyses performed, it is not possible to distinguish whether individual trust is an interpretation of environmental trustworthiness (hinting at the social capital debate) or whether it captures pathological distrust, an element of the personality trait known as cynical hostility (Kawachi, 2018). Cynically hostile individuals are also reported to have an increased cardiovascular mortality risk, with possible pathways from distrust to cardiovascular mortality including socio-economic status and the same HPA-axis mechanisms previously described (Everson et al., 1997).
Strengths & Limitations
This is the first study to independently investigate the effects of both generalised trust and happiness on mortality outcomes, using rich US survey data that span more than three decades. The GSS data were prospectively linked to mortality registries from the NDI, which provided objective and validated specific cause-of-death categories. While these pooled GSS data are nationally representative, this study design relied on single cross-sectional observations, which do not capture change over time. Though a study based on UK panel data showed how individuals’ generalised trust could change (Giordano et al., 2012), individuals’ generalised trust tends to re-adapt to a certain ‘set point’ in the longer term (Dawson, 2017). Considerable stability is also attributed to levels of happiness (Lucas & Donnellan, 2007).
While our study corroborates hypotheses linking generalised trust to longevity, our analysis has consciously ignored important parts of the wider debate on social capital and health. We refrained from analysing additional social capital measures for three important reasons. The first is a simple methodological one: not enough rounds of the GSS contained the desired measures to obtain statistically powerful samples. Secondly, while generalised trust is ‘sticky’ in adulthood (Uslaner, 2002), other important social capital proxies (e.g., membership in (voluntary) associations and community ties) are not. Our data lacked the possibility to track individuals’ membership, networks and community social capital longitudinally, making inferences from any such estimates untrustworthy. Thirdly, using Canadian survey data, Carpiano and Fitterer (2014) have shown that generalised trust could be conceptually different from other social capital measures.
While survey research generally favours multiple-item scales over single-item measures, our measures of trust and happiness belonged to the latter group. Regarding happiness, the GSS simply lacks additional measures. As for trust, previous research highlighted that the single-item trust measure outperforms the GSS three-item trust scale in terms of reliability and validity (Uslaner, 2015). Moreover, the standard single-item trust measure has, for a long time, featured in a range of international survey studies. Opting for the single-item trust measure thus increases the possibility of replication in future studies in other contexts.
We investigated cause-specific mortality in an attempt to substantiate that psychosocial pathways were one plausible biological mechanism from generalised trust to health. Unfortunately, there is no possibility to track health behaviour in the GSS-NDI data after 1994, as questions regarding smoking and drinking were no longer employed. We thus lacked the opportunity to establish associations between trust and CVD mortality adjusting for risky health behaviour. We deliberately focused on deaths caused by either CVD or by neoplasia for two reasons: firstly, because psychosocial pathways are purported to play a greater role in CVD-related deaths; secondly, because they are the two most frequent causes of death in these data. Future studies could investigate other associations between trust and cause-specific deaths, e.g. the famously theorised association between (a lack of) generalised trust and suicide (Durkheim, 2005). Unfortunately, the GSS-NDI data drawn for the purposes of this study simply lack the statistical power to consider further categories of cause-specific mortality. Finally, all analyses were conducted at the individual level, which makes it impossible to ascertain whether the presented relationships with mortality are due to trust being an individual or a contextual resource (Giordano et al., 2019).
Highlights
• Happiness and generalised trust both touted as independent predictors of mortality.
• Trust but not happiness predicts all-cause mortality.
• Trust predicts mortality caused by CVD but not by neoplasia.
• Psychosocial mechanisms might drive the association between trust and health.
Abstract: There has been an abundance of research discussing the health implications of generalised trust and happiness over the past two decades. Both attitudes have been touted as independent predictors of morbidity and mortality, with strikingly similar trajectories and biological pathways being hypothesised. To date, however, neither trust nor happiness have been considered simultaneously as predictors of mortality. This study, therefore, aims to investigate the effects of generalised trust and happiness on all-cause and cause-specific mortality. The distinction between different causes of death (i.e. cardiovascular vs. cancer-related mortality) allowed us to assess if psychosocial mechanisms could account for associations between generalised trust, happiness and mortality. The study sample was derived from US General Social Survey data from 1978 to 2010 (response rates ranged from 70 to 82 per cent), and combined with death records from the National Death Index. The analytical sample comprised 23,933 individuals with 5382 validated deaths from all-cause mortality by 2014. Analyses were performed with Cox regression models and competing-risk models. In final models, generalised trust, but not happiness, showed robust and independent associations with all-cause mortality. Regarding cause-specific mortality, trust only showed a significant relationship with cardiovascular mortality. The distinct patterns of association between generalised trust and all-cause/cause-specific mortality suggest that their relationship could be being driven by cardiovascular mortality. In turn, this supports the feasibility of psychosocial pathways as possible biological mechanisms from distrust to mortality.
Keywords: TrustHappinessAll-cause mortalityCause-specific mortalityPsychosocial pathwayCox regressionCompeting-risk regressionUnited States
Discussion
This US population-based study is the first to investigate whether individual-level generalised
trust and happiness independently predicted all-cause mortality. By using cause-specific
mortality outcomes, we further sought to corroborate the hypothesis that psychosocial
mechanisms could provide a feasible pathway from low trust and unhappiness to mortality.
Our findings confirmed that individual-level trust maintained independent and robust
associations with all-cause and cardiovascular-specific mortality, even after socio-economic
and other demographic factors were considered. Results presented here thus mirror previous
empirical findings of associations between generalised trust and longevity (Islam et al., 2006;
Kawachi et al., 1997; Murayama et al., 2012).
Conversely, associations between happiness and all-cause mortality were fully attenuated
once adjusting for confounders. Several mechanisms have been proposed as to why
generalised trust may lead to better health and longevity. Some argue that trust mobilises
social support and enables greater collective action, providing greater access to those
resources needed to cope better with any potential health hazard (Elgar, 2010; Moore &
Kawachi, 2017). Others hint at the genetic aetiology of trust (Oskarsson et al., 2012; Wootton
et al., 2016), though there are currently no studies that investigate if the genetic underpinnings
of distrust/trust are also linked with known disease risk and/or protective gene variants.
That generalised trust is robustly associated with cardiovascular mortality in this study lends
further weight to the idea that psychosocial pathways are a plausible biological mechanism
from trust to health (Abbott & Freeth, 2008). To clarify further, if individual-level trust
reflects social trustworthiness, then lower levels of trust could be indicative of higher social
stressors (Giordano & Lindström, 2010; Wilkinson, 1996). From this perspective, generalised
trust acts as buffer reducing the anxiety stemming from the behaviour of others (Abbott &
Freeth, 2008). If high trust facilitates collective action (Coleman, 1988), then it is reasonable
to assume that low trust hinders this process, creating greater concern during every-day
transactions compared to those conducted within a ‘high-trust’ milieu. It has been argued that
exposure to high levels of social stressors could have a deleterious impact on the
cardiovascular system. The biological pathways through which this acts is the hypothalamic
pituitary-adrenal (HPA) axis, the overstimulation of which leads to increased levels of blood
cortisol (Rosmond & Björntorp, 2000). Prolonged and/or repeatedly high blood cortisol levels
released under conditions of perceived stress have been shown to increase one’s risk of
atherosclerosis (Dekker et al., 2008) and coronary artery calcification over the life-course
(Hamer et al., 2012).
In this study, individuals who distrusted others had, in comparison to the trusting group, a 13
per cent elevated risk of death caused by cardiovascular disease (Table 3, Model 7 & 8).
However, from the analyses performed, it is not possible to distinguish if individual trust is an
interpretation of environmental trustworthiness (hinting at the social capital debate) or
whether it captures pathological distrust, an element of the personality trait known as cynical
hostility (Kawachi, 2018). Cynically hostile individuals are also reported to have an increased
cardiovascular mortality risk, with possible pathways from distrust to cardiovascular mortality
including socio-economic status and the same HPA-axis mechanisms previously described
(Everson et al., 1997).
Strengths & Limitations
This is the first study to independently investigate the effects of both generalised trust and
happiness on mortality outcomes, using rich US survey data that span over more than three
decades. The GSS data were prospectively linked to mortality registries from the NDI, which
provided objective and validated specific cause-of-death categories. While these pooled GSS
data are nationally representative, this study design relied on single cross-sectional
observations, which do not capture change over time. Though a study based on UK panel data
showed how individuals’ generalised trust could change (Giordano et al., 2012), individuals’
generalised trust tends to re-adapt to a certain ‘set point’ in the longer term (Dawson, 2017).
Considerable stability is also attributed to levels of happiness (Lucas & Donnellan, 2007).
While our study corroborates hypotheses linking generalised trust to longevity, our analysis
has consciously ignored important parts of the wider debate on social capital and health. We
refrained from analysing additional social capital measures for three important reasons. The
first is a simple methodological one: not enough rounds of the GSS contained the desired
measures to obtain statistically powerful samples. Secondly, while generalised trust is ‘sticky’
in adulthood (Uslaner, 2002), other important social capital proxies (e.g. membership in
(voluntary) associations and community ties are not. Our data lacked the possibility to track
individuals’ membership, networks and community social capital longitudinally, making
inferences from any estimates untrustworthy. Thirdly, using Canadian survey data, Carpiano
and Fitterer (2014) have showed that generalised trust could be conceptually different from
other social capital measures.
While survey research generally favours multiple-item scales over single-item measures, our
measures of trust and happiness belonged to the latter group. Regarding happiness, the GSS
simply lacks additional measures. As for trust, previous research highlighted that the single
item trust measure outperforms the GSS three-item trust scale in terms of reliability and
validity (Uslaner, 2015). Moreover, the standard single-item trust measure has, for a long
time, featured in a range of international survey studies. Opting for the single-item trust
measure thus increases the possibility of replication in future studies in other contexts.
We investigated cause-specific mortality in an attempt to substantiate that psychosocial pathways constitute one plausible biological mechanism linking generalised trust to health. Unfortunately, it is not possible to track health behaviour in the GSS-NDI data after 1994, as the questions on smoking and drinking were discontinued. We therefore could not adjust the associations between trust and CVD mortality for risky health behaviours. We deliberately focused on deaths caused by either CVD or neoplasia for two reasons: firstly, psychosocial pathways are purported to play a greater role in CVD-related deaths; secondly, these are the two most frequent causes of death in these data.
Future studies could investigate other associations between trust and cause-specific deaths, e.g. the famously theorised association between (a lack of) generalised trust and suicide (Durkheim, 2005). Unfortunately, the GSS-NDI data drawn for the purposes of this study simply lack the statistical power to consider further categories of cause-specific mortality. Finally, all analyses were conducted at the individual level, which makes it impossible to ascertain whether the presented relationships with mortality are due to trust being an individual or a contextual resource (Giordano et al., 2019).
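As an aside for readers attempting this kind of cause-specific analysis, the standard approach is to fit a separate survival model per cause, treating deaths from all other causes as censored. A minimal sketch with hypothetical column names, not the GSS-NDI layout:

import pandas as pd

# Hypothetical linked mortality records.
df = pd.DataFrame({
    "dead":  [1, 1, 0, 1],
    "cause": ["CVD", "neoplasm", None, "other"],
})

# Cause-specific event indicators: a death from any other cause counts
# as censored (event = 0) in that cause's survival model.
df["cvd_event"] = ((df["dead"] == 1) & (df["cause"] == "CVD")).astype(int)
df["neoplasm_event"] = ((df["dead"] == 1) & (df["cause"] == "neoplasm")).astype(int)
print(df)

This is also why statistical power drops quickly as further cause categories are added: each model only counts deaths from its own cause as events.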
Small associations between the amount of daily digital technology usage & adolescents’ well‐being: Unlikely to be of clinical or practical significance
Annual Research Review: Adolescent mental health in the digital age: facts, fears, and future directions. Candice L. Odgers, Michaeline R. Jensen. Journal of Child Psychology and Psychiatry, January 17 2020. https://doi.org/10.1111/jcpp.13190
Abstract: Adolescents are spending an increasing amount of their time online and connected to each other via digital technologies. Mobile device ownership and social media usage have reached unprecedented levels, and concerns have been raised that this constant connectivity is harming adolescents’ mental health. This review synthesized data from three sources: (a) narrative reviews and meta‐analyses conducted between 2014 and 2019, (b) large‐scale preregistered cohort studies and (c) intensive longitudinal and ecological momentary assessment studies, to summarize what is known about linkages between digital technology usage and adolescent mental health, with a specific focus on depression and anxiety. The review highlights that most research to date has been correlational, focused on adults versus adolescents, and has generated a mix of often conflicting small positive, negative and null associations. The most recent and rigorous large‐scale preregistered studies report small associations between the amount of daily digital technology usage and adolescents’ well‐being that do not offer a way of distinguishing cause from effect and, as estimated, are unlikely to be of clinical or practical significance. Implications for improving future research and for supporting adolescents’ mental health in the digital age are discussed.
Key Points
. Adolescents are early and enthusiastic adopters of digital technologies and are increasingly spending their time connecting to the online world and to each other through their devices. This constant connectivity has led to concerns that time spent online may be negatively impacting adolescents’ mental health and wellbeing.
. We synthesized recent findings across meta-analytic studies and narrative reviews, large-scale and preregistered cohort studies, and intensive assessment studies tracking digital technology use and mental health across time.
. Most research to date has been correlational and cross-sectional, mixed in terms of directionality, and has yielded small associations that leave no way of separating cause from effect.
. We recommend that future research use experimental and quasi-experimental methods and focus on online experiences versus screen time as well as heterogeneity in effects across diverse populations of youth. Knowledge generated from this research should allow researchers and practitioners to leverage online tools to reduce offline disparities and support adolescents’ mental health as they come of age in an increasingly digital and connected world.
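To see why such "small associations" are described as unlikely to be of clinical or practical significance, a quick back-of-the-envelope calculation helps. The r value and scale below are illustrative, not figures from the review.

# Illustrative arithmetic only; r is hypothetical, not from the review.
r = 0.05                    # a "small" screen-time/well-being correlation
print(f"variance explained: {r**2:.2%}")      # 0.25%

# On a well-being scale with a standard deviation of 2 points, a one-SD
# increase in usage predicts a shift of r * SD_outcome:
sd_wellbeing = 2.0
print(f"predicted well-being shift: {r * sd_wellbeing:.2f} points")

A shift of a tenth of a point on a multi-point well-being scale, even if it were causal, would be hard to distinguish from measurement noise at the individual level.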