Is the Internet killing the movie industry? Are we seeing economic progress while cultural expression decays? Actually, the data suggest we are doing well:
Growth of US-origin feature films, 2000-2016, as shown by queries to the IMDb (Fig 3.1 in [1]).
See also Record Number of Films Produced [2].
Data from the UNESCO database show what the production of quality films looked like in previous years (I take documentaries as a proxy for quality films) [3]. Number of documentary feature films in the UK over the same period in which the Internet explodes and cheap cameras make their appearance:
2005 2
2007 9
2008 62
2009 58
2010 55
2011 56
2012 73
2013 44
2014 85
2015 79
For comparison, these are the fiction feature films in the UK, according to the same database:
2005 162
2007 112
2008 215
2009 248
2010 285
2011 239
2012 249
2013 197
2014 253
2015 213
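As a quick sanity check on these two tables, the documentary share of UK feature output can be computed directly from the quoted UNESCO figures (a minimal Python sketch; the variable names are mine, not UNESCO's):

```python
# UK feature-film counts from the UNESCO database, as quoted above.
docs = {2005: 2, 2007: 9, 2008: 62, 2009: 58, 2010: 55,
        2011: 56, 2012: 73, 2013: 44, 2014: 85, 2015: 79}
fiction = {2005: 162, 2007: 112, 2008: 215, 2009: 248, 2010: 285,
           2011: 239, 2012: 249, 2013: 197, 2014: 253, 2015: 213}

# Documentary share of (documentary + fiction) features, per year.
share = {y: docs[y] / (docs[y] + fiction[y]) for y in docs}
for y in sorted(share):
    print(f"{y}: {share[y]:.1%}")
```

The share rises from about 1% of features in 2005 to over a quarter by 2015, which is the point of the comparison: documentaries grew not just in absolute count but relative to fiction output.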
Is greater production worse? Or better? One way to measure the quality of all movies, regardless of genre, is Rotten Tomatoes [4]. See the count of movies scoring 84 or higher, 1998-2016 (Fig 3.4 in [1]).
References
[1] Joel Waldfogel, Digital Renaissance. Princeton: Princeton University Press, 2018.
[2] UNESCO, March 31, 2016, http://uis.unesco.org/en/news/record-number-films-produced
[3] UNESCO data, accessed Jan 2018: data.uis.unesco.org
[4] www.rottentomatoes.com/top
Sunday, January 27, 2019
The Rise of Pseudomedicine for Dementia and Brain Health
The Rise of Pseudomedicine for Dementia and Brain Health. Joanna Hellmuth, Gil D. Rabinovici, Bruce L. Miller. JAMA, January 25 2019, doi:10.1001/jama.2018.21560
The US population is aging, and with it is an increasing prevalence of Alzheimer disease, which lacks effective approaches for prevention or a cure.1 Many individuals are concerned about developing cognitive changes and dementia. With increasing amounts of readily accessible information, people independently seek and find material about brain health interventions, although not all sources contain quality medical information.
This landscape of limited treatments for dementia, concern about Alzheimer disease, and wide access to information have brought a troubling increase in “pseudomedicine.” Pseudomedicine refers to supplements and medical interventions that exist within the law and are often promoted as scientifically supported treatments, but lack credible efficacy data. Practitioners of pseudomedicine often appeal to health concerns, promote individual testimony as established fact, advocate for unproven therapies, and achieve financial gains.
With neurodegenerative disease, the most common example of pseudomedicine is the promotion of dietary supplements to improve cognition and brain health. This $3.2-billion industry promoting brain health benefits from high-penetration consumer advertising through print media, radio, television, and the internet.2 No known dietary supplement prevents cognitive decline or dementia, yet supplements advertised as such are widely available and appear to gain legitimacy when sold by major US retailers. Consumers are often unaware that dietary supplements do not undergo US Food and Drug Administration (FDA) testing for safety or review for efficacy. Indeed, supplements may cause harm, as has been shown with vitamin E, which may increase risk of hemorrhagic stroke, and, in high doses, increase risk of death.3,4 The Alzheimer’s Association highlights these concerns, noting that many of these supplements are promoted by testimony rather than science.5 These brain health supplements can also be costly, and discussion of them in clinical settings can subvert valuable time needed for clinicians and patients to review other interventions.
Patients and caregivers encounter sophisticated techniques that supply false “scientific” backing for brain health interventions. For example, referring to scientific integrity, Feynman coined the term “cargo cult science” to describe endeavors that follow “…the apparent precepts and forms of scientific investigation, but they’re missing something essential….”6 Cargo cult science is apparent in material promoting some brain health supplements; “evidence” is presented in a scientific-appearing format that lacks actual substance and rigor. Feynman suggested 1 feature of scientific integrity is “bending over backwards to show how [the study] may be wrong…,” which is a feature that is often lacking when interventions are promoted for financial gain.6
A similarly concerning category of pseudomedicine involves interventions promoted by licensed medical professionals that target unsubstantiated etiologies of neurodegenerative disease (eg, metal toxicity; mold exposure; infectious causes, such as Lyme disease). Some of these practitioners may stand to gain financially by promoting interventions that are not covered by insurance, such as intravenous nutrition, personalized detoxification, chelation therapy, antibiotics, or stem cell therapy. These interventions lack a known mechanism for treating dementia and are costly, unregulated, and potentially harmful.
Recently, detailed protocols to reverse cognitive changes have been promoted, but these protocols merely repackage known dementia interventions (eg, cognitive training, exercise, a heart-healthy diet) and add supplements and other lifestyle changes. Such protocols are promoted by medical professionals with legitimate credentials, offer a unique holistic and personal approach, and are said to be based on rigorous data published in reputable journals. However, when examining the primary data, the troubling and familiar patterns of testimony and cargo cult science emerge. The primary scientific articles superficially appear valid, yet lack essential features, such as sufficient participant characterization, uniform interventions, or treatment randomization with control or placebo groups, and may fail to include sufficient study limitations. Some of these poor-quality studies may be published in predatory open access journals.7
An argument can be made that even though pseudomedicine may be ethically questionable, these interventions are relatively benign and offer hope for patients facing an incurable disease. However, these interventions are not ethically, medically, or financially benign for patients or their families. While appealing to a sense of hope can be a motivating factor for clinical trials or complementary or alternative practices, the difference is in how these circumstances are framed. Complementary or alternative practices are often adjunct treatments and might not result in direct financial gain by the practitioner recommending the therapy. Further, in clinical trials, there are structured conversations between researchers and participants (such as during the informed consent process) that include research coordinators explaining that any studied interventions are experimental, may result in no gain, and can cause harm. In contrast, pseudomedicine may involve unethical gain for practitioners and manufactured illusion of benefit for patients.
What Can Be Done?
Health care professionals have the responsibility to learn about common pseudomedicine interventions. If a patient or family member inquires about such an intervention, clinicians can take several steps:
Understand that motivations to pursue such interventions often come from a desire to obtain the best medical care, and convey that understanding to the patients.
Provide honest scientific interpretation of any supporting evidence, along with the associated risks and costs. This approach creates a productive dialogue, rather than dismissing any inquiries outright.
Appropriately label pseudomedicine interventions as such.
Differentiate testimony from data, and assess whether studies display scientific integrity by “bending over backward” to address any limitations.
Suggest an exploration of the financial interests behind the intervention (eg, the sale of supplements, out-of-pocket payments to a clinician or organization, book sales). Note that the gain may not only be financial, but also temporary fame that can accompany spearheading a new protocol.6
Provide education on the US Dietary Supplement Health and Education Act that limits FDA testing and regulation of supplements.
Point out that any effective interventions for common diseases would already be widely used.
Express a willingness to continue to partner with patients in their medical care even if opinions and interpretations about pseudomedicine differ.
Conclusions: It is disheartening that patients with dementia and their family members are targeted by practitioners and companies motivated by self-interest. Physicians have an ethical mandate to protect patients who may be vulnerable to promotion by these entities. More needs to be done on a national level to limit the claims of benefit for interventions that lack proven efficacy. Clinicians must distinguish testimony and cargo cult science from quality medical research and explain when interventions may appear to represent pseudomedicine. While unethical forces promote the existence of pseudomedicine, an educated community of physicians and patients is the starting point to counteract these practices.
Saturday, January 26, 2019
Theories of God: Explanatory coherence in religious cognition
Theories of God: Explanatory coherence in religious cognition. Andrew Shtulman, Max Rattner. PLOS, December 26, 2018. https://doi.org/10.1371/journal.pone.0209758
Abstract: Representations of God in art, literature, and discourse range from the highly anthropomorphic to the highly abstract. The present study explored whether people who endorse anthropomorphic God concepts hold different religious beliefs and engage in different religious practices than those who endorse abstract concepts. Adults of various religious affiliations (n = 275) completed a questionnaire that probed their beliefs about God, angels, Satan, Heaven, Hell, cosmogenesis, anthropogenesis, human suffering, and human misdeeds, as well as their experiences regarding prayer, worship, and religious development. Responses to the questionnaire were analyzed by how strongly participants anthropomorphized God in a property-attribution task. Overall, the more participants anthropomorphized God, the more concretely they interpreted religious ideas, importing their understanding of human affairs into their understanding of divine affairs. These findings suggest not only that individuals vary greatly in how they interpret the same religious ideas but also that those interpretations cohere along a concrete-to-abstract dimension, anchored on the concrete side by our everyday notions of people.
Testosterone doesn't impair men's cognitive empathy: Evidence from two large-scale randomized controlled trials
Does testosterone impair men's cognitive empathy? Evidence from two large-scale randomized controlled trials. Amos Nadler, David Zava, Triana Ortiz, Neil Watson, Justin Carre, Colin Camerer, Gideon Nave. bioRxiv 516344, https://doi.org/10.1101/516344
Abstract: The capacity to infer the mental states of others (known as cognitive empathy) is essential for social interactions, and a well-known theory proposes that it is negatively affected by intrauterine testosterone exposure. Furthermore, previous studies reported that testosterone administration impaired cognitive empathy in healthy adults, and that a biomarker of prenatal testosterone exposure (finger digit ratios) moderated the effect. However, empirical support for the relationship has relied on small-sample studies with mixed evidence. We investigate the reliability and generalizability of the relationship in two large-scale double-blind placebo-controlled experiments in young men (N=243 and N=400), using two different testosterone administration protocols. We find no evidence that cognitive empathy is impaired by testosterone administration or associated with digit ratios. With an unprecedented combined sample size, these results counter current theories and previous high-profile reports, and demonstrate that previous investigations of this topic have been statistically underpowered.
Penis size over-reporting is more prevalent among experienced college men than in less experienced ones; may be due to a greater sense of masculinity & sexual competence and prowess by experienced men
Social Desirability and Young Men’s Self-Reports of Penis Size. Bruce M. King, Lauren M. Duncan, Kelley M. Clinkenbeard, Morgan B. Rutland & Kelly M. Ryan. Journal of Sex & Marital Therapy, https://doi.org/10.1080/0092623X.2018.1533905
Abstract: Previous studies demonstrate that many men have insecurities about the size of their penises, often resulting in low sexual self-esteem and sexual problems. In the present study, mean self-reported erect penis length by 130 sexually experienced college men (6.62 inches) was greater than found in previous studies in which researchers took measurements. This suggests that many of the men embellished their responses. Only 26.9% of the sexually experienced men self-reported penis lengths of less than 6 inches, while 30.8% self-reported lengths of 7 inches or more (with 10% self-reporting 8 inches or more). The correlation with Marlowe–Crowne social desirability scores was +.257 (p < .01), indicating that men with a high level of social desirability were more likely than others to self-report having a large penis.
---
The difference between sexually experienced and inexperienced men for self-reported penis size has not previously been reported but suggests that over-reporting is more prevalent among experienced men, thus accounting for the stronger correlation in that group. We speculate that the difference between the two groups is due to a greater sense of masculinity and sexual competence and prowess by experienced men.
Women high on narcissism were more likely to engage in blatant lying, machiavellianism predicted blatant lying & lying to avoid confrontation; primary & secondary psychopathy predicted self-serving deception
Dark triad traits and women's use of sexual deception. Gayle Brewer, Demi De Griffa, Ezgi Uzun. Personality and Individual Differences, Volume 142, 1 May 2019, Pages 42-44. https://doi.org/10.1016/j.paid.2019.01.033
Highlights
• Dark Triad traits independently predict women's use of sexual deception.
• Women high on narcissism were more likely to engage in blatant lying.
• Machiavellianism predicted blatant lying and lying to avoid confrontation.
• Primary and secondary psychopathy each predicted the use of self-serving deception.
Abstract: Dark Triad traits are related but distinct personality traits (narcissism, Machiavellianism, psychopathy) characterised by emotional coldness, exploitation, and a lack of empathy. Previous research investigates Dark Triad traits in relation to sexual behaviour and the use of deception in mating contexts. Few studies have, however, considered the types of sexual deception performed by those high on Dark Triad traits. In the present study, heterosexual women (N = 217) aged 18–59 years completed a series of online questionnaires assessing Dark Triad traits (narcissism, Machiavellianism, primary and secondary psychopathy) and sexual deception (blatant lying, self-serving deception, and avoiding confrontation). Multiple regression analyses indicate that Dark Triad traits are associated with each form of sexual deception investigated. Women high on narcissism were more likely to engage in blatant lying whilst Machiavellianism was associated with greater use of blatant lying and lying to avoid confrontation. Primary and secondary psychopathies were each associated with self-serving deception. Future studies should consider Dark Triad traits in relation to the effectiveness of each form of deception employed.
Unlearning implicit racial and gender biases during sleep: A failure to replicate Hu et al. (2015)
Unlearning implicit social biases during sleep: A failure to replicate. Graelyn B. Humiston, Erin J. Wamsley. PLOS January 25, 2019. https://doi.org/10.1371/journal.pone.0211416
Abstract: A 2015 article in Science (Hu et al.) proposed a new way to reduce implicit racial and gender biases during sleep. The method built on an existing counter-stereotype training procedure, using targeted memory reactivation to strengthen counter-stereotype memory by playing cues associated with the training during a 90min nap. If effective, this procedure would have potential real-world usefulness in reducing implicit biases and their myriad effects. We replicated this procedure on a sample of n = 31 college students. Contrary to the results reported by Hu et al., we found no effect of cueing on implicit bias, either immediately following the nap or one week later. In fact, bias was non-significantly greater for cued than for uncued stimuli. Our failure to detect an effect of cueing on implicit bias could indicate either that the original report was a false positive, or that the current study is a false negative. However, several factors argue against Type II error in the current study. Critically, this replication was powered at 0.9 for detecting the originally reported cueing effect. Additionally, the 95% confidence interval for the cueing effect in the present study did not overlap with that of the originally reported effect; therefore, our observations are not easily explained as a noisy estimate of the same underlying effect. Ultimately, the outcome of this replication study reduces our confidence that cueing during sleep can reduce implicit bias.
Friday, January 25, 2019
Delivery of carbon, nitrogen, and sulfur to the silicate Earth by a giant impact
Delivery of carbon, nitrogen, and sulfur to the silicate Earth by a giant impact. Damanveer S. Grewal, Rajdeep Dasgupta, Chenguang Sun, Kyusei Tsuno and Gelu Costin. Science Advances Jan 23 2019:Vol. 5, no. 1, eaau3669, DOI: 10.1126/sciadv.aau3669
Abstract: Earth’s status as the only life-sustaining planet is a result of the timing and delivery mechanism of carbon (C), nitrogen (N), sulfur (S), and hydrogen (H). On the basis of their isotopic signatures, terrestrial volatiles are thought to have derived from carbonaceous chondrites, while the isotopic compositions of nonvolatile major and trace elements suggest that enstatite chondrite–like materials are the primary building blocks of Earth. However, the C/N ratio of the bulk silicate Earth (BSE) is superchondritic, which rules out volatile delivery by a chondritic late veneer. In addition, if delivered during the main phase of Earth’s accretion, then, owing to the greater siderophile (metal loving) nature of C relative to N, core formation should have left behind a subchondritic C/N ratio in the BSE. Here, we present high pressure-temperature experiments to constrain the fate of mixed C-N-S volatiles during core-mantle segregation in the planetary embryo magma oceans and show that C becomes much less siderophile in N-bearing and S-rich alloys, while the siderophile character of N remains largely unaffected in the presence of S. Using the new data and inverse Monte Carlo simulations, we show that the impact of a Mars-sized planet, having minimal contributions from carbonaceous chondrite-like material and coinciding with the Moon-forming event, can be the source of major volatiles in the BSE.
“Forward flow”: A new measure to quantify free thought and predict creativity
Gray, K., Anderson, S., Chen, E. E., Kelly, J. M., Christian, M. S., Patrick, J., . . . Lewis, K. (2019). “Forward flow”: A new measure to quantify free thought and predict creativity. American Psychologist, http://dx.doi.org/10.1037/amp0000391
Abstract: When the human mind is free to roam, its subjective experience is characterized by a continuously evolving stream of thought. Although there is a technique that captures people’s streams of free thought—free association—its utility for scientific research is undermined by two open questions: (a) How can streams of thought be quantified? (b) Do such streams predict psychological phenomena? We resolve the first issue—quantification—by presenting a new metric, “forward flow,” that uses latent semantic analysis to capture the semantic evolution of thoughts over time (i.e., how much present thoughts diverge from past thoughts). We resolve the second issue—prediction—by examining whether forward flow predicts creativity in the lab and the real world. Our studies reveal that forward flow predicts creativity in college students (Study 1) and a representative sample of Americans (Study 2), even when controlling for intelligence. Studies also reveal that membership in real-world creative groups—performance majors (Study 3), professional actors (Study 4) and entrepreneurs (Study 5)—is predicted by forward flow, even when controlling for performance on divergent thinking tasks. Study 6 reveals that forward flow in celebrities’ social media posts (i.e., on Twitter) predicts their creative achievement. In addition to creativity, forward flow may also help predict mental illness, emotional experience, leadership ability, adaptability, neural dynamics, group productivity, and cultural success. We present open-access online tools for assessing and visualizing forward flow for both illustrative and large-scale data analytic purposes.
---
Forward Flow and Creativity
Both anecdotal and scientific evidence suggest a potential link between forward flow and creativity—in particular, divergent thinking.3 Anecdotally, creative people are often good at generating works that leave behind the past: Jackson Pollock moved beyond easels and brushes and Einstein moved beyond Newtonian mechanics. This “forward movement” appears in many artistic and scientific creative works as visionaries eschew tradition for originality. Indeed, Walt Disney’s recipe for creativity makes this connection explicit: “Around here… we don't look backwards for very long. We keep moving forward, opening up new doors and doing new things.” As thinking is for doing (Fiske, 1992), those who complete more “forward-moving” creative actions might also have more “forward-moving” thoughts.
Recent studies support the idea that creativity is related to individual differences in memory organization (Hass, 2017; Heinen & Johnson, 2017)—differences that allow more creative participants to connect ideas that are further apart in memory (Kenett, Anaki, & Faust, 2014). Most relevant to the current work, associative dynamics appear related to both creative performance and default network activity (Marron et al., 2018). However, one limitation of these studies is that they all assess task-bound thoughts, measuring cognitive processes while participants are engaged in creative tasks or are explicitly instructed to be creative. Although many real-world situations entail conscious efforts to be creative, here we examine whether features of unconstrained thought—free association without instructions to be creative—might also predict creativity across diverse settings and populations. Can forward flow predict divergent thinking and perhaps even the choices and outcomes related to people’s careers?
Importantly, we do not claim that forward flow is the “best” predictor or measure of creativity. Creativity is predicted by expertise (Baer & Kaufman, 2005), personality (Kaufman et al., 2016), attentional control (Beaty, Benedek, Silvia, & Schacter, 2016), one’s social network (Perry-Smith & Shalley, 2003), and the broader cultural milieu (Rudowicz, 2003). There are also many measures of creativity which predict creative performance. Nevertheless, revealing the predictive power of forward flow in creativity would both contribute to this important body of work and point to the general methodological utility of assessing streams of free thought.
Linear or Non-Linear? Forward Flow and Mental Illness
One important remaining question is whether the relationship between forward flow and creativity is linear. Some psychological disorders are characterized by disorganized thoughts (e.g., schizophrenia; Moritz, Woodward, Küppers, Lausen, & Schickel, 2003); and while such disorganization likely yields high forward flow, it may not yield high creativity—especially insofar as creativity requires usefulness in addition to originality (Runco & Jaeger, 2012). While there are robust correlations between mental illness and creativity (Kyaga et al., 2013), even our most creative samples—acting students and creative professionals—had a mean forward flow of ~.83, which is some distance from the theoretical maximum of 1.00. It may be that a completely disorganized stream of consciousness represented by extreme forward flow is tied to detriments in creative performance. If true, this suggests forward flow—at its upper bounds—might also be a useful indicator of some kinds of mental illness, consistent with recent work on atypicality in semantic memory structure (Faust & Kenett, 2014). Moreover, although we have suggested high forward flow helps creativity, it may hinder task performance when focus is required—e.g., for air traffic controllers or pilots (Kanfer & Ackerman, 1989).
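The forward-flow metric described above can be sketched numerically. This is an illustrative reconstruction, not the authors' published code: it assumes each "thought" is represented by a semantic vector (the paper uses latent semantic analysis; the toy 2-D vectors below stand in for real LSA embeddings) and scores each thought by its average distance from all preceding thoughts:

```python
import numpy as np

def forward_flow(vectors):
    """Mean, over each thought after the first, of its average semantic
    distance (1 - cosine similarity) from all preceding thoughts."""
    V = np.asarray(vectors, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize
    dists = 1.0 - V @ V.T                             # pairwise cosine distances
    per_thought = [dists[i, :i].mean() for i in range(1, len(V))]
    return float(np.mean(per_thought))

# Toy 2-D "semantic" vectors: a drifting chain of thoughts scores higher
# than a repetitive loop.
drift = [[1, 0], [0.8, 0.6], [0.2, 0.98], [-0.5, 0.87]]
loop = [[1, 0], [0.99, 0.14], [1, 0], [0.99, 0.14]]
print(forward_flow(drift) > forward_flow(loop))  # True
```

On this definition the theoretical maximum of 1.00 mentioned above corresponds to every thought being orthogonal to all of its predecessors.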
Thursday, January 24, 2019
We consider public debt from a long-term historical perspective, showing how the purposes for which governments borrow have evolved over time
Public Debt Through the Ages. Barry J. Eichengreen,Asmaa A ElGanainy,Rui Pedro Esteves,Kris James Mitchener. IMF Working Paper No. 19/6, https://www.imf.org/en/Publications/WP/Issues/2019/01/15/Public-Debt-Through-the-Ages-46503
Summary: We consider public debt from a long-term historical perspective, showing how the purposes for which governments borrow have evolved over time. Periods when debt-to-GDP ratios rose explosively as a result of wars, depressions and financial crises also have a long history. Many of these episodes resulted in debt-management problems resolved through debasements and restructurings. Less widely appreciated are successful debt consolidation episodes, instances in which governments inheriting heavy debts ran primary surpluses for long periods in order to reduce those burdens to sustainable levels. We analyze the economic and political circumstances that made these successful debt consolidation episodes possible.
Tradeoff between robustness and efficiency across species and brain regions can contribute to differential cognitive functions between species & to fragility underlying human psychopathologies
A Tradeoff in the Neural Code across Regions and Species. Raviv Pryluk et al. Cell, Volume 176, Issue 3, p597-609.e18, January 24, 2019. https://doi.org/10.1016/j.cell.2018.12.032
Highlights
• Human neurons utilize information capacity (efficiency) better than macaque neurons
• Cingulate cortex neurons are more efficient than amygdala neurons in both species
• Amygdala and monkey neurons show more synchrony and vocabulary overlap (robustness)
• There is a tradeoff between robustness and efficiency across species and regions
Summary: Many evolutionary years separate humans and macaques, and although the amygdala and cingulate cortex evolved to enable emotion and cognition in both, an evident functional gap exists. Although they were traditionally attributed to differential neuroanatomy, functional differences might also arise from coding mechanisms. Here we find that human neurons better utilize information capacity (efficient coding) than macaque neurons in both regions, and that cingulate neurons are more efficient than amygdala neurons in both species. In contrast, we find more overlap in the neural vocabulary and more synchronized activity (robustness coding) in monkeys in both regions and in the amygdala of both species. Our findings demonstrate a tradeoff between robustness and efficiency across species and regions. We suggest that this tradeoff can contribute to differential cognitive functions between species and underlie the complementary roles of the amygdala and the cingulate cortex. In turn, it can contribute to fragility underlying human psychopathologies.
Simplified: Pioneering brain study reveals ‘software’ differences between humans and monkeys. Alison Abbott. Nature 565, 410-411 (2019), https://www.nature.com/articles/d41586-019-00198-7
Neuroscientists tracked the activity of single neurons deep in the brain and suggest the findings could explain humans’ intelligence — and susceptibility to psychiatric disorders.
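The highlights speak of neurons "utilizing information capacity". The paper's exact measure is not given in this excerpt, but a common entropy-based definition of coding efficiency can be sketched as the Shannon entropy of the observed distribution of binary spike "words" divided by the maximum entropy for words of that length; everything below is an illustrative assumption:

```python
import numpy as np
from collections import Counter

def coding_efficiency(spike_words):
    """Shannon entropy of the observed word distribution, divided by the
    maximum possible entropy for binary words of that length (= word length
    in bits)."""
    counts = np.array(list(Counter(spike_words).values()), dtype=float)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))     # observed entropy in bits
    max_bits = len(spike_words[0])  # log2 of the full binary vocabulary
    return h / max_bits

# Toy binary spike words: a varied vocabulary uses capacity more
# efficiently than a repetitive one.
varied = ["00", "01", "10", "11"] * 5
repetitive = ["00"] * 18 + ["01", "10"]
print(coding_efficiency(varied) > coding_efficiency(repetitive))  # True
```

On this toy definition, the repetitive code's overlap in vocabulary is the flip side of its inefficiency, which is the robustness-efficiency tradeoff the highlights describe.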
Recent reports are conflicted about whether adult neurogenesis occurs in humans; discrepancies could arise from species differences in neurodevelopmental timing and differences in subject ages
Recalibrating the Relevance of Adult Neurogenesis. Jason S.Snyder. Trends in Neurosciences, https://doi.org/10.1016/j.tins.2018.12.001
Highlights
- Animal work has revealed that immature neurons born in the adult dentate gyrus have key cellular and behavioral functions.
- Recent reports are conflicted about whether adult neurogenesis occurs in humans.
- Discrepancies could arise from species differences in neurodevelopmental timing and differences in subject ages.
- Regardless of its extent postnatally, an extended period of neurogenesis may produce a heterogeneous population of dentate gyrus neurons, due to prolonged cellular maturation and differences in the stage of the lifespan when neurons are born.
- These developments warrant a recalibration of when and how dentate gyrus neurogenesis contributes to cognition and mental health in humans.
Promoting subjective well-being is not only desirable per se, but it is conducive to higher productivity and improved countries’ economic performances
Happiness Matters: Productivity Gains from Subjective Well-Being. Charles Henri DiMaria, Chiara Peroni, Francesco Sarracino. Journal of Happiness Studies, https://link.springer.com/article/10.1007/s10902-019-00074-1
Abstract: This article studies the link between subjective well-being and productivity at the aggregate level, using a matched dataset from surveys and official statistics. Well-being and productivity are measured, respectively, by life satisfaction and total factor productivity. The analysis, which applies non-parametric frontier techniques in a production framework, finds that life satisfaction generates significant productivity gains in a sample of 20 European countries. These results confirm the evidence of a positive association between the variables of interest found at the individual and firm level, and support the view that promoting subjective well-being is not only desirable per se, but it is conducive to higher productivity and improved countries’ economic performances.
Keywords: Productivity; Subjective well-being; Total factor productivity; Efficiency; Life satisfaction; Economic growth; DEA; Combined data
The Bitter Pill: Cessation of Oral Contraceptives Enhances the Appeal of Alternative Mates
The Bitter Pill: Cessation of Oral Contraceptives Enhances the Appeal of Alternative Mates. Gurit E. Birnbaum, Kobi Zholtack, Moran Mizrahi, Tsachi Ein-Dor. Evolutionary Psychological Science, https://link.springer.com/article/10.1007/s40806-018-00186-6
Abstract: Hormonal contraceptives change women’s natural mate preferences, leading them to prefer nurturing but less genetically compatible men. Cessation of contraceptives reverses these preferences, decreasing women’s attraction to current partners. Two studies examined whether women who had used contraceptive pills at relationship formation and stopped doing so were more vulnerable to desire attractive alternatives, primarily around ovulation, as compared to women who had not used pills at relationship formation or had used pills then but did not stop using them. In Study 1, participants watched videos of attractive and average-looking men and described imaginary dates with them, which were coded for desire expressions. In Study 2, we measured attention adhesion to attractive and average-looking men. Results showed that women who stopped using pills and were currently in high-fertility phase were especially likely to attend to, and express desire for, attractive alternatives, suggesting that cessation of contraceptives motivates the pursuit of more suitable mates.
Keywords: Attractive alternatives; Contraceptive pills; Infidelity; Mate choice; Menstrual cycle
Allow Fracking To Avoid An Energy Crisis: Nuclear is not being modernized & tankers/pipelines will fail some day and the uproar will leave us with no spare capacity
We Need To Allow Fracking To Avoid An Energy Crisis. Stephen Glover, Daily Mail, January 24 2019. https://www.dailymail.co.uk/debate/article-6625661/STEPHEN-GLOVER-need-allow-fracking-avoid-energy-crisis.html
We have an energy crisis. And it so happens that we appear to have lots of shale gas. A whole new industry could be created if only the Luddites would see sense.
Almost everyone I know is against fracking. Not that many of them understand much about it. But this doesn’t prevent them from pursing their lips and shaking their heads while looking solemn and generally disapproving whenever the subject is raised.
What about the United States, I sometimes ask, where fracking on a massive scale has made the country much less reliant on expensive imports of oil and gas? America has vast unpopulated areas, they reply. Britain is a crowded island. They may then add that fracking — which involves extracting gas from underground rocks by injecting high-pressure chemicals — causes earthquakes and ruins the countryside.
It is because of views like these that fracking in the UK has barely got going despite there being enormous reserves of potentially recoverable shale gas, which if extracted would greatly improve our chances of keeping the lights on and the wheels of the economy turning.
Labour is against fracking. As are the Lib Dems. So is the SNP government in Scotland, which has banned all exploration north of the border. Opinion polls suggest a majority of the public is against. And of course every environmentalist you care to mention thinks fracking is the work of the Devil.
As for the Conservative Government, although supposedly pro-fracking, it proceeds cautiously, and seldom defends the practice. I can’t recall the underwhelming Greg Clark, Business Secretary and the Minister with overall responsibility, ever singing its praises.
The Government’s greatest terror is that the process might cause earthquakes. In 2011, shale gas test-drilling triggered tremors in Lancashire, and fracking was banned for a time.
So it’s no surprise that Mr Clark’s department is apparently ignoring the conclusion of two advisers in the government’s Oil and Gas Authority that the existing low limit on tremors caused by fracking be raised because the risk of harm is ‘vanishingly small’.
All in all, it is hard to find anyone in public life who will speak up for fracking, excepting Natascha Engel, the former Labour MP for North East Derbyshire, who was appointed Commissioner for Shale Gas last autumn by the Government. She recently told the Mail that fracking, if safely and sensibly pursued, could create tens of thousands of jobs, and provide Britain’s energy needs for 50 years.
[...].
It’s a roaring shame we can’t have a reasoned debate. Consider this: on Tuesday, which was unpleasantly cold, the National Grid could supply enough power only by relying on energy produced by the coal-fired power stations that environmentalists hate (13 per cent of the total) and on imported electricity from France and Holland (five per cent).
Because there was very little wind on Tuesday, less than three per cent of our energy was supplied by off-shore and on-shore turbines. That’s the trouble with wind power. When demand is high, as it is bound to be in cold weather, all the wind turbines in the world won’t help if there isn’t even a gentle breeze.
Isn’t it a bit alarming that in a country of around 65 million people, which is the fifth or sixth biggest economy in the world, the National Grid can only keep the show on the road by cranking up coal power stations and depending on imported electricity?
It wouldn’t take much to go wrong — a ruptured pipeline carrying gas from Norway or Europe, or a tanker carrying the stuff to our shores running aground — for us to discover there isn’t any spare capacity in the system, and for everything to go kaput.
Looking to the future, plenty of people believe the danger of shortages is likely to increase. Britain’s coal-fired power plants will all be shut down by the mid-2020s. Several have already been closed under EU anti-pollution rules.
Meanwhile, the country’s ageing nuclear power stations, which provide 21 per cent of current power supplies, are expected to be de-commissioned by the 2030s. But the replacement programme has been severely dented by the recent decision of two Japanese companies to pull out of building two nuclear power stations with state-of-the-art reactors in Cumbria and on Anglesey.
With the £20 billion Chinese-financed Hinkley Point now the only new nuclear reactor being built, it seems possible, if not probable, that, in 15 years’ time, nuclear power stations will supply a smaller proportion of our energy needs than they do at present.
In short, as coal-fired power stations are certain to be cashiered, and nuclear power seems likely to be curtailed, there is a looming energy gap which new off-shore turbines can’t be guaranteed to fill because the wind does not always blow.
Nor would it be sensible to make up the impending shortfall by importing more gas from Russia. It would be foolhardy to put ourselves at the mercy of a hostile regime. That is the perilous path down which Germany has recklessly gone — 20 per cent of its energy needs are supplied by Moscow — and we would be mad to follow suit.
How easy it is for virtue-signalling or ignorant politicians to condemn fracking without giving any thought to the consequences in ten or 20 years’ time, or indeed for the thousands of new jobs, often in depressed areas of northern England, which it might create. […]
If we still had vast reserves of oil and gas in the North Sea, this weakness and indecision on the part of the authorities might not matter. But we don’t. We have an energy crisis. And it so happens that we appear to have lots of shale gas.
Rolf Degen summarizing: Default manipulations, the magic bullets of the "Nudge" approach for improving decision making, work the least well when individuals care deeply about the choice
When and why defaults influence decisions: a meta-analysis of default effects. Jon M Jachimowicz et al. Behavioural Public Policy, https://doi.org/10.1017/bpp.2018.43
Abstract: When people make decisions with a pre-selected choice option – a ‘default’ – they are more likely to select that option. Because defaults are easy to implement, they constitute one of the most widely employed tools in the choice architecture toolbox. However, to decide when defaults should be used instead of other choice architecture tools, policy-makers must know how effective defaults are and when and why their effectiveness varies. To answer these questions, we conduct a literature search and meta-analysis of the 58 default studies (pooled n = 73,675) that fit our criteria. While our analysis reveals a considerable influence of defaults (d = 0.68, 95% confidence interval = 0.53–0.83), we also discover substantial variation: the majority of default studies find positive effects, but several do not find a significant effect, and two even demonstrate negative effects. To explain this variability, we draw on existing theoretical frameworks to examine the drivers of disparity in effectiveness. Our analysis reveals two factors that partially account for the variability in defaults’ effectiveness. First, we find that defaults in consumer domains are more effective and in environmental domains are less effective. Second, we find that defaults are more effective when they operate through endorsement (defaults that are seen as conveying what the choice architect thinks the decision-maker should do) or endowment (defaults that are seen as reflecting the status quo). We end with a discussion of possible directions for a future research program on defaults, including potential additional moderators, and implications for policy-makers interested in the implementation and evaluation of defaults.
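The pooled estimate above (d = 0.68, 95% CI 0.53–0.83) is the output of a random-effects meta-analysis across 58 studies. A minimal sketch of how such a pooled effect is computed with the DerSimonian–Laird estimator; the per-study effect sizes and variances below are invented for illustration, not the paper's data:

```python
import math

# Hypothetical per-study (Cohen's d, sampling variance) pairs --
# NOT the actual 58 studies from Jachimowicz et al.
studies = [(0.9, 0.04), (0.5, 0.02), (0.7, 0.05), (-0.1, 0.03), (0.8, 0.06)]

# Fixed-effect (inverse-variance) weights and pooled estimate
w = [1.0 / v for _, v in studies]
d_fixed = sum(wi * di for wi, (di, _) in zip(w, studies)) / sum(w)

# Heterogeneity: Q statistic and between-study variance tau^2
Q = sum(wi * (di - d_fixed) ** 2 for wi, (di, _) in zip(w, studies))
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)  # DerSimonian-Laird estimator

# Random-effects weights fold tau^2 into each study's variance
w_re = [1.0 / (v + tau2) for _, v in studies]
d_pooled = sum(wi * di for wi, (di, _) in zip(w_re, studies)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)
print(f"pooled d = {d_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note how the negative-effect study widens the interval: the random-effects weights shrink toward equality as tau² grows, which is why a handful of null or negative defaults studies can coexist with a large pooled d.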
Men in the short-term mating context allocated more points to bodily traits, but only when in the low budget condition—in the high budget condition, men showed more interest in facial traits
Mate-by-Numbers: Budget, Mating Context, and Sex Predict Preferences for Facial and Bodily Traits. Carin Perilloux, Jaime M. Cloud. Evolutionary Psychological Science, https://link.springer.com/article/10.1007/s40806-019-00187-z
Abstract: Unlike women, or men considering long-term mates, men pursuing short-term mating have shown a tendency to prioritize bodily information over facial information when assessing potential mates. Prior studies have documented this tendency across a variety of methods ranging from photograph ratings to forcing a choice between faces and bodies, but have yet to ask participants to prioritize individual traits in faces and bodies. The current study used a budget allocation method to do just that. We randomly assigned participants (N = 258) to a mating context (short-term or long-term) and a budget (high or low) and asked them to allocate points across 10 traits (five facial, five bodily) to design their ideal mate within their budget. As expected, men in the short-term mating context allocated more points to bodily traits, but only when in the low budget condition—in the high budget condition, men showed more interest in facial traits. Women, also as expected, and in contrast to men, showed a general trend toward favoring facial traits regardless of budget and condition. Overall, the results are consistent with the hypothesis that women’s bodies provide better information regarding immediate fertility and are thus more important for men to assess in short-term mating contexts.
Keywords: Physical attractiveness Mate preferences Face Body Traits Evolution
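The budget-allocation method the study uses can be sketched as a small simulation: spend a fixed number of points across facial and bodily traits, with preference expressed as sampling weights. The trait names and weights below are invented for illustration, not the study's materials:

```python
import random

random.seed(0)

# Hypothetical trait lists (five facial, five bodily), per the design
FACIAL = ["eyes", "lips", "skin", "nose", "facial symmetry"]
BODILY = ["waist", "hips", "legs", "torso", "body symmetry"]

def allocate(budget: int, weights: dict) -> dict:
    """Spend `budget` points one at a time, choosing each trait
    with probability proportional to its preference weight."""
    traits = list(weights)
    alloc = {t: 0 for t in traits}
    for _ in range(budget):
        chosen = random.choices(traits, weights=[weights[t] for t in traits])[0]
        alloc[chosen] += 1
    return alloc

# Illustrative low-budget short-term condition: bodily traits
# weighted more heavily, matching the paper's reported pattern.
weights = {t: 2.0 for t in BODILY} | {t: 1.0 for t in FACIAL}
alloc = allocate(budget=20, weights=weights)
bodily_pts = sum(alloc[t] for t in BODILY)
facial_pts = sum(alloc[t] for t in FACIAL)
print(bodily_pts, facial_pts)  # the two sums always total the budget of 20
```

The fixed budget is the point of the design: unlike free ratings, it forces trade-offs, so a point spent on a bodily trait is a point withheld from a facial one.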
Wednesday, January 23, 2019
Raising mice in an enriched environment (better opportunities for social interaction, voluntary physical exercise & explorative behaviour) boosts cortical plasticity & now ocular dominance plasticity
Transgenerational transmission of enhanced ocular dominance plasticity from enriched mice to their non-enriched offspring. Evgenia Kalogeraki, Rashad Yusifov and Siegrid Löwel. eNeuro January 21 2019, ENEURO.0252-18.2018; https://doi.org/10.1523/ENEURO.0252-18.2018
Abstract: In recent years, evidence has accumulated that non-Mendelian transgenerational inheritance of qualities acquired through experience is possible. In particular, it has been shown that raising rodents in a so-called enriched environment (EE) can not only modify the animals’ behaviour and increase their susceptibility to activity-dependent neuronal network changes, but also influences both behaviour and neuronal plasticity of the non-enriched offspring. Here, we tested whether such a transgenerational transmission can also be observed in the primary visual cortex (V1) using ocular dominance (OD) plasticity after monocular deprivation (MD) as a paradigm. While OD-plasticity after 7 days of MD is absent in standard-cage (SC) raised mice beyond postnatal day (P) 110, it is present lifelong in EE-raised mice. Using intrinsic signal optical imaging to visualize cortical activity, we confirm these previous observations and additionally show that OD-plasticity is not only preserved in adult EE-mice but also in their adult non-enriched offspring: mice born to enriched parents, but raised in SCs at least until P110 displayed similar OD-shifts towards the open eye after 7 days of MD as age-matched EE-raised animals. Furthermore, testing the offspring of EE-female versus EE-males with SC-mating partners revealed that only pups of EE-females, but not of EE-males, preserved OD-plasticity into adulthood, suggesting that the life experiences of the mother have a greater impact on the continued V1-plasticity of the offspring. The OD-plasticity of the non-enriched pups of EE-mothers was, however, mechanistically different from that of non-enriched pups of EE-parents or EE-mice.
Significance statement: Recently evidence is accumulating that life experiences and thus acquired qualities of parents can be transmitted across generations in a non-Mendelian fashion and have a significant impact on the fitness of offspring. Raising mice in a so-called enriched environment with enhanced opportunities for social interaction, voluntary physical exercise and explorative behaviour has been shown to boost cortical plasticity. Our results now show that the plasticity-promoting effect of enrichment on ocular dominance plasticity, a well-established plasticity paradigm in a primary sensory cortex, can also be transmitted from enriched parents to their non-enriched offspring. Thus cortical plasticity is not only influenced by an animal’s life experiences but can also be modified by the life experiences of its parents.
Fake News & Ideological (a)symmetries in Perceptions of Media Legitimacy: Partisans are motivated to believe fake news & dismiss true news that contradicts their position as fake news
Harper, Craig A., and Thom Baguley. 2019. ““you Are Fake News”: Ideological (a)symmetries in Perceptions of Media Legitimacy” PsyArXiv. January 23. doi:10.31234/osf.io/ym6t5
Abstract: The concept of ‘fake news’ has exploded into the public’s consciousness since the election of Donald Trump to the US presidency in late 2016. Since then, this phrase has witnessed a more than 350% increase in its popular usage, and was named Collins Dictionary’s word of the year for 2017. However, the concept of fake news has received surprisingly little attention within the social psychological literature. We present three well-powered studies (combined N = 2,275) using American and British samples to establish whether liberal and conservative partisans are motivated to believe fake news (Study 1; n = 722) or dismiss true news that contradicts their position as being fake (Study 2; n = 570). We found support for both of these hypotheses. Further, these effects were asymmetrically moderated by collective narcissism, need for cognition, and faith in intuition (Study 3; n = 983). Together, our findings suggest that partisans of both sides of the political spectrum engage with the ‘fake news’ label (and perceive media story legitimacy) in a way that is consistent with a motivated reasoning approach, though these motivations appear to differ between-groups. Theoretical and practical implications are discussed, particularly in relation to growing levels of political polarization and incivility in modern Western democracies.
Neanderthals had two effective methods of minimizing vitamin C loss and preventing scurvy: eating meat raw (fresh or frozen), and eating meat after it had putrefied
Neanderthals, vitamin C, and scurvy. John D.Speth. Quaternary International, https://doi.org/10.1016/j.quaint.2018.11.042
Abstract: This paper explores the role of vitamin C (ascorbic acid) in the foodways of hunter-gatherers—both ethnohistoric and Paleolithic—whose diet seasonally or over much of the year, of necessity, was comprised largely of animal foods. In order to stave off scurvy, such foragers had to obtain a minimum of about 10 mg per day of vitamin C. However, there is little to no vitamin C in muscle meat, being concentrated instead in various internal organs and brain. Even ruminant stomach contents, despite the abundance of partially digested plants, contain almost none. Moreover, many of the “meatiest” anatomical units in a carcass, such as the thigh muscles or “hams” associated with the femur, are extremely lean in most wild ungulates, making them nutritionally much less valuable to northern foragers than archaeologists commonly assume (for example, Inuit and other indigenous peoples of the arctic and subarctic commonly use the thigh meat as dog food). Vitamin C is also the most unstable vitamin, rapidly degrading or disappearing when exposed to water, air, light, heat, and pH levels above about 4.0. As a consequence, common methods of preparing meat for storage and consumption (e.g., drying, roasting, boiling) may lead to significant loss of vitamin C. There are two effective methods of minimizing such loss: (1) eating meat raw (fresh or frozen); and (2) eating the meat after it has been putrefied. Putrefaction has distinct advantages that make it a common, if not essential, way of preparing and preserving meat among northern latitude foragers and, for the same reasons, very likely also among Paleolithic foragers in the colder climes of Pleistocene Eurasia. Putrefaction “pre-digests” the meat (including the organs), making it much less costly to ingest and metabolize than raw meat; and it lowers the pH, greatly increasing the stability of vitamin C.
These observations offer insights into critical nutritional constraints that likely had to be addressed by Neanderthals and later hominins in any context where their diet was heavily meat-based for a substantial part of the year.
Many animals show evidence of culture (innovations in multiple domains whose frequencies are influenced by social learning), but only humans show strong evidence of complex, cumulative culture. Why?
Teaching and curiosity: sequential drivers of cumulative cultural evolution in the hominin lineage. Carel P. van Schaik, Gauri R. Pradhan, Claudio Tennie. Behavioral Ecology and Sociobiology, January 2019, 73:2, https://link.springer.com/article/10.1007/s00265-018-2610-7
Abstract: Many animals, and in particular great apes, show evidence of culture, in the sense of having multiple innovations in multiple domains whose frequencies are influenced by social learning. But only humans show strong evidence of complex, cumulative culture, which is the product of copying and the resulting effect of cumulative cultural evolution. The reasons for this increase in complexity have recently become the subject of extensive debate. Here, we examine these reasons, relying on both comparative and paleoarcheological data. The currently best-supported inference is that culture began to be truly cumulative (and so, outside the primate range) around 500,000 years ago. We suggest that the best explanation for its onset is the emergence of verbal teaching, which not only requires language and thus probably coevolved with the latter’s evolution but also reflects the overall increase in proactive cooperation due to extensive allomaternal care. A subsequent steep increase in cumulative culture, roughly 75 ka, may reflect the rise of active novelty seeking (curiosity), which led to a dramatic range expansion and steep increase in the diversity and complexity of material culture. A final, and continuing, period of acceleration began with the Neolithic (agricultural) revolution.
Keywords: Cumulative culture Stone tools Out of Africa Imitation Verbal instruction Teaching
Short periods of unoccupied waking rest can facilitate consolidation in a manner similar to that proposed to occur during sleep
Memory Consolidation during Waking Rest. Erin J. Wamsley. Trends in Cognitive Sciences, https://doi.org/10.1016/j.tics.2018.12.007
Abstract: Recent studies show that brief periods of rest after learning facilitate consolidation of new memories. This effect is associated with memory-related brain activity during quiet rest and suggests that in our daily lives, moments of unoccupied rest may serve an essential cognitive function.
---
In fact, a growing body of evidence suggests that short periods of unoccupied waking rest can facilitate consolidation in a manner similar to that proposed to occur during sleep [1–3,5,6] (quiet wake conditions, Figure 1). Our group and others have demonstrated that a 15-min period of eyes-closed rest following encoding enhances memory for both procedural [5] and declarative [1,2] memory tasks, compared to an equivalent period spent completing a distractor task. Other recent studies have demonstrated that post-learning rest enhances subsequent memory for spatial and temporal information [7], facilitates insight into a complex problem [3], and enhances auditory statistical learning [6]. These memory effects can be maintained for a week or more after the rest intervention [2,7]. Together, these observations suggest that even during wakefulness, memory is preferentially consolidated during offline states characterized by reduced attentional demands.
Thus, the fundamental insight yielded by these new studies of waking rest is not so much that consolidation can occur during wakefulness but that consolidation is not uniformly distributed throughout all of wakefulness. Instead, memory is preferentially facilitated during periods of unoccupied time in which attentional and cognitive demands are reduced [1,2,5]. This insight helps us to understand the necessary and sufficient conditions for consolidation to occur. Increasingly, it appears that for many forms of consolidation, sleep-specific neural mechanisms may not be strictly required. Instead, both sleep and other offline states share common neurobiological features essential for consolidation to take place.
Indeed, many of the same neurobiological mechanisms thought to underlie sleep’s effect on memory are shared in common by waking rest. First, cellular-level memory ‘reactivation’ occurs during quiescent waking rest in the hippocampus as well as in other brain regions. During this process, sequences of neuronal firing representing recent experience are reiterated offline. Blocking these reactivations impairs learning and memory [8]. In humans, a growing number of neuroimaging studies demonstrate memory-related brain activity during periods of post-training rest that predicts subsequent memory. For example, fMRI has been used to demonstrate that patterns of hippocampal activity characterizing encoding persist into post-learning rest and that this predicts subsequent memory [9]. Our own group has meanwhile reported that low-frequency electroencephalogram oscillations thought to support consolidation during sleep similarly predict memory retention across quiet waking rest [1]. And the neuromodulatory environment during quiet rest is also well suited to facilitate consolidation; in both sleep and quiet rest, acetylcholine levels are substantially reduced from active waking levels, thought to promote hippocampal-cortical communication dynamics that benefit consolidation, as opposed to new learning. Thus, converging lines of evidence suggest that, like sleep, rest benefits memory by enabling an active process of consolidation, facilitated by offline reactivation and synaptic plasticity.
Tuesday, January 22, 2019
Emotion Perception in Members of Mensa: Better at differentiating emotion, above all anger; the positive manifold extends also to social cognition, & runs counter to the concept of a cost to giftedness
Emotion Perception in Members of Norwegian Mensa. Jens Egeland. Front. Psychol., Jan 23 2019, https://doi.org/10.3389/fpsyg.2019.00027
Abstract: Are people with superior intelligence also superior in interpreting the emotions of others? Some studies find that an underlying g-factor links all mental processes, leading to an expectation of a positive answer to the question, while other studies find that there is a cost to giftedness. No previous study has tested social cognition among the highly gifted, or the Mensa society specifically. The study measures emotion recognition in 63 members of the Norwegian Mensa and 101 community controls. The Mensa group had a higher total score on the EmoBio test and was specifically better at differentiating the anger emotion, otherwise hypothesized to be mediated by subcortical processes. There was no difference in heterogeneity between the groups, contrary to the expectation of an autistic subgroup in Mensa. The study indicates that the positive manifold extends also to social cognition, and runs counter to the concept of a cost to giftedness.
Supermarket Access and Childhood Bodyweight: Supermarket openings reduce the weight of low-income children, although by little
Supermarket Access and Childhood Bodyweight: Evidence from Store Openings and Closings. Di Zeng et al. Economics & Human Biology, https://doi.org/10.1016/j.ehb.2019.01.004
Highlights
• We assess the child weight impacts of supermarket openings and closings.
• There is little overall impact with either supermarket openings or closings.
• Supermarket openings reduce the weight of low-income children.
• Supermarket closings do not have a clear impact on children.
Abstract: Retail food environment is increasingly considered in relation to obesity. This study investigates the impacts of access to supermarkets, the primary source of healthy foods in the United States, on the bodyweight of children. Empirical analysis uses individual-level panel data covering health screenings of public schoolchildren from Arkansas with annual georeferenced business lists, and utilizes the variations of supermarket openings and closings. There is little overall impact in either case. However, supermarket openings are found to reduce the BMI z-scores of low-income children by 0.090 to 0.096 standard deviations. Such impact remains in a variety of robustness exercises. Therefore, improvement in healthy food access could at least help reduce childhood obesity rates among certain population groups.
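For a rough sense of magnitude: a BMI z-score standardizes a child's BMI against an age- and sex-specific reference distribution, so a reduction of 0.090-0.096 SD can be converted back to raw BMI units. The sketch below uses hypothetical reference values for illustration only; they are not taken from the paper:

```python
def bmi_z_score(bmi, ref_mean, ref_sd):
    """Standardize a child's BMI against an age/sex reference distribution."""
    return (bmi - ref_mean) / ref_sd

def z_shift_to_bmi_units(delta_z, ref_sd):
    """Convert a change in z-score back to raw BMI units (kg/m^2)."""
    return delta_z * ref_sd

# Hypothetical reference values (not from the study): suppose the relevant
# age/sex reference has mean 17.0 and SD 2.0 kg/m^2.
print(bmi_z_score(19.0, ref_mean=17.0, ref_sd=2.0))  # 1.0
print(z_shift_to_bmi_units(0.095, ref_sd=2.0))       # 0.19 kg/m^2
```

Under these assumed reference values, the reported effect corresponds to roughly a fifth of a BMI point, which is why the headline calls the reduction small.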
Sexual arousal was associated with reduced disgust & reduced judgments of disease risk, & with enhanced willingness to have sex with all (risky & non-risky) targets; trait disgust was a predictor
The Role of Disgust in Male Sexual Decision-Making. Megan Oaten et al. Front. Psychol., Jan 22 2019, https://doi.org/10.3389/fpsyg.2018.02602
Abstract: Sexual arousal is known to increase risky behaviors, such as having unprotected sex. This may in part relate to the emotion of disgust, which normally serves a disease avoidant function, and is suppressed by sexual arousal. In this report we examine disgust's role in sexual decision-making. Male participants received two study packets that were to be completed at home across two different time-points. Participants were asked to complete one packet in a sexually aroused state and the other in a non-aroused state. Participants were asked to rate: (1) arousal, (2) disgust, (3) willingness for sex, and (4) disease risk toward a range of female targets, which varied in level of potential disease risk (sex-worker vs. non sex-worker) and attractiveness. A measure of trait disgust was also included along with other related scales. Sexual arousal was associated with reduced disgust and reduced judgments of disease risk for all targets—these latter two variables being correlated—and with enhanced willingness to have sex with all of the depicted persons. Willingness to have sex when aroused (in contrast to non-aroused) was predicted by disease risk judgments and trait disgust, suggesting both direct (state) and indirect (trait) effects of disgust on sexual decision-making.
Mistaken belief that genetic influence implies genetic essentialism, and is therefore tantamount to prejudice, is raised as possible reason why heritability is often ignored in the social sciences
Nature vs. nurture is nonsense: On the necessity of an integrated genetic, social, developmental, and personality psychology. Fiona Kate Barlow. Australian Journal of Psychology, https://doi.org/10.1111/ajpy.12240
Abstract: The field of behavioural genetics unambiguously demonstrates that heritable individual differences exist and are important in explaining human behaviour. Despite this, some psychological perspectives ignore this research. If we wish to comprehensively understand the impact of parenting, the environment, or any social factor, however, we must engage with genetics. In this article, I review research that reveals that genes affect not only our personalities, but the way that we understand and react to the social world. Studies further reveal that notable life events are in part explained by genetic variance. I detail how this could be the case through active, evocative, and passive genetic correlations, and go on to argue that all complex psychological traits are likely the result of multifaceted gene by environment interactions. A mistaken belief that genetic influence implies genetic essentialism, and is therefore tantamount to prejudice, is raised as possible reason why heritability is often ignored in the social sciences. The article concludes with practical suggestions for how we can embrace behavioural genetics as our methods struggle to match the divine complexity of human existence.
Monty Hall Dilemmas in capuchin monkeys, rhesus macaques, and humans
Monty Hall Dilemmas in capuchin monkeys, rhesus macaques, and humans. Watzek, Julia, Whitham, Will, Washburn, David A, Brosnan, Sarah. International Journal of Comparative Psychology, Volume 31, https://escholarship.org/uc/item/1jn0t21r
Abstract: The Monty Hall Dilemma (MHD) is a simple probability puzzle famous for its counterintuitive solution. Participants initially choose among three doors, one of which conceals a prize. A different door is opened and shown not to contain the prize. Participants are then asked whether they would like to stay with their original choice or switch to the other remaining door. Although switching doubles the chances of winning, people overwhelmingly choose to stay with their original choice. To assess how experience and the chance of winning affect decisions in the MHD, we used a comparative approach to test 264 college students, 24 capuchin monkeys, and 7 rhesus macaques on a nonverbal, computerized version of the game. Participants repeatedly experienced the outcome of their choices and we varied the chance of winning by changing the number of doors (three or eight). All species quickly and consistently switched doors, especially in the eight-door condition. After the computer task, we presented humans with the classic text version of the MHD to test whether they would generalize the successful switch strategy from the computer task. Instead, participants showed their characteristic tendency to stick with their pick, regardless of the number of doors. This disconnect between strategies in the classic version and a repeated nonverbal task with the same underlying probabilities may arise because they evoke different decision-making processes, such as explicit reasoning versus implicit learning.
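The switch advantage is easy to verify by simulation. The sketch below assumes the host opens all but two doors (the player's pick and one other) without revealing the prize, mirroring the three- and eight-door conditions described in the abstract; the function name and structure are mine, not the paper's:

```python
import random

def simulate_mhd(n_doors=3, n_trials=100_000, switch=True, seed=0):
    """Simulate the Monty Hall Dilemma with a configurable number of doors.

    Assumes the host opens n_doors - 2 non-prize doors, leaving only the
    player's pick and one other door. Switching then wins exactly when the
    original pick was wrong, so it succeeds with probability (n-1)/n.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        prize = rng.randrange(n_doors)
        pick = rng.randrange(n_doors)
        if switch:
            wins += (pick != prize)  # remaining door must hold the prize
        else:
            wins += (pick == prize)
    return wins / n_trials

print(simulate_mhd(3, switch=True))   # ~2/3
print(simulate_mhd(3, switch=False))  # ~1/3
print(simulate_mhd(8, switch=True))   # ~7/8
```

This also shows why the eight-door condition makes switching easier to learn: the payoff gap between switching and staying widens from 2/3 vs. 1/3 to 7/8 vs. 1/8.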
On the associations between indicators of resting arousal levels, physiological reactivity, sensation seeking, and psychopathic traits
On the associations between indicators of resting arousal levels, physiological reactivity, sensation seeking, and psychopathic traits. Nicholas Kavish et al. Personality and Individual Differences, Volume 141, 15 April 2019, Pages 218-225. https://doi.org/10.1016/j.paid.2019.01.013
Abstract: Despite consistent findings associating autonomic activity, such as resting heart rate, with antisocial behavior, the research connecting autonomic variables to related phenotypes, such as psychopathy and sensation seeking, has been mixed. The existing research in this area has been limited by underpowered samples, focused predominantly on incarcerated males, frequently dichotomized samples into “psychopaths” and controls, and failed to consider potential gender differences. The current study sought to address some of these limitations using a relatively large undergraduate sample (N = 453), four measures of autonomic activity (e.g., resting heart rate, resting skin conductance, heart rate reactivity, and skin conductance reactivity), a sensation seeking scale, and two measures of psychopathic traits. In order to thoroughly assess possible gender differences, the analyses were conducted for males and females separately. Few significant associations were found between the autonomic and psychological variables, and most became insignificant after controlling for age and race and correcting for multiple comparisons. The current study offers little support for an association between autonomic activity and sensation seeking or psychopathic traits.
Extensive comparison 22 kHz vocalizations in rats with human cry: 76% of common features; vocalizations may be an evolutionary vocal homolog of human crying, expressing anxiety, not depression
Emission of 22 kHz vocalizations in rats as an evolutionary equivalent of human crying: Relationship to depression. Stefan M. Brudzynski. Behavioural Brain Research, https://doi.org/10.1016/j.bbr.2019.01.033
Highlights
• Rat 22 kHz ultrasonic vocalizations (USVs) were compared with human crying
• Extensive comparison of 22 kHz USV with human cry showed 76% of common features
• Rat 22 kHz USVs may be treated as an evolutionary vocal homolog of human crying
• Rat 22 kHz USVs and human crying are both expressing anxiety and not depression
Abstract: There is no clear relationship between crying and depression based on human neuropsychiatric observations. This situation originates from lack of suitable animal models of human crying. In the present article, an attempt will be made to answer the question whether emission of rat aversive vocalizations (22 kHz calls) may be regarded as an evolutionary equivalent of adult human crying. Using this comparison, the symptom of crying in depressed human patients will be reanalyzed. Numerous features and characteristics of rat 22 kHz aversive vocalizations and human crying vocalizations are equivalent. Comparing evolutionary, biological, physiological, neurophysiological, social, pharmacological, and pathological aspects have shown vast majority of common features. It is concluded that emission of rat 22 kHz vocalizations may be treated as an evolutionary vocal homolog of human crying, although emission of 22 kHz calls is not exactly the same phenomenon because of significant differences in cognitive processes between these species. It is further concluded that rat 22 kHz vocalizations and human crying vocalizations are both expressing anxiety and not depression. Analysis of the relationship between anxiety and depression reported in clinical studies supports this conclusion regardless of the nature and extent of comorbidity between these pathological states.
Monday, January 21, 2019
In Canada, the gap in atheism prevalence between men and women is widening
The evolution of the gender religiosity gap among the Canadian-born. Maryam Dilmaghani. Review of Social Economy, https://doi.org/10.1080/00346764.2018.1562198
Abstract: The higher religiosity of women in the Western Christian societies is one of the best documented findings in the religious scholarship. In spite of the recent vibrancy of secular movements in North America, the higher religiosity of women appears persistent. As a result, the gender ratio is greatly skewed in the secular groups in favour of males. For instance, for every atheist female in North America, there are at least three males. Using the Canadian General Social Surveys of 1985–2014, this paper examines how the gender religiosity gap has evolved among the Canadian-born. Throughout the period, Canadian-born women are found less likely to be unaffiliated and show a greater frequency of religious attendance. The religious attendance gap is found to be closing. The unaffiliation gap, on the other hand, seems to have widened in the 21st century. Limiting the analyses to the gainfully employed respondents only reduces the religious attendance gap. For the high earners, the attendance gap effectively disappears, while a large unaffiliation gap persists into the 2010s. This pattern is best explained by the recent literature asserting that men and women are differentially socially sanctioned for the adoption of a secularized identity. The alleged sexism of the new secular movements is also noted as a potential explanation. The examination of the recent Canadian data on perceived religious and gender discrimination produces evidence congruent with both of these potential explanations.
Keywords: Gender, religiosity, secularity, Canada
Is Meat Sexy? Meat symbolizes status both evolutionarily & in modern times; men’s sexual motivation system might increase preference for meat; women, when sexually motivated, might consume less meat
Is Meat Sexy? Meat Preference as a Function of the Sexual Motivation System. Eugene Y.Chan, Natalina Zlatevska. Food Quality and Preference, https://doi.org/10.1016/j.foodqual.2019.01.008
Highlights
• Meat symbolizes status both evolutionarily and in modern times.
• Thus, men’s sexual motivation system might increase their preference for meat.
• Men’s desire for status mediates the effect.
• An internal meta-analysis shows that women, when they are sexually motivated, might lower meat consumption.
• The findings add to knowledge about how evolutionary processes shape food preferences.
Abstract: When their sexual motivation system is activated, men behave in ways that would increase their desirability as a mating partner to women. For example, they take greater risks and become more altruistic. We examine the possibility that men’s sexual motivation, when elicited, can influence their preference for meat because meat signals status to others, including women—and signalling status is one way to help men achieve their mating goals. We find support for this hypothesis in three studies involving consumption (Study 1) and preference (Studies 2 and 3) for meat. Men’s desire for status mediates their liking for meat. In contrast, when their sexual motivation system is activated, women like meat less, possibly since they pursue other strategies such as beauty and health to make themselves desirable to men. Thus, we suggest that evolutionary processes shape food preferences. We discuss the contributions and limitations of our results as well as practical implications for reducing meat consumption—to not only improve one’s physical health but food sustainability.
Demographic, phenotypic, and genetic characteristics of centenarians in Okinawa and Japan: Part 1—centenarians in Okinawa
Demographic, phenotypic, and genetic characteristics of centenarians in Okinawa and Japan: Part 1—centenarians in Okinawa. Bradley J. Willcox, Donald Craig Willcox, Makoto Suzuki. Mechanisms of Ageing and Development, Volume 165, Part B, July 2017, Pages 75-79, https://doi.org/10.1016/j.mad.2016.11.001
Highlights
• Okinawa has among the longest lifespans and highest prevalence rates of centenarians in the world − greater than 85% are female.
• The Okinawan centenarian phenotype is typically shorter, leaner, has less prevalent age-related disease, and healthier metabolic profiles than other Japanese.
• Despite consumption of a diet consistent with natural caloric restriction, which likely contributed to the longevity phenotype, Okinawans are also genetically distinct from other Asian populations.
• The relative contribution of environment versus genetics to the longevity phenotype in Okinawa is still under investigation.
Abstract: A study of elderly Okinawans has been carried out by the Okinawa Centenarian Study (OCS) research group for over four decades. The OCS began in 1975 as a population-based study of centenarians (99-year-olds and older) and other selected elderly persons residing in the main island of the Japanese prefecture of Okinawa. As of 2015, over 1000 centenarians have been examined. By several measures of health and longevity the Okinawans can claim to be the world’s healthiest and longest-lived people. In this paper we explore the demographic, phenotypic, and genetic characteristics of this fascinating population.
Republicans have greater longevity compared to Democrats, adjusting for demographics; partly explained by Republicans' higher socioeconomic status, & partly by their personal responsibility ethos
Political parties and mortality: The role of social status and personal responsibility. Viji Diane Kannan et al. Social Science & Medicine, https://doi.org/10.1016/j.socscimed.2019.01.029
Highlights
• Republicans have greater longevity compared to Democrats, adjusting for demographics.
• This relationship is partly explained by Republicans' higher socioeconomic status.
• This relationship is partly explained by Republicans' personal responsibility ethos.
Abstract: Previous research findings across a variety of nations show that affiliation with the conservative party is associated with greater longevity; however, it is thus far unclear what characteristics contribute to this relationship. We examine the political party/mortality relationship in the United States context. The goal of this paper is two-fold: first, we seek to replicate the mortality difference between Republicans and Democrats in two samples, controlling for demographic confounders. Second, we attempt to isolate and test two potential contributors to the relationship between political party affiliation and mortality: (1) socioeconomic status and (2) dispositional traits reflecting a personal responsibility ethos, as described by the Republican party. Graduate and sibling cohorts from the Wisconsin Longitudinal Study were used to estimate mortality risk from 2004 to 2014. In separate Cox proportional hazards models controlling for age and sex, we adjusted first for markers of socioeconomic status (such as wealth and education), then for dispositional traits (such as conscientiousness and active coping), and finally for both socioeconomic status and dispositional traits together. Clogg's method was used to test the statistical significance of attenuation in hazard ratios for each model. In both cohorts, Republicans exhibited lower mortality risk compared to Democrats (Hazard Ratios = 0.79 and 0.73 in graduate and sibling cohorts, respectively [p < 0.05]). This relationship was explained, in part, by socioeconomic status and traits reflecting personal responsibility. Together, socioeconomic factors and dispositional traits account for about 52% (graduates) and 44% (siblings) of Republicans' survival advantage. This study suggests that mortality differences between political parties in the US may be linked to structural and individual determinants of health. 
These findings highlight the need for better understanding of political party divides in mortality rates.
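The paper tests attenuation formally with Clogg's method on the regression coefficients, but a back-of-the-envelope reading of "percent of the advantage explained" simply compares how far covariate adjustment moves the hazard ratio toward 1. The adjusted HR below is hypothetical, chosen only to illustrate the arithmetic; it is not a value from the study:

```python
def pct_advantage_explained(hr_unadjusted, hr_adjusted):
    """Fraction of a survival advantage (1 - HR) accounted for by covariates,
    read off as the attenuation of the hazard ratio toward 1."""
    advantage = 1 - hr_unadjusted   # e.g. 1 - 0.79 = 0.21 lower hazard
    remaining = 1 - hr_adjusted     # advantage left after adjustment
    return (advantage - remaining) / advantage

# Hypothetical illustration: if adjustment moved the graduates' HR from
# 0.79 to, say, 0.89, roughly half the advantage would be "explained",
# in the same ballpark as the ~52% the paper reports via Clogg's method.
print(round(pct_advantage_explained(0.79, 0.89), 2))  # 0.48
```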