Following the poppy trail: Origins and consequences of Mexican drug cartels. Tommy E. Murphy, Martín A. Rossi. Journal of Development Economics, Volume 143, March 2020, 102433. https://doi.org/10.1016/j.jdeveco.2019.102433
Highlights
• We study the origins, and economic and social consequences of Mexican drug cartels.
• The location of current cartels is strongly linked to the location of Chinese migration at the beginning of the 20th century.
• We report a positive connection between cartel presence and better socioeconomic outcomes at the municipality level.
• Our results help explain why drug lords enjoy strong support in the local communities in which they operate.
Abstract: This paper studies the origins, and economic and social consequences of some of the most prominent drug trafficking organizations in the world: the Mexican cartels. It first traces the current location of cartels to the places where Chinese migrated at the beginning of the 20th century, discussing and documenting how both events are strongly connected. Information on Chinese presence at the beginning of the 20th century is then used to instrument for cartel presence today, to identify the effect of cartels on society. Contrary to what seems to happen with other forms of organized crime, the IV estimates in this study indicate that at the local level there is a positive link between cartel presence and better socioeconomic outcomes (e.g. lower marginalization rates, lower illiteracy rates, higher salaries), better public services, and higher tax revenues, evidence that is consistent with the known stylized fact that drug lords tend to have strong support in the local communities in which they operate.
JEL classification: N36, O15
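The paper's identification strategy can be sketched with synthetic data: a minimal two-stage least squares (2SLS) in which early-20th-century Chinese presence instruments for cartel presence today. Everything below is invented for illustration (variable names, coefficients, the "true effect" of 1.5); the paper works with municipality-level Mexican data, not this toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

chinese_1900 = rng.normal(size=n)   # instrument Z (historical migration)
confounder   = rng.normal(size=n)   # unobserved factor, biases naive OLS
cartel  = 0.8 * chinese_1900 + 0.5 * confounder + rng.normal(size=n)
outcome = 1.5 * cartel - 1.0 * confounder + rng.normal(size=n)  # true effect = 1.5

def ols(y, x):
    """Slope-and-intercept least squares; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_ols = ols(outcome, cartel)[1]      # biased by the confounder

# 2SLS: first stage fits cartel presence on the instrument,
# second stage regresses the outcome on the fitted values.
cartel_hat = np.column_stack([np.ones(n), chinese_1900]) @ ols(cartel, chinese_1900)
beta_iv = ols(outcome, cartel_hat)[1]   # close to the true effect
```

The gap between `beta_ols` and `beta_iv` is why the authors instrument: an unobserved confounder biases the naive regression, while an instrument correlated with cartel presence but not with the confounder recovers the causal effect.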
Monday, February 10, 2020
Increasingly, evidence suggests aggressive video games have little impact on player behavior in the realm of aggression and violence, but most professional guild policy statements have failed to reflect these data
Aggressive Video Games Research Emerges from its Replication Crisis (Sort of). Christopher J Ferguson. Current Opinion in Psychology, February 10 2020. https://doi.org/10.1016/j.copsyc.2020.01.002
Highlights
• Previous research on aggressive video games (AVGs) suffered from high false positive rates.
• New, preregistered studies suggest AVGs have little impact on player aggression.
• Prior meta-analyses overestimated the evidence for effects.
• Professional guild statements by the American Psychological Association and American Academy of Pediatrics are inaccurate.
• Consumers may not mimic behaviors seen in fictional media.
Abstract: The impact of aggressive video games (AVGs) on aggression and violent behavior among players, particularly youth, has been debated for decades. In recent years, evidence for publication bias, questionable researcher practices, citation bias and poor standardization of many measures and research designs has indicated that the false positive rate among studies of AVGs has been high. Several studies have undergone retraction. A small recent wave of preregistered studies has largely returned null results for outcomes related to youth violence as well as outcomes related to milder aggression. Increasingly, evidence suggests AVGs have little impact on player behavior in the realm of aggression and violence. Nonetheless, most professional guild policy statements (e.g. American Psychological Association) have failed to reflect these changes in the literature. Such policy statements should be retired or revised lest they misinform the public or do damage to the reputation of these organizations.
The Nuclear Family Was a Mistake: Loneliness, lack of support, fragility
The Nuclear Family Was a Mistake. David Brooks. The Atlantic. Mar 2020. https://www.theatlantic.com/magazine/archive/2020/03/the-nuclear-family-was-a-mistake/605536/
The family structure we’ve held up as the cultural ideal for the past half century has been a catastrophe for many. It’s time to figure out better ways to live together.
Excerpts:
This is the story of our times—the story of the family, once a dense cluster of many siblings and extended kin, fragmenting into ever smaller and more fragile forms. The initial result of that fragmentation, the nuclear family, didn’t seem so bad. But then, because the nuclear family is so brittle, the fragmentation continued. In many sectors of society, nuclear families fragmented into single-parent families, single-parent families into chaotic families or no families.
If you want to summarize the changes in family structure over the past century, the truest thing to say is this: We’ve made life freer for individuals and more unstable for families. We’ve made life better for adults but worse for children. We’ve moved from big, interconnected, and extended families, which helped protect the most vulnerable people in society from the shocks of life, to smaller, detached nuclear families (a married couple and their children), which give the most privileged people in society room to maximize their talents and expand their options. The shift from bigger and interconnected extended families to smaller and detached nuclear families ultimately led to a familial system that liberates the rich and ravages the working-class and the poor.
...
Ever since I started working on this article, a chart has been haunting me [https://www.pewforum.org/2019/12/12/religion-and-living-arrangements-around-the-world/pf_12-12-19_religion-households-00-02/]. It plots the percentage of people living alone in a country against that nation’s GDP. There’s a strong correlation. Nations where a fifth of the people live alone, like Denmark and Finland, are a lot richer than nations where almost no one lives alone, like the ones in Latin America or Africa. Rich nations have smaller households than poor nations. The average German lives in a household with 2.7 people. The average Gambian lives in a household with 13.8 people.
That chart suggests two things, especially in the American context. First, the market wants us to live alone or with just a few people. That way we are mobile, unattached, and uncommitted, able to devote an enormous number of hours to our jobs. Second, when people who are raised in developed countries get money, they buy privacy.
For the privileged, this sort of works. The arrangement enables the affluent to dedicate more hours to work and email, unencumbered by family commitments. They can afford to hire people who will do the work that extended family used to do. But a lingering sadness lurks, an awareness that life is emotionally vacant when family and close friends aren’t physically present, when neighbors aren’t geographically or metaphorically close enough for you to lean on them, or for them to lean on you. Today’s crisis of connection flows from the impoverishment of family life.
I often ask African friends who have immigrated to America what most struck them when they arrived. Their answer is always a variation on a theme—the loneliness. It’s the empty suburban street in the middle of the day, maybe with a lone mother pushing a baby carriage on the sidewalk but nobody else around.
For those who are not privileged, the era of the isolated nuclear family has been a catastrophe. It’s led to broken families or no families; to merry-go-round families that leave children traumatized and isolated; to senior citizens dying alone in a room. All forms of inequality are cruel, but family inequality may be the cruelest. It damages the heart. Eventually family inequality even undermines the economy the nuclear family was meant to serve: Children who grow up in chaos have trouble becoming skilled, stable, and socially mobile employees later on.
Human populations vary substantially & unexpectedly in both the range and pattern of facial sexually dimorphic traits; European & South American populations display larger levels of facial sexual dimorphism than African populations
Kleisner, Karel, Petr Tureček, S. Craig Roberts, Jan Havlicek, Jaroslava V. Valentova, Robert M. Akoko, Juan David Leongómez, et al. 2020. “How and Why Patterns of Sexual Dimorphism in Human Faces Vary Across the World.” PsyArXiv. February 10. doi:10.31234/osf.io/7vdm
Abstract: Sexual selection, including mate choice and intrasexual competition, is responsible for the evolution of some of the most elaborated and sexually dimorphic traits in animals. Although there is clear sexual dimorphism in the shape of human faces, it is not clear whether this is similarly due to mate choice, or whether mate choice affects only part of the facial shape difference between men and women. Here we explore these questions by investigating patterns of both facial shape and facial preference across a diverse set of human populations. We find evidence that human populations vary substantially and unexpectedly in both the range and pattern of facial sexually dimorphic traits. In particular, European and South American populations display larger levels of facial sexual dimorphism than African populations. Neither cross-cultural differences in facial shape variation, differences in body height between sexes, nor differing preferences for facial sex-typicality across countries, explain the observed patterns of facial dimorphism. Altogether, the association between morphological sex-typicality and attractiveness is moderate for women and weak (or absent) for men. Analysis that distinguishes between allometric and non-allometric components reveals that non-allometric sex-typicality is preferred in women’s faces but not in faces of men. This might be due to different regimes of ongoing sexual selection acting on men, such as stronger intersexual selection for body height and more intense intrasexual physical competition, compared with women.
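The allometric vs. non-allometric split used in the abstract can be illustrated in one dimension: regress a trait on a size measure, treat the fitted part as allometric, and the residual as non-allometric. This is a deliberately simplified sketch with synthetic numbers; the authors apply the decomposition to multivariate geometric-morphometric face-shape data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
height = rng.normal(170, 8, n)                  # body-size proxy
trait = 0.04 * height + rng.normal(0, 0.5, n)   # shape trait with some allometry

# Allometric component: the part of the trait predicted by size.
X = np.column_stack([np.ones(n), height])
coef = np.linalg.lstsq(X, trait, rcond=None)[0]
allometric = X @ coef
non_allometric = trait - allometric             # residual (size-free) component

# By construction the non-allometric residual is uncorrelated with size.
r = np.corrcoef(non_allometric, height)[0, 1]
```

Preferences that track `non_allometric` but not `allometric` variation are, in the paper's terms, preferences for sex-typicality beyond what body size alone predicts.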
Caffeine improved global processing, without effect on local information processing, alerting, spatial attention & executive or phonological functions; also was accompanied by faster text reading speed of meaningful sentences
Caffeine improves text reading and global perception. Sandro Franceschini et al. Journal of Psychopharmacology, October 3, 2019. https://doi.org/10.1177/0269881119878178
Abstract
Background: Reading is a unique human skill. Several brain networks involved in this complex skill mainly involve the left hemisphere language areas. Nevertheless, nonlinguistic networks found in the right hemisphere also seem to be involved in sentence and text reading. These areas do not deal with phonological information, but are involved in verbal and nonverbal pattern information processing. The right hemisphere is responsible for global processing of a scene, which is needed for developing reading skills.
Aims: Caffeine seems to affect global pattern processing specifically. Consequently, our aim was to discover if it could enhance text reading skill.
Methods: In two mechanistic studies (n=24 and n=53), we tested several reading skills, global and local perception, alerting, spatial attention and executive functions, as well as rapid automatised naming and phonological memory, using a double-blind, within-subjects, repeated-measures design in typical young adult readers.
Results: A single dose of 200 mg caffeine improved global processing, without any effect on local information processing, alerting, spatial attention and executive or phonological functions. This improvement in global processing was accompanied by faster text reading speed of meaningful sentences, whereas single word/pseudoword or pseudoword text reading abilities were not affected. These effects of caffeine on reading ability were enhanced by mild sleep deprivation.
Conclusions: These findings show that a small quantity of caffeine could improve global processing and text reading skills in adults.
Keywords: Visual perception, reading enhancement, parallel processing, psychostimulant, context processing
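A minimal sketch of the within-subjects logic behind these results: each participant is measured under both caffeine and placebo, so a paired contrast removes the large between-subject differences in reading speed. The data below are simulated, with a hypothetical 0.25-unit gain, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 53                                    # size of the study's larger sample
baseline = rng.normal(6.0, 1.0, n)        # per-subject reading speed (illustrative units)
placebo  = baseline + rng.normal(0, 0.3, n)
caffeine = baseline + 0.25 + rng.normal(0, 0.3, n)   # hypothetical gain

# Paired t statistic: differencing cancels the stable per-subject level,
# leaving only the condition effect plus measurement noise.
d = caffeine - placebo
t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
```

This is why a repeated-measures design can detect a small effect with only a few dozen participants: the baseline variance (sd 1.0 here) never enters the test.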
Check also Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. https://www.bipartisanalliance.com/2020/02/those-who-consumed-200-mg-of-caffeine.html
And Surprise: Consuming 1–5 cups of coffee/day was related to lower mortality among never smokers; they forgot to discount/adjust for pack-years of smoking, healthy & unhealthy foods, & added sugar
Dietary research on coffee: Improving adjustment for confounding. David R Thomas, Ian D Hodges. Current Developments in Nutrition, nzz142, December 26 2019. https://www.bipartisanalliance.com/2019/12/surprise-consuming-15-cups-of-coffeeday.html
And Inverse association between caffeine intake and depressive symptoms in US adults: data from National Health and Nutrition Examination Survey (NHANES) 2005–2006. Sohrab Iranpour, Siamak Sabour. Psychiatry Research, Nov 2018. https://doi.org/10.1016/j.psychres.2018.11.004
Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior, both for men and women
Motivations for Suicide: Converging Evidence from Clinical and Community Samples. Alexis M. May, Mikayla C. Pachkowski, E. David Klonsky. Journal of Psychiatric Research, February 10 2020. https://doi.org/10.1016/j.jpsychires.2020.02.010
Highlights
• Unbearable psychological pain and hopelessness are overwhelmingly important motivations for suicidal behavior.
• Regardless of the time since attempt, pain and hopelessness were critical motivations.
• Pain and hopelessness were the strongest attempt motivations for both men and women.
• The Inventory of Motivations for Suicide Attempts (IMSA) quickly assesses individual motivations.
Abstract: Understanding what motivates suicidal behavior is critical to effective prevention and clinical intervention. The Inventory of Motivations for Suicide Attempts (IMSA) is a self-report measure developed to assess a wide variety of potential motivations for suicide. The purpose of this study is to examine the measure’s psychometric and descriptive properties in two distinct populations: 1) adult psychiatric inpatients (n = 59) with recent suicide attempts (median of 3 days prior) and 2) community participants assessed online (n = 222) who had attempted suicide a median of 5 years earlier. Findings were very similar across both samples and consistent with initial research on the IMSA in outpatients and undergraduates who had attempted suicide. First, the individual IMSA scales demonstrated good internal reliability and were well represented by a two-factor superordinate structure: 1) Internal Motivations and 2) Communication Motivations. Second, in both samples unbearable mental pain and hopelessness were the most common and strongly endorsed motivations, while interpersonal influence was the least endorsed. Finally, motivations were similar in men and women, a pattern that previous work was not in a position to examine. Taken together with previous work, findings suggest that the nature, structure, and clinical correlates of suicide attempt motivations remain consistent across diverse individuals and situations. The IMSA may serve as a useful tool in both research and clinical contexts to quickly assess individual suicide attempt motivations.
Minimal Relationship between Local Gyrification (wrinkles in the cerebral cortex) and General Cognitive Ability in Humans
Minimal Relationship between Local Gyrification and General Cognitive Ability in Humans. Samuel R Mathias et al. Cerebral Cortex, bhz319, February 9 2020. https://doi.org/10.1093/cercor/bhz319
Abstract: Previous studies suggest that gyrification is associated with superior cognitive abilities in humans, but the strength of this relationship remains unclear. Here, in two samples of related individuals (total N = 2882), we calculated an index of local gyrification (LGI) at thousands of cortical surface points using structural brain images and an index of general cognitive ability (g) using performance on cognitive tests. Replicating previous studies, we found that phenotypic and genetic LGI–g correlations were positive and statistically significant in many cortical regions. However, all LGI–g correlations in both samples were extremely weak, regardless of whether they were significant or nonsignificant. For example, the median phenotypic LGI–g correlation was 0.05 in one sample and 0.10 in the other. These correlations were even weaker after adjusting for confounding neuroanatomical variables (intracranial volume and local cortical surface area). Furthermore, when all LGIs were considered together, at least 89% of the phenotypic variance of g remained unaccounted for. We conclude that the association between LGI and g is too weak to have profound implications for our understanding of the neurobiology of intelligence. This study highlights potential issues when focusing heavily on statistical significance rather than effect sizes in large-scale observational neuroimaging studies.
A novel finding of the present study was that LGI was heritable across the cortex, extending a previous study that established the heritability of whole-brain GI (Docherty et al. 2015). This finding was not particularly surprising because many features of brain morphology are heritable. Nevertheless, it was necessary to establish the heritability of LGI before calculating genetic LGI–g correlations, which are only meaningful if both LGI and g are heritable traits. The previous study estimated the heritability of GI to be 0.71, which is much greater than most of the heritability estimates for LGI observed in GOBS or HCP. This result is also not surprising, because GI, as a whole-brain aggregate, is likely subject to less measurement error than vertex-wise LGI. Heritabilities of all other traits were consistent with those published in previous studies.
The present study represents a replication of previous work and provides several important extensions to our understanding of the relationship between gyrification and cognition. First, we replicated previous work by finding positive and significant phenotypic LGI–g correlations (e.g., Gregory et al. 2016). Furthermore, we found that genetic LGI–g correlations were positive and significant (but only in HCP), suggesting that the relationship between gyrification and intelligence may be driven by pleiotropy. Since environmental LGI–g correlations were not significant, their net sign differed across GOBS and HCP, and their spatial patterns showed no consistency across samples, it is reasonable to conclude that they mostly reflected measurement error rather than meaningful shared environmental contributions to LGI and g.
In our view, the most important finding from the present study is that all LGI–g correlations, even the significant ones, were weak. Phenotypically, LGI at a typical vertex poorly predicted g. Even when the predictive ability of all LGIs was considered together via ridge regression, at least 89% of the variance of g remained unaccounted for. Phenotypic and genetic LGI–g correlations were weaker than ICV–g correlations in the same participants, and about the same as area–g correlations. Partialing out ICV or area further reduced LGI–g correlations.
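"Partialing out" ICV or area amounts to correlating residuals, as sketched below with synthetic data in which the LGI–g link runs entirely through surface area, so the partial correlation collapses toward zero. This is an exaggerated version of the attenuation the authors report, and all coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
area = rng.normal(size=n)
lgi = 0.7 * area + rng.normal(0, 0.7, n)   # LGI tracks surface area
g   = 0.3 * area + rng.normal(0, 1.0, n)   # g tracks area, not LGI directly

def residualize(y, x):
    """Remove the linear effect of x from y."""
    X = np.column_stack([np.ones(len(x)), x])
    return y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

r_raw = np.corrcoef(lgi, g)[0, 1]                    # nonzero via shared area
r_partial = np.corrcoef(residualize(lgi, area),
                        residualize(g, area))[0, 1]  # near zero once area is removed
```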
The volume of cortical mantle is often computed as the product of its area and thickness, but at the resolution of meshes typically used to represent the cortex, the variability of area is higher than the variability of thickness such that surface area is the primary contributor to the variability of cortical volume (Winkler et al. 2010), and therefore of its relationship to other measurements; the same holds, more strongly even, for parcellations of the cortex in large anatomical or functional regions. This means that the association between overall brain volume and cognitive abilities reported by previous studies (e.g., Pietschnig et al. 2015) is probably primarily driven by area–g correlations (Vuoksimaa et al. 2015). LGI is strongly correlated with area (Gautam et al. 2015; Hogstrom et al. 2013), which explains why partialing out either ICV or area reduced phenotypic and genetic LGI–g correlations in the present study. Thus, we conclude, based on our results, that the association between gyrification and cognitive abilities to a large extent reflects the already well-established relationship between surface area and cognitive abilities, and that the particular association between the unique portion of gyrification and cognitive abilities is extremely small.
The above conclusion is consistent with that of a previous twin study (Docherty et al. 2015), which examined genetic associations between overall cortical surface area, whole-brain GI, and cognitive abilities. The authors concluded that the genetic GI–g correlation could be more or less fully explained by the area–g correlation. It has been argued previously that focusing on whole-brain GI may miss important neuroanatomical specificity; however, our findings suggest that Docherty et al.’s conclusion holds for both local and global gyrification.
The P-FIT is a popular hypothesis concerning which brain regions matter most for human cognition (Jung and Haier 2007). The P-FIT was initially proposed to explain activation patterns observed during functional MRI experiments, but has been extended to aspects of brain structure. Previous studies have suggested that the association between gyrification and cognitive abilities may be stronger in P-FIT regions than the rest of the brain (Green et al. 2018; Gregory et al. 2016). However, when we tested this hypothesis, we actually found evidence to the contrary. Since neuroanatomical patterns of phenotypic and genetic LGI–g correlations were consistent across GOBS and HCP, this unexpected finding was unlikely to have been caused by a lack of specificity, such as if LGI–g correlations were distributed randomly over the cortex. Instead, while LGI–g correlations exhibited a characteristic neuroanatomical pattern, this pattern did not match the P-FIT. A potential limitation of the present study in this regard is that there is no widely accepted method of matching Brodmann areas (used to define P-FIT regions) to surface-based ROIs (used to group vertices). Therefore, one could argue that our selection of P-FIT regions was incorrect. While our selection was based on that of a previous study (Green et al. 2018), we nevertheless reperformed our analysis several times with different selections of P-FIT regions, and the results remained the same. Importantly, although we argue that the P-FIT is not a good model for the association between gyrification—a purely structural aspect of cortical organization—and cognitive abilities, our results should not be used to criticize the P-FIT as a hypothesis of the brain’s functional organization, because function does not necessarily follow structure.
Most of our results were consistent across samples. However, estimates of heritability and genetic correlations were generally weaker in GOBS than HCP. Notably, some genetic LGI–g correlations were strong enough to surpass the FDR-corrected threshold for significance in HCP, but not GOBS. Such differences could be related to study design. One limitation of all family studies is that polygenic effects are susceptible to inflation due to shared environmental factors, which would cause overestimation of both heritability and genetic correlations. It could be argued that extended-pedigree studies, such as GOBS, are less susceptible to this kind of inflation than twin studies, such as HCP, because there are usually fewer shared environmental factors between distantly related individuals than twins (Almasy and Blangero 2010); this reduction in inflation comes at the expense of a reduction in power to detect polygenic effects, which could also explain the lack of significant genetic correlations in GOBS. It is unlikely that differences in results between samples were caused by differences in scanner or scanning protocol (Han et al. 2006). Furthermore, while GOBS and HCP participants completed different cognitive batteries, both were comprehensive in terms of measured cognitive abilities, ensuring that g indexed a similar construct in both samples.
With the recent emergence of large, open-access data sets and international consortia, neuroimaging and genetics studies have entered a new era characterized by samples comprising many thousands of participants. In such large studies, trivial effects may be labeled as statistically significant. This observation is not new (Berkson 1938) and numerous solutions have been proposed, such as adopting more stringent significance criteria (Benjamin et al. 2018), scaling criteria by sample size (Mudge et al. 2012), testing interval-null rather than point-null hypotheses (Morey and Rouder 2011), and, most radically, abandoning the notion of statistical significance altogether (McShane et al. 2019). One could argue that these solutions suffer from their own drawbacks and are unlikely to be adopted by the scientific mainstream in near future. Therefore, in the meantime, we believe that it is imperative to judge, at least qualitatively, whether the sizes of statistically significant effects are large enough to justify one’s conclusions, particularly when these conclusions may have broad, overarching implications. This idea is not new either (Kelley and Preacher 2012) but deserves to be restated. Based on the results of the present study, we are inclined to believe that gyrification minimally explains variation in cognitive abilities and therefore has somewhat limited implications for our understanding of the neurobiology of human intelligence.
Abstract: Previous studies suggest that gyrification is associated with superior cognitive abilities in humans, but the strength of this relationship remains unclear. Here, in two samples of related individuals (total N = 2882), we calculated an index of local gyrification (LGI) at thousands of cortical surface points using structural brain images and an index of general cognitive ability (g) using performance on cognitive tests. Replicating previous studies, we found that phenotypic and genetic LGI–g correlations were positive and statistically significant in many cortical regions. However, all LGI–g correlations in both samples were extremely weak, regardless of whether they were significant or nonsignificant. For example, the median phenotypic LGI–g correlation was 0.05 in one sample and 0.10 in the other. These correlations were even weaker after adjusting for confounding neuroanatomical variables (intracranial volume and local cortical surface area). Furthermore, when all LGIs were considered together, at least 89% of the phenotypic variance of g remained unaccounted for. We conclude that the association between LGI and g is too weak to have profound implications for our understanding of the neurobiology of intelligence. This study highlights potential issues when focusing heavily on statistical significance rather than effect sizes in large-scale observational neuroimaging studies.
Discussion
In the present study, we analyzed data from two samples of related individuals to examine the association between gyrification and general cognitive ability. We used a popular automatic method to calculate LGI across the cortex from MR images (Schaer et al. 2008), and calculated g from performance on batteries of cognitive tests. We estimated the heritability of height, ICV, and g, as well as the heritability of LGI, area, and thickness at all vertices. We estimated phenotypic, genetic, and environmental LGI–g correlations, as well as partial LGI–g correlations with height, ICV, area (at the same vertex), and thickness (at the same vertex) as potential confounding variables. We estimated the amount of phenotypic variance of g explained by all LGIs together via ridge regression, and examined the across-sample consistency of neuroanatomical specificity in the heritability of LGI, area, and thickness, as well as in LGI–g correlations. Finally, we tested whether heritability estimates and LGI–g correlations were stronger in regions implicated by the P-FIT, a model of the neurological basis of human intelligence (Jung and Haier 2007).

A novel finding of the present study was that LGI was heritable across the cortex, extending a previous study that established the heritability of whole-brain GI (Docherty et al. 2015). This finding was not particularly surprising, because many features of brain morphology are heritable. Nevertheless, it was necessary to establish the heritability of LGI before calculating genetic LGI–g correlations, which are meaningful only if both LGI and g are heritable traits. The previous study estimated the heritability of GI to be 0.71, which is much greater than most of the heritability estimates for LGI observed in GOBS or HCP. This result is also unsurprising, because GI, as a whole-brain measure, is likely less contaminated by measurement error than vertex-wise LGI. Heritabilities of all other traits were consistent with those published in previous studies.
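The ridge-regression step described above (predicting g from all vertex-wise LGIs at once) can be sketched as follows. This is a minimal illustration on simulated data with hypothetical dimensions, using a closed-form ridge solution with an out-of-sample split; it is not the actual GOBS/HCP pipeline, and all effect sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 600, 2000  # hypothetical numbers of subjects and cortical vertices

# Simulate vertex-wise LGI that shares only a weak signal with g
latent = rng.standard_normal(n)
X = 0.02 * latent[:, None] + rng.standard_normal((n, p))
g = latent + rng.standard_normal(n)

# Hold out subjects so explained variance is estimated out of sample
X_tr, X_te = X[:400], X[400:]
g_tr, g_te = g[:400], g[400:]

# Closed-form ridge solution: beta = (X'X + lambda*I)^-1 X'y
lam = 1e3
beta = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(p), X_tr.T @ g_tr)
g_hat = X_te @ beta

r2 = np.corrcoef(g_te, g_hat)[0, 1] ** 2
print(f"out-of-sample variance of g explained by all LGIs: {r2:.2%}")
```

The penalty term is what makes the many-predictors, few-subjects regime (2000 vertices, 400 training subjects here) tractable; without it, X'X would be singular.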
The present study represents a replication of previous work and provides several important extensions to our understanding of the relationship between gyrification and cognition. First, we replicated previous work by finding positive and significant phenotypic LGI–g correlations (e.g., Gregory et al. 2016). Furthermore, we found that genetic LGI–g correlations were positive and significant (but only in HCP), suggesting that the relationship between gyrification and intelligence may be driven by pleiotropy. Since environmental LGI–g correlations were not significant, their net sign differed across GOBS and HCP, and their spatial patterns showed no consistency across samples, it is reasonable to conclude that they mostly reflected measurement error rather than meaningful shared environmental contributions to LGI and g.
In our view, the most important finding from the present study is that all LGI–g correlations, even the significant ones, were weak. Phenotypically, LGI at a typical vertex poorly predicted g. Even when the predictive ability of all LGIs was considered together via ridge regression, at least 89% of the variance of g remained unaccounted for. Phenotypic and genetic LGI–g correlations were weaker than ICV–g correlations in the same participants, and about the same as area–g correlations. Partialing out ICV or area further reduced LGI–g correlations.
The volume of the cortical mantle is often computed as the product of its area and thickness, but at the resolution of meshes typically used to represent the cortex, the variability of area is higher than the variability of thickness, such that surface area is the primary contributor to the variability of cortical volume (Winkler et al. 2010), and therefore to its relationship to other measurements; the same holds even more strongly for parcellations of the cortex into large anatomical or functional regions. This means that the association between overall brain volume and cognitive abilities reported by previous studies (e.g., Pietschnig et al. 2015) is probably primarily driven by area–g correlations (Vuoksimaa et al. 2015). LGI is strongly correlated with area (Gautam et al. 2015; Hogstrom et al. 2013), which explains why partialing out either ICV or area reduced phenotypic and genetic LGI–g correlations in the present study. Thus, based on our results, we conclude that the association between gyrification and cognitive abilities to a large extent reflects the already well-established relationship between surface area and cognitive abilities, and that the association between the unique portion of gyrification and cognitive abilities is extremely small.
The above conclusion is consistent with that of a previous twin study (Docherty et al. 2015), which examined genetic associations between overall cortical surface area, whole-brain GI, and cognitive abilities. The authors concluded that the genetic GI–g correlation could be more or less fully explained by the area–g correlation. It has been argued previously that focusing on whole-brain GI may miss important neuroanatomical specificity; however, our findings suggest that Docherty et al.’s conclusion holds for both local and global gyrification.
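The partialing-out logic described above can be illustrated with a short simulation. The effect sizes below are invented for illustration (LGI strongly tied to area, g tracking area but not the unique part of LGI); they are not estimates from this study:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the linear effect of z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 2000
area = rng.standard_normal(n)                    # stand-in for local surface area
lgi = 0.8 * area + 0.6 * rng.standard_normal(n)  # LGI strongly correlated with area
g = 0.2 * area + rng.standard_normal(n)          # g tracks area, not LGI's unique part

raw = np.corrcoef(lgi, g)[0, 1]
adj = partial_corr(lgi, g, area)
print(f"raw LGI-g r = {raw:.3f}, after partialing out area r = {adj:.3f}")
```

Because the simulated LGI–g association runs entirely through area, the raw correlation is positive while the area-partialed correlation collapses toward zero, which is the pattern the study reports.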
The P-FIT is a popular hypothesis concerning which brain regions matter most for human cognition (Jung and Haier 2007). The P-FIT was initially proposed to explain activation patterns observed during functional MRI experiments, but has been extended to aspects of brain structure. Previous studies have suggested that the association between gyrification and cognitive abilities may be stronger in P-FIT regions than in the rest of the brain (Green et al. 2018; Gregory et al. 2016). However, when we tested this hypothesis, we found evidence to the contrary. Since neuroanatomical patterns of phenotypic and genetic LGI–g correlations were consistent across GOBS and HCP, this unexpected finding was unlikely to have been caused by a lack of specificity, as would be expected if LGI–g correlations were distributed randomly over the cortex. Instead, while LGI–g correlations exhibited a characteristic neuroanatomical pattern, this pattern did not match the P-FIT. A potential limitation of the present study in this regard is that there is no widely accepted method of matching Brodmann areas (used to define P-FIT regions) to surface-based ROIs (used to group vertices). Therefore, one could argue that our selection of P-FIT regions was incorrect. While our selection was based on that of a previous study (Green et al. 2018), we nevertheless repeated our analysis with several different selections of P-FIT regions, and the results remained the same. Importantly, although we argue that the P-FIT is not a good model for the association between gyrification, a purely structural aspect of cortical organization, and cognitive abilities, our results should not be used to criticize the P-FIT as a hypothesis of the brain's functional organization, because function does not necessarily follow structure.
Most of our results were consistent across samples. However, estimates of heritability and genetic correlations were generally weaker in GOBS than HCP. Notably, some genetic LGI–g correlations were strong enough to surpass the FDR-corrected threshold for significance in HCP, but not GOBS. Such differences could be related to study design. One limitation of all family studies is that polygenic effects are susceptible to inflation due to shared environmental factors, which would cause overestimation of both heritability and genetic correlations. It could be argued that extended-pedigree studies, such as GOBS, are less susceptible to this kind of inflation than twin studies, such as HCP, because there are usually fewer shared environmental factors between distantly related individuals than twins (Almasy and Blangero 2010); this reduction in inflation comes at the expense of a reduction in power to detect polygenic effects, which could also explain the lack of significant genetic correlations in GOBS. It is unlikely that differences in results between samples were caused by differences in scanner or scanning protocol (Han et al. 2006). Furthermore, while GOBS and HCP participants completed different cognitive batteries, both were comprehensive in terms of measured cognitive abilities, ensuring that g indexed a similar construct in both samples.
With the recent emergence of large, open-access data sets and international consortia, neuroimaging and genetics studies have entered a new era characterized by samples comprising many thousands of participants. In such large studies, trivial effects may be labeled as statistically significant. This observation is not new (Berkson 1938) and numerous solutions have been proposed, such as adopting more stringent significance criteria (Benjamin et al. 2018), scaling criteria by sample size (Mudge et al. 2012), testing interval-null rather than point-null hypotheses (Morey and Rouder 2011), and, most radically, abandoning the notion of statistical significance altogether (McShane et al. 2019). One could argue that these solutions suffer from their own drawbacks and are unlikely to be adopted by the scientific mainstream in the near future. Therefore, in the meantime, we believe that it is imperative to judge, at least qualitatively, whether the sizes of statistically significant effects are large enough to justify one's conclusions, particularly when these conclusions may have broad, overarching implications. This idea is not new either (Kelley and Preacher 2012) but deserves to be restated. Based on the results of the present study, we are inclined to believe that gyrification minimally explains variation in cognitive abilities and therefore has somewhat limited implications for our understanding of the neurobiology of human intelligence.
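The point about trivial effects reaching significance at large N can be made concrete with back-of-the-envelope numbers taken from the abstract (a median phenotypic LGI–g correlation of about 0.05, and a combined N of 2882 used here purely for illustration). The normal approximation to the t distribution is adequate at this sample size:

```python
import math

def corr_pvalue(r, n):
    """Two-sided p-value for a Pearson correlation r at sample size n.
    Uses the normal approximation to the t distribution (fine for large n)."""
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))

r, n = 0.05, 2882
# Significant at conventional thresholds, yet r**2 is only 0.25% of the variance
print(f"p = {corr_pvalue(r, n):.4f}, variance explained = {r * r:.2%}")
```

The same r = 0.05 at, say, N = 100 would be nowhere near significant, which is exactly why effect size, not the p-value, has to carry the interpretive weight in consortium-scale samples.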
Those who consumed 200 mg of caffeine showed significantly enhanced problem-solving abilities; caffeine had no significant effects on creative generation or on working memory
Zabelina, Darya, and Paul Silvia. 2020. “Percolating Ideas: The Effects of Caffeine on Creative Thinking and Problem Solving.” PsyArXiv. February 9. doi:10.31234/osf.io/6g9av
Abstract: Caffeine is the most widely consumed psychotropic drug in the world, with numerous studies documenting the effects of caffeine on people’s alertness, vigilance, mood, concentration, and attentional focus. The effects of caffeine on creative thinking, however, remain unknown. In a randomized placebo-controlled between-subject double-blind design the present study investigated the effect of moderate caffeine consumption on creative problem solving (i.e., convergent thinking) and creative idea generation (i.e., divergent thinking). We found that participants who consumed 200 mg of caffeine (approximately one 12 oz cup of coffee, n = 44), compared to those in the placebo condition (n = 44), showed significantly enhanced problem-solving abilities. Caffeine had no significant effects on creative generation or on working memory. The effects remained after controlling for participants’ caffeine expectancies, whether they believed they consumed caffeine or a placebo, or for changes in mood. Possible mechanisms and future directions are discussed.