Burch, R. L., & Widman, D. R. (2022). The point of nipple erection 3: Sexual and social expectations of women with nipple erection. Evolutionary Behavioral Sciences, Sep 2022. https://doi.org/10.1037/ebs0000312
Abstract: If nipple erection signals/is a cue to sexual interest or arousal, we would expect that women with nipple erection would be sexualized: having a presumed higher sexual arousal and promiscuity and lower mental abilities and morality. To examine this, 234 participants rated pictures of women with and without salient nipple erection (faces were obscured to eliminate facial cues). Participants completed a hypothetical sexual behavior profile (30 items including perceptions of morality and intelligence) for each stimulus photo. Both men and women perceived women with nipple erection as less intelligent, less moral, and more likely to engage in sexual behaviors. These are the primary markers of sexual objectification. They also rated these women as having poorer sexual health and being less sexually manipulative. Men perceived all the women in the stimulus photos as being less moral and having more male sexual partners, indicating that men objectified the stimuli overall more than women. Women reported that women with erect nipples had more male sex partners, lost their virginity at a younger age, and had lower quality relationships. In summary, female nipple erection, which is an uncontrollable reflex, triggers sexualization and objectification by both men and women who observe it.
Impact Statement: Nipple erection is a cue that triggers sexualization and objectification of women; women with nipple erection are thought of as less intelligent, less moral, and more promiscuous by both men and women. Women cannot control their nipple erection, yet these data show that it is used by men and women to make presumptions about women’s character and behavior.
Friday, September 30, 2022
Both men and women perceived women with nipple erection as less intelligent, less moral, more likely to engage in sexual behaviors, as having poorer sexual health, and as less sexually manipulative
Schizophrenia has been an evolutionary paradox: it has high heritability, but it is associated with decreased reproductive success
Schizophrenia: the new etiological synthesis. Markus J. Rantala et al. Neuroscience & Biobehavioral Reviews, September 28 2022, 104894. https://doi.org/10.1016/j.neubiorev.2022.104894
Highlights
• Schizophrenia (SZR) is rare or nonexistent in hunter-gatherer populations
• Several microbial infections can trigger SZR
• SZR is associated with neuroinflammation and gut dysbiosis
• Parasite x genotype x stress interaction forms the new etiological synthesis for SZR
• Evolutionary mismatch explains SZR better than other evolutionary hypotheses
Abstract: Schizophrenia has been an evolutionary paradox: it has high heritability, but it is associated with decreased reproductive success. The causal genetic variants underlying schizophrenia are thought to be under weak negative selection. To unravel this paradox, many evolutionary explanations have been suggested for schizophrenia. We critically discuss the constellation of evolutionary hypotheses for schizophrenia, highlighting the lack of empirical support for most existing evolutionary hypotheses—with the exception of the relatively well supported evolutionary mismatch hypothesis. It posits that evolutionarily novel features of contemporary environments, such as chronic stress, low-grade systemic inflammation, and gut dysbiosis, increase susceptibility to schizophrenia. Environmental factors such as microbial infections (e.g., Toxoplasma gondii) can better predict the onset of schizophrenia than polygenic risk scores. However, researchers have not been able to explain why only a small minority of infected people develop schizophrenia. The new etiological synthesis of schizophrenia indicates that an interaction between host genotype, microbe infection, and chronic stress causes schizophrenia, with neuroinflammation and gut dysbiosis mediating this etiological pathway. Instead of just alleviating symptoms with drugs, the parasite x genotype x stress model emphasizes that schizophrenia treatment should focus on detecting and treating possible underlying microbial infection(s), neuroinflammation, gut dysbiosis, and chronic stress.
6. Critical evaluation of previous evolutionary hypotheses for schizophrenia
Many of these previous evolutionary hypotheses lack proper empirical validation (Fig. 2) and, above and beyond the critical discussion presented in this article, scientific approaches to schizophrenia would benefit from critical appraisals and rigorous research devising crucial experiments that pit each hypothesis against the others (cf. Platt, 1964). Even though some of the previous evolutionary hypotheses presented above might explain some of the characteristic manifestations of psychosis or delusions and their relations to the genetics of schizophrenia, all of them—with the exception of the mismatch hypothesis—fail to provide a convincing explanation for the negative symptoms of schizophrenia such as blunting of affect, violent behavior, aggression, apathy, anhedonia, loss of motivation, or cognitive deficit. The hypotheses—with the exception of the mismatch hypothesis—also fail to explain impairments in executive functions typically observed in schizophrenic patients. In addition, the hypotheses have not been able to explain why schizophrenia is commonly comorbid with many other mental disorders (Buckley et al., 2009, Tsai and Rosenheck, 2013). Major depressive disorder, for instance, is a common psychiatric comorbidity in patients with schizophrenia. A recent meta-analysis found that 32.6% of patients with schizophrenia would meet the diagnostic criteria of major depressive disorder (Etchecopar-Etchart et al., 2021) and symptoms of major depressive disorder are common prodromal symptoms of psychosis (Hafner and an der Heiden, 2011). In addition, a 12-month follow-up study found that 80% of patients with first-episode psychosis would also fulfill the diagnostic criteria of major depressive disorder (Upthegrove et al., 2010). Psychosis may also occur in patients with major depressive disorder, bipolar disorder, and schizoaffective disorder (Dubovsky et al., 2021), highlighting that these disorders are not completely separate entities.
The comorbidity of these disorders is most elegantly explained by the parasite hypothesis of schizophrenia presented in Section 2, as well as their shared genetic basis (Anttila et al., 2018; Hindley, 2021; Legge et al., 2021). These disorders are characterized by sickness behavior that is caused either by the activation of the immune system via low-grade systemic inflammation (Rantala et al., 2018, Rantala et al., 2021) or direct manipulation of host behavior by parasites (Borráz-León et al., 2021).
Schizophrenia is not a discrete disorder and separating it from other disorders is often difficult. This makes all previous evolutionary hypotheses (except the mismatch hypothesis) somewhat problematic. The classification of these disorders is often difficult because of overlapping symptoms, which may result in a patient receiving different diagnoses from different psychiatrists. Hallucinations and delusions also occur in patients with bipolar disorder and Alzheimer's disease. About half of Alzheimer's patients have psychosis (Murray et al., 2014) and a study on 1,342 patients with bipolar disorder type I found that 73.8% had a lifetime prevalence of psychotic symptoms (van Bergen et al., 2019). A plausible scientific framework would also explain occurrences of psychosis, delusions, and hallucinations in other disorders, not just in schizophrenia. The parasite hypothesis is able to explain why psychosis may occur in other disorders: differences in the parasite species that individuals are infected with, differences in timing of infection, genetic vulnerability, and microbiota may explain whether a person will have symptoms of schizophrenia or, say, bipolar disorder, or depression with psychotic features (see Section 2.3).
Schizophrenia has traditionally been classified into different subtypes that differ in symptomatology. The prevalence of these subtypes varies geographically and by time (Jablensky et al., 1992). Schizophrenia is not a single disorder. Instead, it seems that schizophrenia is only an umbrella term for a group of separate disorders with some overlapping symptoms. None of the previous evolutionary explanations for schizophrenia have explained the heterogeneous subtypes of schizophrenia and their persistence in modern human populations—although the mismatch hypothesis and the balanced polymorphism hypothesis could conceivably account for these findings. The parasite hypothesis, in contrast, provides an explanation for the heterogeneity by suggesting that it results from different parasite/pathogen species causing schizophrenia and/or individual differences in responses to microbial infections (see Section 2).
With the exception of the mismatch hypothesis, all previous evolutionary hypotheses also fail to provide a rationale for why and how neuroinflammation plays a role in schizophrenia (see Section 2.1.) and why there are inflammatory marker subtypes in schizophrenia (see e.g., Lizano et al., 2021). The parasite hypothesis, in combination with the mismatch hypothesis, explains why neuroinflammation occurs in schizophrenia and why there are different inflammatory marker subtypes. The parasite x genotype x stress model also explains why schizophrenia is more common in cities than in rural areas (see Section 3). Since chronic stress—which is often a triggering factor in psychosis—is rare among people with hunter-gatherer lifestyle(s) (Brenner et al., 2015), the parasite x genotype x stress model, coupled with the mismatch hypothesis, explains why schizophrenia is rare among them (cf. Abed and Abbas, 2011b). Except for the mismatch hypothesis, previous evolutionary hypotheses for schizophrenia have not been able to explain why schizophrenia is more common in people with modern western lifestyles and why exposure to natural environments in neighborhoods or around residential areas is associated with lower schizophrenia rates (Engemann et al., 2019, Engemann et al., 2020, Kristine et al., 2018).
None of the previous evolutionary hypotheses have been able to explain why adverse life events play an important role in the onset of schizophrenia and psychosis. For example, exposure to childhood trauma is associated with a 2- to 3-fold increase in risk of psychotic outcomes (Croft et al., 2019, Rokita et al., 2021, Trotta et al., 2015). Likewise, cumulative stress pathophysiology is often a triggering factor in psychosis (Nugent et al., 2015), and cortical stress regulation is disrupted in patients with schizophrenia (Schifani et al., 2018). There are also differences in gut microbiota between healthy people and those with schizophrenia (Section 4). These observations do not fit well with previous evolutionary hypotheses for schizophrenia, with the exception of the mismatch hypothesis. Thus, most existing evolutionary hypotheses do not explain empirical findings about schizophrenia and have acquired limited empirical support of their own. The parasite x genotype x stress model, in contrast, suggests that stress negatively impacts immune function and thereby facilitates parasitic/pathogenic effects on the brain (see Section 2).
Many of the previous evolutionary explanations of schizophrenia are examples of evolutionary storytelling: they provide adaptive explanations for a phenomenon which is neither adaptive nor an adaptation, but rather a pathological side effect of microbial infection and chronic stress. Although the parasite x genotype x stress model can explain the occurrence of schizophrenia at one level of analysis, the sexual selection hypothesis (Nettle, 2001), the reformulated social brain hypothesis (Abed and Abbas, 2011b), and the life history hypothesis of schizophrenia (Del Giudice, 2010, 2017; Del Giudice et al., 2014) may partly explain why genetic variants that interact with pathogen infection and chronic stress (Fig. 1) may exist in the human gene pool in the first place. Furthermore, the mismatch hypothesis is an integral component of the parasite x genotype x stress model; environmental mismatch, after all, leads to the chronic stress that makes individuals with contemporary western lifestyles more susceptible to schizophrenia than those with traditional lifestyles (Fig. 1). Despite the abundant evidence supporting the parasite x genotype x stress model coupled with the environmental mismatch hypothesis of schizophrenia, it is also possible that future research will discover other hypotheses at proximate and ultimate levels of analysis that more accurately carve the biopsychosocial nature of schizophrenia at its joints.
Thursday, September 29, 2022
Self-perceived attractiveness, either chronically experienced or temporarily heightened, predicted and increased self-interested behavioral intention and behavior
Mirror, Mirror on the Wall, I Deserve More Than All: Perceived Attractiveness and Self-Interested Behavior. Fei Teng et al. Evolution and Human Behavior, September 29 2022, https://doi.org/10.1016/j.evolhumbehav.2022.09.005
Abstract: A substantial amount of research has demonstrated that good-looking individuals are perceived and treated in a favorable manner; however, relatively little research has examined how attractive people actually behave. Two predominant theories of attractiveness, the self-fulfilling "what is beautiful is good" account from social psychology and the evolutionary perspective on attractiveness, make divergent predictions in this regard. The current research systematically investigated whether physical attractiveness can predict self-interested behavior and, if so, in which direction. Across five studies (N = 1303), self-perceived attractiveness, either chronically experienced (Studies 1–3) or temporarily heightened (Studies 4 and 5), predicted and increased self-interested behavioral intention and behavior. Increased psychological entitlement acted as a mediator in this process (Studies 1–5). Furthermore, the publicity of the act was a boundary condition for the effect of attractiveness on self-interested behavior (Study 5). We have discussed theoretical and practical implications.
Middle-age mortality increases among non-Hispanic Whites from 1992 to 2018 are driven almost entirely by the bottom 10 percent of the education distribution
Novosad, Paul, Charlie Rafkin and Sam Asher. 2022. "Mortality Change among Less Educated Americans." American Economic Journal: Applied Economics, 14(4):1-34. DOI: 10.1257/app.20190297
Abstract: Measurements of mortality change among less educated Americans can be biased because the least educated groups (e.g., dropouts) become smaller and more negatively selected over time. We show that mortality changes at constant education percentiles can be bounded with minimal assumptions. Middle-age mortality increases among non-Hispanic Whites from 1992 to 2018 are driven almost entirely by the bottom 10 percent of the education distribution. Drivers of mortality change differ substantially across groups. Deaths of despair explain most of the mortality change among young non-Hispanic Whites, but less among older Whites and non-Hispanic Blacks. Our bounds are applicable in many other contexts.
Sweden: More time served in prison reduces mortality
Hjalmarsson, Randi and Matthew J. Lindquist. 2022. "The Health Effects of Prison." American Economic Journal: Applied Economics, 14(4):234-70. DOI: 10.1257/app.20200615
Abstract: This paper studies the health effects of Swedish prison reforms that held sentences constant but increased the share of time inmates had to serve. The increased time served did not harm post-release health and actually reduced mortality risk. We find especially large decreases in mortality for offenders not previously incarcerated, younger offenders, and those more attached to the labor market. Risk of suicide and circulatory death fell for inmates with mental health problems and older inmates, respectively. In-prison health care utilization and program participation increased with time served, suggesting health care treatment and services as the key mechanism for mortality declines.
Wednesday, September 28, 2022
People placing themselves at the political extremes offer much less accurate estimates of the economy
The delusive economy: how information and affect colour perceptions of national economic performance. Lukas Linsi, Daniel Mügge & Ana Carillo-López. Acta Politica, Sep 27 2022. https://rd.springer.com/article/10.1057/s41269-022-00258-30
Abstract: Economic knowledge plays a central role in many theories of political behavior. But empirical studies have found many citizens to be poorly informed about the official state of the economy. Analyzing two waves of the Eurobarometer database, we re-examine the distribution of public knowledge of three macroeconomic indicators in two dozen European countries. Respondents with high income and education give more accurate estimates than others, in line with previous studies. As we show, however, such differences in knowledge do not only reflect varying levels of information. People’s estimates are also shaped by affective dynamics, in particular a more pessimistic outlook that leads to overestimation of official unemployment and inflation (but not growth) figures. We find that emotive factors can bias inflation and unemployment estimates of respondents who find themselves in a privileged economic situation in a direction that incidentally also makes them more accurate, even though respondents are not necessarily being better informed. In real-world politics, official economic statistics thus do not function as a shared information backdrop that could buttress the quality of public deliberation. Instead, knowledge of them is itself driven by personal socio-economic circumstances.
---
Robustness checks
To probe the robustness of these results, Appendix Table 8 re-estimates the results from Table 2 in a model with country fixed-effects instead of random intercepts. The results are nearly identical. Taking into account the sharp decrease in ‘don’t know’ responses from the first to the second wave (shown in Table 1), we also compare the results of the 2007 survey and the 2015 survey separately in Appendix Tables 9 and 10. Encouragingly, the results are very similar in substantive terms, indicating that the increase in responses after the crisis does not drive our findings about the general dynamics at play. One interesting difference concerns ideological distance from the incumbent government: the estimates of ideologically distant respondents appear clearly less accurate and more systematically biased in the more recent 2015 wave than in the pre-crisis survey, in line with observations about political polarization and growing misinformation on the political fringes (‘echo chamber’ effects) in recent years (Matteo et al. 2021). Otherwise the patterns in either of the waves alone are largely identical to the pooled results.
Conclusion
What shapes what people know about the economy? How do they relate to official economic statistics? To what degree can such data count as a shared informational background to public deliberations and the formation of individual economic assessments and preferences?
On the whole, our findings are not encouraging. In the samples of the two large-scale Eurobarometer waves we have examined, less than a third of respondents across Europe offer an estimate of the unemployment, inflation or growth rate that lies within two percentage points of the actual official figure. In that sense, public quantitative economic knowledge is wanting. Furthermore, we have found that (the lack of) such knowledge is not randomly distributed. All else equal, socio-economic insiders—such as highly educated, financially comfortable and politically centrist males—are most likely to be aware of official macroeconomic statistics. The further we move away from this privileged subgroup, the worse estimates become.
Why is that so? Our analysis suggests that it is due to a confluence of informational and affective dynamics. On the one hand, higher levels of education and an active interest in political affairs are associated with better (generally lower) estimates across the three indicators. But such informational dynamics face limitations in explaining the distribution of the (lack of) public economic knowledge on their own. For instance, we find little evidence for the idea that the accuracy of people’s information is a function of how relevant particular knowledge might be to them. There is relatively little evidence, for example, that people in socio-economic positions or employment categories for whom particular information might be especially useful also report it more accurately. Personal economic and political gloom strongly affects what people think they know about the economy—not just general assessments, but actual numbers they are willing to attach to economic conditions. For labor markets, general pessimism about the future translates into higher unemployment estimates here and now. Personal economic distress equally lets people report much higher unemployment and inflation rates than is true for people with fewer economic worries. At least in part, the higher accuracy of economic insiders’ estimates is rooted in their positive economic outlook—their optimism—rather than better knowledge per se. Across the board, we find a clear tendency, in other words, to extrapolate from one’s personal situation.
Political gloom, too, colors subjective economic knowledge. People placing themselves at the political extremes, and hence dissatisfied with the status quo, offer much less accurate estimates across the board than others, suggesting that they tune out of official information channels about economic conditions. The evidence suggests, even if less strongly so, that this results in a pessimistic bias: people at the political extremes think they know economic conditions to be worse. The same holds for citizens who distrust statistics more generally. Here too we find a strong correlation with more pessimistic economic estimates.
These findings upend models of economic opinion formation that theorize economic information as an exogenous input. At first sight, this does not augur well for deliberative democracy, which thrives on the availability of intersubjectively shared common ground: the facts everyone can agree on. (During the past years, American politics has offered a worrying illustration of what happens when that common ground crumbles.) At the same time, our analysis does not assume that official figures offer accurate or even universally useful assessments of economic conditions. They differ enough across socio-economic classes, regions and individuals to justify doubts about just how meaningful “national economic conditions” are for citizens (cf. Jacobs et al. 2021). In that view, it may be understandable after all that people are less invested in such information than common imageries would expect. Either way, our findings underline that statistics are far from the objective economic yardsticks that their champions all too often still hold them to be.
Tuesday, September 27, 2022
White, Male, and Angry: A Reputation-based Rationale
Wolton, Stephane, White, Male, and Angry: A Reputation-based Rationale (September 1, 2022). SSRN http://dx.doi.org/10.2139/ssrn.4206475
Abstract: From the bottom to the top of society, white men are angry. This paper provides a reputation-based rationale for this anger. Individuals care about their social reputation and engage in belief-motivated reasoning. In the presence of uncertainty, white men tend to have too high an opinion of their group, whether they belong to the elite or not. When new information reveals that the elite is biased in favor of white men, the reputation of all white men decreases, causing a payoff loss and the anger that comes with it. I also show how policies in favor of disadvantaged groups can be supported by some white men and opposed by some individuals from the minority when social reputation is taken into account. Reducing white men's privileges can have a very different effect than disclosing the advantage this group enjoys.
Keywords: Privileges, Affirmative Action, Discrimination, Bias
Gay males are less likely to be characterized by overweight or obesity than are straight males
Are Gay Men More Fit? Obesity and Overweight Differences Among Gay and Straight Men. Sharon Baker-Hughes & Dudley L. Poston Jr. Chapter in International Handbook of the Demography of Obesity pp 273–285. September 22 2022. https://link.springer.com/chapter/10.1007/978-3-031-10936-2_16
Abstract: Despite an expansive literature in the last few decades on the qualities and characteristics of gay and straight men, research exploring the prevalence of obesity and overweight in the gay male population is limited, especially comparing them to the straight male population. In this chapter, we examine the distribution of weight status (underweight, normal weight, overweight, obese) in gay men. We also examine the likelihood of overweight or obesity in gay males compared to heterosexual males. We find that gay males are less likely to be characterized by overweight or obesity than are straight males. Our empirical findings are consistent with those in the limited literature about the prevalence of obesity and overweight in the gay and straight male populations.
As morning turns to evening, engagement on Twitter shifts away from virtue and toward vice content (celebrity gossip, food, etc.)
Tweets We Like Aren’t Alike: Time of Day Affects Engagement with Vice and Virtue Tweets. Ozum Zor, Kihyun Hannah Kim, Ashwani Monga. Journal of Consumer Research, Volume 49, Issue 3, October 2022, Pages 473–495, https://doi.org/10.1093/jcr/ucab072
Abstract: Consumers are increasingly engaging with content on social media platforms, such as by “following” Twitter accounts and “liking” tweets. How does their engagement change through the day for vice content offering immediate gratification versus virtue content offering long-term knowledge benefits? Examining when (morning vs. evening) engagement happens with which content (vice vs. virtue), the current research reveals a time-of-day asymmetry. As morning turns to evening, engagement shifts away from virtue and toward vice content. This asymmetry is documented in three studies using actual Twitter data—millions of data points collected every 30 minutes over long periods of time—and one study using an experimental setting. Consistent with a process of self-control failure, one of the Twitter data studies shows a theory-driven moderation of the asymmetry, and the experiment shows mediation via self-control. However, multiple processes are likely at play, as time does not unfold in isolation during a day, but co-occurs with the unfolding of multiple events. These results provide new insights into social media engagement and guide practitioners on when to post which content.
Keywords: time of day, vice, virtue, content engagement, self-control failure, Twitter
Monday, September 26, 2022
Setting ambitious goals is a proven strategy for improving performance, but those with highly ambitious goals (and those with unambitious goals) were seen as less warm and as offering less relationship potential
Interpersonal consequences of conveying goal ambition. Sara Wingrove, Gráinne M. Fitzsimons. Organizational Behavior and Human Decision Processes, Volume 172, September 2022, 104182. https://doi.org/10.1016/j.obhdp.2022.104182
Highlights
• Ambition influences interpersonal expectations of warmth and relationship potential.
• Interpersonal expectations are driven by perceived goal supportiveness.
• Unambitious and highly ambitious goals both signal lower goal supportiveness.
• Moderately ambitious goals are evaluated the best interpersonally.
Abstract: Setting ambitious goals is a proven strategy for improving performance, but we suggest it may have interpersonal costs. We predict that relative to those with moderately ambitious goals, those with highly ambitious goals (and those with unambitious goals) will receive more negative interpersonal evaluations, being seen as less warm and as offering less relationship potential. Thirteen studies including nine preregistered experiments, three preregistered replications, and one archival analysis of graduate school applications (total N = 6,620) test these hypotheses. Across career, diet, fitness, savings, and academic goals, we found a robust effect of ambition on judgments, such that moderately ambitious goals led to the most consistently positive interpersonal expectations. To understand this phenomenon, we consider how ambition influences judgments of investment in one’s own goals as opposed to supportiveness for other people’s goals and explore expectations about goal supportiveness as one mechanism through which ambition may influence interpersonal judgments.
Keywords: Goals, Ambition, Interpersonal perception, Attributions
Sunday, September 25, 2022
Sexual Repertoire, Duration of Partnered Sex, Sexual Pleasure, and Orgasm: A US Nationally Representative Survey of Adults show that while women and men reported a similar actual duration of sex, men wished it to last longer
Sexual Repertoire, Duration of Partnered Sex, Sexual Pleasure, and Orgasm: Findings from a US Nationally Representative Survey of Adults. Debby Herbenick, Tsung-chieh Fu & Callie Patterson. Journal of Sex & Marital Therapy, Sep 23 2022. https://doi.org/10.1080/0092623X.2022.2126417
Abstract: In a confidential U.S. nationally representative survey of 2,525 adults (1300 women, 1225 men), we examined participants’ event-level sexual behaviors, predictors of pleasure and orgasm, and perceived actual and ideal duration of sex, by gender and age. Event-level kissing, cuddling, vaginal intercourse, and oral sex were prevalent. Sexual choking was more prevalent among adults under 40. While women and men reported a similar actual duration of sex, men reported a longer ideal duration. Participants with same-sex partners reported a longer ideal duration than those with other-sex partners. Finally, findings show that gendered sexual inequities related to pleasure and orgasm persist.
Credence to assign to philosophical claims that were formed without any knowledge of the current philosophical debate & little or no knowledge of the relevant empirical or scientific data
The end of history. Hanno Sauer. Inquiry, Sep 19 2022. https://doi.org/10.1080/0020174X.2022.2124542
Abstract: What credence should we assign to philosophical claims that were formed without any knowledge of the current state of the art of the philosophical debate and little or no knowledge of the relevant empirical or scientific data? Very little or none. Yet when we engage with the history of philosophy, this is often exactly what we do [sic; i.e., we assign such credence]. In this paper, I argue that studying the history of philosophy is philosophically unhelpful. The epistemic aims of philosophy, if there are any, are frustrated by engaging with the history of philosophy, because we have little reason to think that the claims made by history’s great philosophers would survive closer scrutiny today. First, I review the case for philosophical historiography and show how it falls short. I then present several arguments for skepticism about the philosophical value of engaging with the history of philosophy and offer an explanation for why philosophical historiography would seem to make sense even if it didn’t.
Keywords: History of philosophy, metaphilosophy, philosophical methodology, social epistemology, epistemic peerhood
Consider Plato’s or Rousseau’s evaluation of the virtues and vices of democracy. Here is a (non-exhaustive) list of evidence and theories that were unavailable to them at the time:
Historical experiences with developed democracies
Empirical evidence regarding democratic movements in developing countries
Various formal theorems regarding collective decision making and preference aggregation, such as the Condorcet jury theorem, Arrow's impossibility results, the Hong-Page theorem, the median voter theorem, the miracle of aggregation, etc.
Existing studies on voter behavior, polarization, deliberation, information
Public choice economics, incl. rational irrationality, democratic realism
The whole subsequent debate on their own arguments
[…]
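To give a flavor of the formal results on this list, the Condorcet jury theorem states that if each voter is independently correct with probability p > 1/2, the probability that a majority vote reaches the correct verdict grows toward 1 as the group grows. A minimal sketch (an illustration of the theorem only, not taken from the paper):

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct verdict (n odd, so no ties)."""
    need = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(need, n + 1))

# With p = 0.6, a lone voter is right 60% of the time, but competence
# compounds quickly as the jury grows; with p < 0.5, majorities do worse.
```

The symmetric result, that below-chance voters make majorities *worse*, is one of the facts about preference aggregation that Plato and Rousseau could not have stated precisely.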
When it comes to people currently alive, we would steeply discount the merits of the contribution of any philosopher whose work was utterly uninformed by the concepts, theories and evidence just mentioned (and whatever other items belong on this list). It is not clear why the great philosophers of the past should not be subjected to the same standard. (Bear in mind that time and attention are severely limited resources. Therefore, every decision we make about whose work to dedicate our time and attention to faces important trade-offs.)
The nature/nurture debate in moral psychology illustrates the same point. Philosophers have long discussed whether there is an innate moral faculty, and what its content may consist in. Now consider which theories and evidence were unavailable to historical authors such as Hume or Kant when they developed their views on the topic, and compare this to a recent contribution to the debate (Nichols et al. 2016):
Linguistic corpus data
Evolutionary psychology
Universal moral grammar theory
Sophisticated statistical methods
Bayesian formal modeling
250 years of the nature/nurture debate
250 years of subsequent debates on Hume or Kant
[…]
Finally, consider Hobbes’ justification of political authority in terms of how it allows us to avoid the unpleasantness of the state of nature. Here are some concepts and theories that were not available to him when he devised his arguments:
Utility functions
Nash equilibria
Dominant strategy
Backward induction
Behavioral economics
Experimental game theory
Biological evidence on the adaptivity of cooperation
Empirical evidence regarding life in hunter/gatherer societies
Cross-cultural data regarding life in contemporary tribal societies
[…]
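Several of the game-theoretic items on this list can be made concrete in a few lines. In a prisoner's-dilemma model of the state of nature (an illustrative sketch with invented payoffs, not Hobbes's own argument), mutual defection is the unique Nash equilibrium even though mutual cooperation pays both players more:

```python
from itertools import product

# Hypothetical "state of nature" payoffs: a standard prisoner's dilemma.
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
actions = ["cooperate", "defect"]

def pure_nash(payoffs, actions):
    """Enumerate pure-strategy Nash equilibria: profiles where neither
    player can gain by unilaterally switching actions."""
    eqs = []
    for a, b in product(actions, actions):
        best_a = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions)
        best_b = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions)
        if best_a and best_b:
            eqs.append((a, b))
    return eqs
```

Running `pure_nash(payoffs, actions)` yields only `("defect", "defect")`: precisely the Hobbesian predicament, stated in a vocabulary (dominant strategies, equilibria) unavailable to Hobbes.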
Again, when it comes to deciding whose philosophical work to devote our time and attention to, any person who had no knowledge whatsoever of the above items would be a dubious choice.
A version of this problem that is somewhat more specific to moral philosophy is that in ethics, it is often important not to assign disproportionate testimonial weight to people whom we have good reason to suspect harbored deeply objectionable attitudes or publicly expressed moral beliefs we have reason to deem unjustified and/or morally odious. Personally, I have made a habit of not heeding the ethical advice of Adolf Eichmann, Ted Bundy, and various of my family members. But upon looking at the moral views held by many of the most prominent authors in the history of philosophy, one often cannot help but shudder: Plato advocated abolishing the family, violently if need be; Aristotle defended (a version of) slavery as natural; Locke advocated religious toleration, only to exclude atheists from the social contract; Kant argued that masturbation is one of the gravest moral transgressions there is; Hegel claimed that it is an a priori truth that the death penalty is morally obligatory, and indeed a form of respect towards the executed; the list of historical philosophers who held sexist, racist and other discriminatory views would be too long to recount here.
Saturday, September 24, 2022
Acute climate risks in the financial system: 'top-down' approaches are likely to be flawed when applied at granular spatial and temporal scales, as the Network of Central Banks and Supervisors for Greening the Financial System does
Pitman AJ; Fiedler T; Ranger N; Jakob C; Ridder N; Perkins-Kirkpatrick S; Wood N; Abramowitz G, Aug 2022, 'Acute climate risks in the financial system: examining the utility of climate model projections', Environmental Research: Climate, vol. 1, pp. 025002 - 025002, http://dx.doi.org/10.1088/2752-5295/ac856f
Abstract: Efforts to assess risks to the financial system associated with climate change are growing. These commonly combine the use of integrated assessment models to obtain possible changes in global mean temperature (GMT) and then use coupled climate models to map those changes onto finer spatial scales to estimate changes in other variables. Other methods use data mined from 'ensembles of opportunity' such as the Coupled Model Intercomparison Project (CMIP). Several challenges with current approaches have been identified. Here, we focus on demonstrating the issues inherent in applying global 'top-down' climate scenarios to explore financial risks at geographical scales of relevance to financial institutions (e.g. city-scale). We use data mined from the CMIP to determine the degree to which estimates of GMT can be used to estimate changes in the annual extremes of temperature and rainfall, two compound events (heatwaves and drought, and extreme rain and strong winds), and whether the emission scenario provides insights into the change in the 20, 50 and 100 year return values for temperature and rainfall. We show that GMT provides little insight into how acute risks likely material to the financial sector ('material extremes') will change at a city-scale. We conclude that 'top-down' approaches are likely to be flawed when applied at a granular scale, and that there are risks in employing the approaches used by, for example, the Network of Central Banks and Supervisors for Greening the Financial System. Most fundamentally, uncertainty associated with projections of future climate extremes must be propagated through to estimating risk. We strongly encourage a review of existing top-down approaches before they develop into de facto standards and note that existing approaches that use a 'bottom-up' strategy (e.g. catastrophe modelling and storylines) are more likely to enable a robust assessment of material risk.
4. Discussion and conclusions
We welcome the initiatives within the global financial system to examine acute risks associated with physical climate change and we strongly concur that acute risks associated with weather and climate threaten elements of the financial system. Using physical climate models to examine large-scale risks or as guides for scenario or storyline planning is useful, and using reduced complexity models such as IAMs to develop large ensembles of how GMT responds to emission scenarios is well established. Our analysis does not examine whether acute risks are material; rather, we examine the assumption, within methodologies including but not limited to NGFS, that large ensembles of GMT can be used to inform acute climate risk at spatial scales well below the sub-regional scale.
The NGFS methodology links large ensembles of GMT, via ISIMIP, to local and regional-scale climate risk. The methods used by NGFS to create large ensembles of GMT are not in question, nor are the climate models used in ISIMIP which have considerable validity for the large-scale assessment of impacts of climate change. The issue is the link implied within the NGFS methodology that translates GMT, through ISIMIP, to a granular level of physical climate risk which, in reality, is generated through climate-induced weather-scales and weather-related extremes. This link depends on the patterns simulated by the ISIMIP models, balancing the thermodynamic and dynamic responses, and their capacity to reflect the correlations between GMT and material extremes at a granular scale.
Our results show that irrespective of the capacity to derive a distribution for possible changes in GMT, and however well this distribution samples uncertainty, the methods used to link GMT to local, i.e. city-scale, annual extremes of rainfall and wind, or the return periods of two compound events, or the 1 in 20, 1 in 50 and 1 in 100 year rainfall or temperature extremes are deeply uncertain. Whether the ISIMIP models or CMIP models are used, the translation of GMT into spatial expressions of extremes leads to uncertainty not merely in the magnitude of change, but in the sign of many changes. The uncertainty dwarfs any signal from emission scenarios, at least over the next 50 years. There are strategies to reduce the apparent uncertainty in projected extremes by sampling climate models according to skill or independence, but whether this reduces actual uncertainty, thereby enabling more robust decisions on managing risk, is unknown. Before we continue, we emphasise that the conclusion that there is no useful link between GMT and material risks does not mean that climate models have no role to play in assessing the impact of climate change on financial risk.
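For concreteness, the 1-in-N-year return value discussed here is, in its simplest empirical form, roughly the (1 - 1/N) quantile of a series of annual maxima. The sketch below uses synthetic rainfall maxima and a naive empirical quantile; in practice one would fit an extreme-value distribution (e.g. a GEV) rather than rely on raw order statistics:

```python
import random

random.seed(1)
# Hypothetical 100-year record of annual maximum daily rainfall (mm)
annual_max = [random.gauss(80, 20) for _ in range(100)]

def empirical_return_level(maxima, n_years):
    """Empirical 1-in-N-year level: the value exceeded on average once
    every n_years years in the record (roughly the (1 - 1/N) quantile)."""
    ordered = sorted(maxima)
    exceedances = max(1, round(len(ordered) / n_years))
    return ordered[len(ordered) - exceedances]
```

Note that with only 100 years of data, the 1-in-100-year estimate is simply the largest observation in the record, which is one reason such tail estimates are so uncertain.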
One of the advantages of physical climate models, including those used in ISIMIP, is that they provide easily accessible and quantitative information. Within CMIP6 for example, which includes newer models than ISIMIP, a multi-petabyte store of open access climate change information exists. This is obviously very attractive to groups seeking to build approaches, or undertake analyses, that can be applied anywhere in the world. However, there are two fundamental principles to consider in using any physical climate modelling system. First, accuracy and precision are not the same thing; physical climate models are very precise, but not necessarily accurate and may not be accurate for problems they were not designed for. Second, uncertainty cannot be ignored; deep uncertainty exists in climate projections (Lempert et al 2013) and affects both the magnitude and sign of the change in most physical risks and very probably most material risks. This cannot be ignored because the consequences are not easy to predict. Ranger et al (2022) describe, for example, the stress testing run by the Bank of England (2020), noting that the input data is largely sourced via the NGFS methodology and that no uncertainty information is provided. From a physical climate projections perspective this is simply flawed. Refer to figures 2(d) and 3(a) and take any value of GMT and select the associated wind speed change or return period. Depending on which CMIP6 model and emission scenario is selected, increases, decreases or no change can be obtained. It is deeply misleading to select a single value from the ranges shown in figures 2 and 3 without also accounting for the uncertainty. Further, despite claims within NGFS (Bertram et al 2021) that the IAM used (MAGICC6) is designed 'to capture the full GMT uncertainty for different emissions scenarios', and accepting MAGICC6 is a legitimate tool to use, it is misleading to suggest it captures the full range of uncertainty.
It is not known, and it is probably unknowable, to what degree any IAM captures the full range of uncertainty. The ISIMIP project is not designed to select global models that capture uncertainty, or independence (Abramowitz and Bishop 2014), or particularly good or bad models. It is simply an ensemble of opportunity (Tebaldi and Knutti 2007) with strengths and weaknesses. The ISIMIP models are legitimate tools to use, but they are quite old model versions, quite coarse in terms of spatial resolution and only six models complete the ensemble. Referring to the uncertainty bars shown in figures 4–7, selecting six CMIP models would reduce the apparent uncertainty because of the smaller sample size, but it would not reduce the actual uncertainty. It is noteworthy here that even the full CMIP6 ensemble, which now includes over 50 models, samples an unknown fraction of the true uncertainty. We also note that assessing material risks using CMIP6 (with SSPs) is unlikely to lead to more robust conclusions than using CMIP5 (with RCPs). While climate models are improving, at the spatial scales of individual cities and on time scales of decades both CMIP5 and CMIP6 provide projections that cannot be clearly differentiated.
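The distinction between apparent and actual uncertainty is easy to demonstrate: subsampling an ensemble systematically shrinks the visible spread without changing the underlying distribution at all. A toy sketch with entirely synthetic model projections (all numbers invented for illustration):

```python
import random
import statistics

random.seed(0)
# 50 hypothetical model projections of % change in a city-scale extreme;
# the spread straddles zero, so even the sign of the change is uncertain.
full_ensemble = [random.gauss(2.0, 15.0) for _ in range(50)]

def spread(values):
    return max(values) - min(values)

# Mean spread across many random 6-model subsets ("ensembles of opportunity")
subset_spreads = [spread(random.sample(full_ensemble, 6)) for _ in range(2000)]

# A 6-model subset understates the 50-model spread on average, yet the
# true uncertainty of the underlying distribution has not changed.
```

Even the full 50-member spread is itself only a sample of the (unknown) true uncertainty, which is the paper's point about CMIP6.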
We acknowledge that many of these issues are clearly highlighted in the literature. Bertram et al (2021) notes that 'findings from the Climate Impact Explorer should thus be used to supplement rather than replace national or regional risk assessments'. They further note that 'uncertainty in the climate sensitivity is sampled by considering four different GCMs', and that 'several impact models are used to sample the uncertainty'. Bertram et al (2021) also notes:
Following established approaches in the scientific literature (see e.g. James et al 2017 ), we assess impact indicators as a function of the GMT level. This means we assume that a given GMT level will on average lead to the same change in that indicator even if it is reached at two different moments in time in two different emission scenarios. This assumption is generally well justified and differences are small compared to the spread across changes projected by different models (Herger et al 2015 ).
We strongly agree with these statements and emphasize the 'on average' and 'generally well justified'. The problem is, however, that while these approaches are well justified on average, the acute physical risks and the material extremes associated with regional-scale and finer scale climate change are not well described by averages. After all, the financial sector seeks to know which specific regions are most at risk, not that a fraction of the globe is at increased risk. If financial risk is aggregated to a continent, systematic errors associated with these assumptions might be averaged out, but the NGFS methodology is being used at a granularity well below that examined in this paper. This involves very significant uncertainties, and determining whether climate change results in a material extreme is country, economy and business specific. At these scales, and in the context of material extremes associated with climate-induced weather-scale phenomena, the way in which the NGFS methodology is being employed is very likely misleading. There is a key implication here that is deeply concerning:
If all Central Banks (or the over 100 members of NGFS) use a methodology that is systemically biased, this could itself lead to a major systemic risk to the global financial system.
The current NGFS scenarios do not represent the range of plausible climate outcomes possible at a country level—a systematic bias—and most banks, insurers and investors are using these scenarios without fully accounting for uncertainty. Misuse or misunderstanding of what climate models tell us, and assumptions that products like NGFS have utility at sub-national scales could make the risks we are trying to avoid through the NGFS scenarios worse. Rectifying this is important and requires an open collaboration between banks and the scientific community to develop scenarios appropriate for stress testing.
The most fundamental issue with assessing financial risk associated with acute physical risk relates to the acknowledgement that these risks are associated with weather, usually locally, and usually (but not necessarily) statistically extreme. Global climate models do not resolve weather scales, are not appropriate for local scales and may not capture material extremes; their use for this purpose is highly questionable. While using the quantitative information from climate models is tempting and provides a considerable amount of apparently precise information, failure to fully represent uncertainty leads to false confidence. By contrast, there are well-known ways to decouple assessments of acute physical risks from climate model quantitative information. Using climate models to inform scenarios, storylines (Shepherd 2019, Jack et al 2020) and stress testing, or using climate models to modify the statistics represented in current-day catastrophe modelling can all help break the false assumption that the numerical precision in climate models equates to accuracy at a granular level. In many ways, this echoes guidance from Schinko et al (2017) to consider models as tools to explore a system as distinct from predicting a system, or Saravanan (2022) who explores the need to take climate models seriously, but not literally. Given the material risks from climate change are commonly the tail risks, more use of catastrophe modelling might lead to decision making that builds more resilient systems. However, some material risks are likely associated with long periods of drizzle, or of high cloud cover and still winds. These are events associated with persistence, which climate models are known to capture with relatively low skill (see for example Kumar et al 2013).
The relative ease with which large ensembles using IAMs can be generated and linked to acute risk at sub-regional scales is understandably attractive for large financial institutions, central banks and financial regulators. It is therefore unlikely that these will be wholly replaced by an alternative approach. This relative ease, however, hides immense uncertainty that is likely material, and that risks misleading an institution or regulator, exposing entities to litigation, and directly challenging centuries of accounting and assurance practice. We suggest three immediate actions:
- (a) The NGFS method is likely misleading in determining granular-level acute or material risks to the financial sector; we strongly advise that it be openly critiqued and not become a de facto standard by default.
- (b) No products or methods should be employed that fail to properly account for uncertainty, and how uncertainty is estimated needs very careful consideration. There is no evidence that merely adding more climate models, or more estimates of GMT, reduces uncertainty.
- (c) There is a rich history of assessing risk at the local scale (Ranger et al 2022). This 'bottom-up' assessment can utilize historical climate data, existing risk estimates, analysis of the vulnerability of an entity to these acute physical risks, stress testing of investment portfolios and so on. The historical data can be perturbed using expert judgement based on multiple lines of evidence, including climate models. A financial institution should confront the 'top-down' methodologies proposed by regulators with bottom-up assessments of their acute physical risks and review how different the resulting estimates are.
Perhaps the single most important point here is that while the 'top-down' approach is likely to become the de facto standard for assessing a financial institution's exposure to climate change, this should only be done in conjunction with alternative 'bottom-up' methods.
Finally, we note that climate science and the science of climate projections is evolving rapidly. Further, regulation and disclosure linked with climate risk is developing rapidly. A company with the ability to undertake, at least to some degree, a bottom-up assessment of material risks, and to engage with external parties from a position of understanding, will be well positioned as climate projections change. A company with internal capability will be more able to ask the right questions, avoid buying risk advice that is misleading, and be able to identify opportunities associated with climate change more quickly. While building some internal capability might seem confronting and expensive, building future strategies on information that is not understood and is potentially misleading is likely more so, and quite possibly exposes the global financial system to systemic risks of its own making.
Self-preferencing shouldn't be an antitrust offense
Antitrust Unchained: The EU’s Case Against Self-Preferencing. Giuseppe Colangelo. International Center for Law and Economics Working Paper No. 2022-09-22, Sep 22 2022. https://laweconcenter.org/resource/antitrust-unchained-the-eus-case-against-self-preferencing
Abstract: Whether self-preferencing is inherently anticompetitive has emerged as perhaps the core question in competition policy for digital markets. Large online platforms that act as gatekeepers of their ecosystems and engage in dual-mode intermediation have been accused of taking advantage of this hybrid business model to grant preferential treatment to their own products and services. In Europe, courts and competition authorities have advanced new antitrust theories of harm that target such practices, as have various legislative initiatives around the world. In the aftermath of the European General Court’s decision in Google Shopping, however, it is important to weigh the risk that labeling self-preferencing as per se anticompetitive may merely allow antitrust enforcers to bypass the legal standards and evidentiary burdens typically required to prove anticompetitive behavior. This paper investigates whether and to what extent self-preferencing should be considered a new standalone offense under European competition law.
Envy seems a rather stable disposition (both at the global level and within specific envy domains)
Erz, Elina, and Katrin Rentzsch. 2022. “Stability and Change in Dispositional Envy: Longitudinal Evidence on Envy as a Stable Trait.” PsyArXiv. September 20. doi:10.1177/08902070221128137
Abstract: Dispositional envy has been conceptualized as an emotional trait that varies across comparison domains (e.g., attraction, competence, wealth). Despite its prevalence and potentially detrimental effects, little is known about stability and change in dispositional envy across time due to a lack of longitudinal data. The goal of the present research was to close this gap by investigating stability and developmental change in dispositional envy over time. In a preregistered longitudinal study across 6 years, we analyzed data from N = 1,229 German participants (n = 510-634 per wave) with a mean age of 47.0 years at intake (SD = 12.4, range 18 to 88). Results from latent factor models revealed that both global and domain-specific dispositional envy were stable across 6 years in terms of their rank order and mean levels, with stability coefficients similar to those of other trait measures reported in literature. Moreover, a substantial amount of variance in global and domain-specific dispositional envy was accounted for by a stable trait factor. Results thus provide evidence for a stable disposition toward the experience of envy both at the global level and within specific envy domains. The present findings have important theoretical and practical implications for the stability and development of dispositional envy in adulthood and advance the understanding of emotional traits in general.
Specific cognitive abilities (SCA) are 56% heritable, similar to general intelligence, g; some SCA are significantly more or less heritable than others, 39-64%; SCA do not show the dramatic developmental increase in heritability seen for g
The genetics of specific cognitive abilities. Francesca Procopioa et al. Intelligence, Volume 95, November–December 2022, 101689. https://doi.org/10.1016/j.intell.2022.101689
Highlights
• Specific cognitive abilities (SCA) are 56% heritable, similar to g.
• Some SCA are significantly more heritable than others, 39% to 64%.
• Independent of g (‘SCA.g’), SCA remain substantially heritable (∼50%).
• SCA do not show the dramatic developmental increase in heritability seen for g.
• Genomic research on SCA.g is needed to create profiles of strengths and weaknesses.
Abstract: Most research on individual differences in performance on tests of cognitive ability focuses on general cognitive ability (g), the highest level in the three-level Cattell-Horn-Carroll (CHC) hierarchical model of intelligence. About 50% of the variance of g is due to inherited DNA differences (heritability) which increases across development. Much less is known about the genetics of the middle level of the CHC model, which includes 16 broad factors such as fluid reasoning, processing speed, and quantitative knowledge. We provide a meta-analytic review of 747,567 monozygotic-dizygotic twin comparisons from 77 publications for these middle-level factors, which we refer to as specific cognitive abilities (SCA), even though these factors are not independent of g. Twin comparisons were available for 11 of the 16 CHC domains. The average heritability across all SCA is 56%, similar to that of g. However, there is substantial differential heritability across SCA and SCA do not show the developmental increase in heritability seen for g. We also investigated SCA independent of g (SCA.g). A surprising finding is that SCA.g remain substantially heritable (53% on average), even though 25% of the variance of SCA that covaries with g has been removed. Our review highlights the need for more research on SCA and especially on SCA.g. Despite limitations of SCA research, our review frames expectations for genomic research that will use polygenic scores to predict SCA and SCA.g. Genome-wide association studies of SCA.g are needed to create polygenic scores that can predict SCA profiles of cognitive abilities and disabilities independent of g.
Keywords: Specific cognitive ability; intelligence; meta-analysis; twin study; heritability
4. Discussion
Although g is one of the most powerful constructs in the behavioural sciences (Jensen, 1998), there is much to learn about the genetics of cognitive abilities beyond g. Our meta-analysis of 747,567 twin comparisons yielded four surprising findings. The first is that the heritability of SCA is similar to that of g: the heritability of g is about 50% (Knopik et al., 2017), and the average heritability of SCA from our meta-analysis is 56%.
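The twin-comparison heritability estimates discussed throughout this paper rest on the logic of Falconer's classic formula: because MZ twins share essentially all their segregating genes and DZ twins about half, h2 = 2 * (r_MZ - r_DZ). A minimal sketch, where the twin correlations are hypothetical and chosen only to reproduce the 56% average:

```python
def falconer(r_mz: float, r_dz: float) -> dict:
    """Falconer's decomposition of trait variance from twin correlations:
    heritability   h2 = 2 * (r_MZ - r_DZ)
    shared env     c2 = r_MZ - h2   (equivalently 2*r_DZ - r_MZ)
    nonshared env  e2 = 1 - r_MZ
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    e2 = 1 - r_mz
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical twin correlations for one specific cognitive ability:
estimate = falconer(r_mz=0.70, r_dz=0.42)  # h2 = 0.56
```

Modern twin analyses use latent variable models rather than this simple contrast, but the formula conveys why MZ-DZ comparisons identify heritability at all.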
We focused on three additional questions: Are some SCA more heritable than others (differential heritability)? Does the heritability of SCA increase during development as it does for g? What is the heritability of SCA independent of g?
4.1. Differential heritability
We conclude that some SCA are more heritable than others. The estimates ranged from 39% for auditory processing (Ga) to 64% for quantitative knowledge (Gq) and processing speed (Gs). Our expectation that domains conceptually closer to g would have higher heritability than ones more conceptually distinct from g led us to be surprised by which SCA were most heritable.
For example, we hypothesised that acquired knowledge would be less heritable than fluid reasoning. This is because acquired knowledge is a function of experience, while fluid reasoning involves the ability to solve novel problems. To the contrary, our results indicate that acquired knowledge is the most heritable grouping of CHC domains, with an average heritability of 58%. In contrast, fluid reasoning has a comparatively low heritability estimate of 40%.
We were also surprised to find significantly large differences in heritability between SCA within the same functional grouping. For example, processing speed (Gs), one of the most heritable CHC domains, is within the functional grouping of general speed. Processing speed is defined as ‘the ability to automatically and fluently perform relatively easy or over-learned elementary cognitive tasks, especially when high mental efficiency (i.e., attention and focused concentration) is required’ (McGrew, 2009, p. 6). In contrast, reaction and decision speed (Gt), another CHC domain within the functional grouping of general speed for which twin comparisons were available, yielded one of the lowest heritabilities of 42%. It is defined as ‘the ability to make elementary decisions and/or responses (simple reaction time) or one of several elementary decisions and/or responses (complex reaction time) at the onset of simple stimuli’ (McGrew, 2009, p. 6). Why is reaction and decision speed (Gt) so much less heritable than processing speed (Gs) (42% vs 64%)? One possibility is that processing speed picks up ‘extra’ genetic influence because it involves more cognitive processing than reaction time, as suggested by their definitions. Moreover, Schneider and McGrew (2018) propose a hierarchy of speeded abilities (Kaufman, 2018, p. 108) in which Gs (which they call broad cognitive speed) has a higher degree of cognitive complexity than Gt (broad decision speed). But this would not explain why processing speed is more heritable than fluid reasoning (40%), which seems to involve the highest level of cognitive processing such as problem solving and inductive and deductive reasoning.
One direction for future research is to understand why some SCA are more heritable than others. A first step in this direction is to assess the extent to which differential reliability underlies differential heritability because reliability, especially test-retest reliability rather than internal consistency, creates a ceiling for heritability. For example, the least heritable SCA is short-term memory (Gsm), for which concerns about test-retest reliability have been raised (Waters & Caplan, 2003).
If differential reliability is not a major factor in accounting for differential heritability, a substantive direction for research on SCA is to conduct multivariate genetic analyses investigating the covariance among SCA to explore the genetic architecture of SCA. This would be most profitable if these analyses also included g, as discussed below (SCA.g).
4.2. Developmental changes in SCA heritability
One of the most interesting findings about g is that its heritability increases linearly from 20% in infancy to 40% in childhood to 60% in adulthood. In contrast, SCA show average decreases in heritability from childhood to later life (column 1 in Fig. 4). Although several CHC domains show increases from early childhood (0–6 years) to middle childhood (7–11 years), this seems likely to be due at least in part to difficulties in reliably assessing cognitive abilities in the first few years of life.
It is puzzling that heritability increases developmentally for g but not for SCA because g represents what is in common among SCA. A previous meta-analysis that investigated cognitive aging found that the heritability of verbal ability, spatial ability and perceptual speed decreased after the age of around 60 (Reynolds & Finkel, 2015). While we did not find evidence for this for any of the SCA domains, we did observe the general trend of decreasing heritability for reading and writing (Grw) and visual processing (Gv) from middle childhood onwards.
We hoped to investigate the environmental hypothesis proposed by Kovas et al. (2013) to account for their finding that the heritability of literacy and numeracy SCA was consistent throughout the school years (∼65%), whereas the heritability of g increased from 38% at age 7 to 49% at age 12. They hypothesised that universal education for basic literacy and numeracy skills in the early school years reduces environmental disparities, which leads to higher heritability as compared to g, which is not a skill taught in schools.
We hoped to test this hypothesis by comparing SCA that are central to educational curricula and those that are not. For example, reading and writing (Grw), quantitative knowledge (Gq) and comprehension-knowledge (Gc) are central to all curricula, whereas other SCA are not explicitly taught in schools, such as auditory processing (Ga), fluid reasoning (Gf), processing speed (Gs), short-term memory (Gsm) and reaction and decision speed (Gt). Congruent with the Kovas et al. hypothesis, Grw, Gq and Gc yield high and stable heritabilities of about 60% during the school years. However, too few twin comparisons are available to test whether Ga, Gf, Gs, Gsm and Gt show increasing heritability during the school years.
4.3. SCA independent of g (SCA.g)
Although few SCA.g data are available, they suggest another surprising finding. In these studies, SCA independent of g are substantially heritable (53%), very similar to the heritability estimate of about 50% for SCA uncorrected for g. This finding is surprising because a quarter of the variance of SCA is lost when SCA are corrected for g. More SCA.g data are needed to assess the issues raised in our review about the influence of g on differential heritability and on developmental changes in heritability.
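The variance arithmetic behind this surprise can be made concrete. The sketch below is our own illustrative back-of-envelope calculation, not a figure from the review; only the ~25%, ~50% and 53% values come from the text.

```python
# Illustrative variance arithmetic for SCA corrected for g (SCA.g).
# Assumed from the review: g accounts for ~25% of SCA variance; SCA.g
# residual scores are ~53% heritable; uncorrected SCA are ~50% heritable.
total_var = 1.0
var_shared_with_g = 0.25                       # phenotypic variance removed with g
residual_var = total_var - var_shared_with_g   # variance of the SCA.g scores

h2_sca = 0.50     # heritability of uncorrected SCA (of total variance)
h2_sca_g = 0.53   # heritability of SCA.g (of the residual variance)

# Genetic variance independent of g, re-expressed on the total SCA scale:
genetic_var_independent_of_g = h2_sca_g * residual_var
print(genetic_var_independent_of_g)   # roughly 0.40 of total SCA variance
```

The point of the arithmetic is that the 53% estimate applies to the residual variance, so on the total scale roughly 40% of SCA variance would be genetic yet independent of g, nearly as much as for uncorrected SCA.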
Although more data on SCA.g are needed, our preliminary results are encouraging in suggesting that genetic influence on SCA does not merely reflect genetic influence on g. Although g drives much of the predictive power of cognitive abilities, it should not overshadow the potential for SCA to predict profiles of cognitive strengths and weaknesses independent of g.
An exciting aspect of these findings is their implication for research that aims to identify specific inherited DNA differences responsible for the heritability of SCA and especially SCA.g. Genome-wide association (GWA) methods can be used to assess correlations between millions of DNA variants across the genome and any trait, and these data can be used to create a polygenic score for the trait that aggregates the weighted associations into a single score for each individual (Plomin, 2018). The most powerful polygenic scores in the behavioural sciences are derived from GWA analyses of the general cognitive traits of g (Savage et al., 2018) and educational attainment (Lee et al., 2018; Okbay et al., 2022). These genomic data for g and educational attainment can be used to explore the extent to which they predict SCA independent of g and educational attainment, even when SCA were not directly measured in the GWA analyses. This approach, called GWAS-by-subtraction (Demange et al., 2021), uses genomic structural equation modeling (Grotzinger et al., 2019). We are also employing a simpler approach using polygenic scores for g and for educational attainment corrected for g, which we call GPS-by-subtraction (Procopio et al., 2021).
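The polygenic-score logic described above can be sketched in a few lines. This is a toy illustration under our own assumptions, not any published pipeline: the genotypes and per-variant weights are randomly generated, whereas real scores are built from GWA summary statistics across millions of variants.

```python
import numpy as np

rng = np.random.default_rng(0)

n_people, n_snps = 5, 8   # toy sizes; real GWA analyses use millions of SNPs
# Genotypes coded as counts of the effect allele (0, 1 or 2 per SNP).
genotypes = rng.integers(0, 3, size=(n_people, n_snps))
# Per-SNP effect weights, as would be estimated by a GWA study (random here).
weights = rng.normal(0.0, 0.1, size=n_snps)

# A polygenic score is the weighted sum of allele counts across variants,
# aggregating many tiny associations into one score per individual.
polygenic_scores = genotypes @ weights
print(polygenic_scores.shape)   # one aggregate score per person
```

GPS-by-subtraction, as described in the text, would then residualise one such score (e.g. for educational attainment) on another (for g), analogous to the phenotypic residualisation of SCA on g.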
Ultimately, we need GWA studies that directly assess SCA and especially SCA.g. Ideally, multiple measures of each SCA domain would be used and a general factor extracted, rather than relying on a single test of the domain. The problem is that GWA requires huge samples to detect the minuscule associations of the thousands of DNA variants known to contribute to the heritabilities of complex traits. The power of the polygenic scores for g and educational attainment comes from their GWA sample sizes of more than 250,000 for g and more than three million for educational attainment. It is daunting to think about creating GWA samples of this size for tested SCA as well as g in order to investigate SCA.g. However, a cost-effective solution is to create brief but psychometrically valid measures of SCA that can be administered to the millions of people participating in ongoing biobanks for whom genomic data are available. For example, a gamified 15-min test has been created for this purpose to assess verbal ability, nonverbal ability and g (Malanchini et al., 2021). This approach could be extended to assess other SCA and SCA.g.
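To see why samples of this size are needed, a standard power calculation is instructive. The arithmetic below is our own illustration (not a figure from the text), using a normal approximation, the conventional genome-wide significance threshold of p < 5 × 10⁻⁸, and an assumed single-variant effect of 0.01% of trait variance.

```python
from statistics import NormalDist

def gwa_sample_size(variance_explained, alpha=5e-8, power=0.80):
    """Approximate N needed to detect a variant explaining the given
    fraction of trait variance (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # genome-wide threshold
    z_power = NormalDist().inv_cdf(power)           # desired power
    return (z_alpha + z_power) ** 2 / variance_explained

# A plausible single-variant effect for cognitive traits: ~0.01% of variance.
print(round(gwa_sample_size(0.0001)))   # on the order of hundreds of thousands
```

Under these assumptions, detecting a single variant of this size requires roughly 400,000 participants, which is consistent with the sample sizes of the g and educational attainment GWA studies cited above.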
We conclude that SCA.g are reasonable targets for genome-wide association studies, which could enable polygenic score predictions of profiles of specific cognitive strengths and weaknesses independent of g (Plomin, 2018). For example, SCA.g polygenic scores could predict, from birth, aptitude for STEM subjects independent of g. Polygenic score profiles for SCA.g could be used to maximise children's cognitive strengths and minimise their weaknesses. Rather than waiting for problems to develop, SCA.g polygenic scores could be used to intervene to attenuate problems before they occur and help children reach their full potential.
4.4. Other issues
An interesting finding from our review is that regression-derived SCA.g scores, in which SCA are corrected phenotypically for g by creating residualised scores from the regression of SCA on g, yield substantially higher estimates of heritability than SCA.g derived from Cholesky analyses.
We suspect the difference arises because regression-derived SCA.g scores remove phenotypic covariance with g, thus removing environmental as well as genetic variance associated with g. In contrast, Cholesky-derived estimates of the heritability of SCA independent of g are calibrated to the total variance of SCA, not to the phenotypic variance of SCA after g is controlled. Regardless of the reason for the lower Cholesky-derived estimates, regression-derived SCA.g scores are likely the way that phenotypic measures of SCA will be used in phenotypic and genomic analyses, because Cholesky models involve latent variables that cannot be converted to phenotypic scores for SCA.g.
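Regression-derived SCA.g scores of the kind described here are straightforward to compute. A minimal sketch with simulated data (the simulation and variable names are ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

g = rng.normal(size=n)               # general cognitive ability scores
sca = 0.5 * g + rng.normal(size=n)   # an SCA that partly reflects g

# Regress SCA on g and keep the residuals: phenotypic SCA.g scores.
slope, intercept = np.polyfit(g, sca, deg=1)
sca_g = sca - (intercept + slope * g)

# By construction the residualised scores are uncorrelated with g,
# which removes environmental as well as genetic covariance with g.
print(abs(np.corrcoef(g, sca_g)[0, 1]) < 1e-8)
```

Because these are observed scores for each individual, they can be carried directly into genomic analyses, unlike the latent variables of a Cholesky model.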
Another finding from our review is that the heritability of SCA appears to be due to additive genetic factors. The average weighted MZ and DZ correlations across the 11 CHC domains for which twin comparisons were available were 0.72 and 0.44, respectively. This pattern of twin correlations, similar to that seen across all SCA as well as g, is consistent with the hypothesis that genetic influence on cognitive abilities is additive (Knopik et al., 2017). Additive genetic effects add up according to genetic relatedness, so that if heritability were 100%, MZ twins would correlate 1.0 and DZ twins would correlate 0.5. Because only MZ twins are identical in their inherited DNA sequence, only they capture the entirety of non-additive interactions among DNA variants; the hallmark of non-additive genetic variance for a trait is therefore a DZ correlation less than half the MZ correlation. None of the SCA show this pattern of results (Fig. 3), suggesting that genetic effects on SCA are additive.
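Plugging the review's weighted average twin correlations into Falconer's classic formulas makes the additive interpretation concrete. The decomposition below is the standard ACE arithmetic, not a model reported in the paper; only the 0.72 and 0.44 correlations come from the review.

```python
r_mz, r_dz = 0.72, 0.44   # weighted average MZ and DZ twin correlations

# Falconer's formulas decompose trait variance under an additive (ACE) model:
h2 = 2 * (r_mz - r_dz)    # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz      # shared (family) environmental variance
e2 = 1 - r_mz             # non-shared environment plus measurement error
print(round(h2, 2), round(c2, 2), round(e2, 2))   # 0.56 0.16 0.28

# Non-additivity check: a DZ correlation below half the MZ correlation
# would signal non-additive genetic effects; here it is not below half.
print(r_dz < r_mz / 2)    # False -> consistent with additive genetic effects
```

The implied heritability of about 56% agrees with the ~50% estimates discussed above, and the DZ correlation (0.44) comfortably exceeds half the MZ correlation (0.36), matching the conclusion that genetic effects on SCA are additive.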
Finding that genetic effects on SCA are additive is important for genomic research because GWA models identify the additive effects of each DNA variant and polygenic scores sum these additive effects (Plomin, 2018). If genetic effects were non-additive, it would be much more difficult to detect associations between DNA variants and SCA. The additivity of genetic effects on cognitive abilities is in part responsible for the finding that the strongest polygenic scores in the behavioural sciences are for cognitive abilities (Allegrini et al., 2019; Cheesman et al., 2017; Plomin et al., 2013).
4.5. Limitations
The usual limitations of the twin method apply, although it should be noted that twin results in the cognitive domain are supported by adoption studies (Knopik et al., 2017) and by genomic analyses (Plomin & von Stumm, 2018).
As noted earlier, a general limitation is that some CHC categories have too few studies to include in meta-analyses. This is especially the case in the developmental analyses. Power is diminished by dividing the twin comparisons into five age categories. In addition, different measures are used at different ages; even when measures with the same label are used across ages, they might measure different things. Finally, the developmental results are primarily driven by cross-sectional results from different studies. Nonetheless, longitudinal comparisons within the same study have also found no developmental change in heritability estimates for some SCA (Kovas et al., 2013).
Another limitation of this study is that there might be disagreement concerning the CHC categories to which we assigned tests. We reiterate that we used the CHC model merely as a heuristic to make some sense of the welter of tests that have been used in twin studies, not as a definitive assignment of cognitive tests to CHC categories. We hope that Supplementary Table S-3 with details about the studies and measures will allow researchers to categorise the tests differently or to focus on particular tests. This limitation is also a strength of our review in that it points to SCA for which more twin research is needed. The same could be said for other limitations of SCA twin research such as the use of different measures across studies and the absence of any twin research at some ages.
A specific limitation of SCA.g is that removing all phenotypic covariance with g might remove too much variance of SCA, as mentioned in the Introduction. A case could be made that bifactor models (Murray & Johnson, 2013) or other multivariate genetic models (Rijsdijk, Vernon, & Boomsma, 2002) would provide a more equitable distribution of variance between SCA and g indexed as a latent variable representing what is in common among SCA. However, the use of bifactor models is not straightforward (Decker, 2021). Moreover, phenotypic and genomic analyses of SCA.g are likely to use regression-derived SCA.g scores because bifactor models, like Cholesky models, involve latent variables that cannot be converted to phenotypic scores for SCA.g.
Finally, in this paper we did not investigate covariates such as average birth year of the cohort, or country of origin, nor did we examine sex differences in differential heritability or in developmental changes in heritability or SCA.g. Opposite-sex DZ twins provide a special opportunity to investigate sex differences. We have investigated these group differences in follow-up analyses (Zhou, Procopio, Rimfeld, Malanchini, & Plomin, 2022).
4.6. Directions for future research
SCA are rich territory to be explored in future research. At the most general level, no data at all are available for five of the 16 CHC broad categories, and only two of the 16 have data across the lifespan.
More specifically, the findings from our review pose key questions for future research. Why are some SCA significantly and substantially more heritable than others? How is it possible that SCA.g are as heritable as SCA? How is it possible that the heritability of g increases linearly across the lifespan, but SCA show no clear developmental trends?
Stepping back from these specific findings, for us the most far-reaching issue is how to foster GWA studies of SCA.g, so that we can eventually have polygenic scores that predict genetic profiles of cognitive abilities and disabilities and thereby help to nurture children's strengths and minimise their weaknesses.