Friday, December 6, 2019

Happiness is negatively associated with Belief in Luck, but positively associated with Belief in Personal Luckiness

Do the happy-go-lucky? Edmund R. Thompson, Gerard P. Prendergast, Gerard H. Dericks. Current Psychology, December 6, 2019. https://link.springer.com/article/10.1007/s12144-019-00554-w

Abstract: While popular aphorisms and etymologies across diverse languages suggest an intrinsic association between happiness and luck beliefs, empirically testing the existence of any potential link has historically been constrained by varying and unclear conceptualizations of luck beliefs and by their sub-optimally valid measurement. Employing Thompson and Prendergast’s (Personality and Individual Differences, 54(4), 501–506, 2013) bi-dimensional refinement of trait luck beliefs into, respectively, ‘Belief in Luck’ and ‘Belief in Personal Luckiness’, we explore the relationship between luck beliefs and a range of trait happiness measures. Our analyses (N = 844) find broadly that happiness is negatively associated with Belief in Luck, but positively associated with Belief in Personal Luckiness, although results differ somewhat depending on which measure of happiness is used. We further explore interrelationships between luck beliefs and the five-factor model of personality, finding this latter fully accounts for Belief in Luck’s negative association with happiness, with additional analyses indicating this is wholly attributable to Neuroticism alone: Neuroticism appears to be a possible mediator of Belief in Luck’s negative association with happiness. We additionally find that the five-factor model only partially attenuates Belief in Personal Luckiness’ positive association with happiness, suggesting that Belief in Personal Luckiness may be either a discrete facet of trait happiness or a personality trait in and of itself.

Keywords: Happiness; Belief in luck; Belief in personal luckiness; Five-factor personality model; Irrational beliefs

Belief in Luck and Happiness

The Belief in Luck dimension of Thompson and Prendergast’s () bidimensional model distinguishes between, on one hand, luck believers who irrationally consider luck to be a deterministic and external phenomenon with agentic qualities capable of influencing outcomes and, on the other, luck disbelievers who consider luck to be merely the product of purely stochastic and uninfluenceable chance. Thompson and Prendergast () found belief or disbelief in luck is not binary, but rather exists on a unidimensional continuum, substantiating Maltby et al.’s () suspicion that the apparently discrete beliefs they found in, respectively, good and bad luck are the product of scoring artifacts rather than separate underlying constructs.
Research to date on Belief in Luck specifically has been scant and limited to inter-item correlations without controls for possible confounding variables. Nonetheless, such correlations hint that believing in luck may be negatively correlated with affect-related measures. For example, Maltby et al. () find belief in luck correlates positively with a range of irrational beliefs and negative traits such as awfulizing and problem avoidance, and Thompson and Prendergast () find it correlates negatively with well-being. Considerable research has demonstrated more generally that irrational beliefs are linked to negative affect (Bridges and Harnish; David and Cramer; David et al.; Kassinove and Eckhardt; Rohsenow and Smith; Smith). Maltby et al. () also find that belief in luck correlates negatively with internal locus of control, while Thompson and Prendergast () find it correlates positively with the powerful others dimension of Levenson’s (1981) locus of control measure. External locus of control, with which belief in luck is commensurate, has long been empirically associated with negative affect (Abramowitz; Buddelmeyer and Powdthavee 2016; Houston; Johnson and Sarason; Yu and Fan 2016). Taken together, these findings are consonant with Maltby et al.’s () suggestion that belief in luck is a facet of irrationality linked to low personal agency, maladaptivity, and the negative affect found to be linked with these. Hence it would seem reasonable to suggest that Belief in Luck may be negatively linked with positive dimensions of affect:
  • H1. Belief in Luck will be negatively associated with happiness.

Belief in Personal Luckiness and Happiness

Thompson and Prendergast () find that luck believers and disbelievers alike make a subconscious semantic differentiation between luck conceived as a deterministic external phenomenon affecting future events, and luck as a descriptive metaphor for how fortunately past events and current circumstances are believed to have turned out for them personally. Like Maltby et al. (), Thompson and Prendergast () find belief in being personally lucky is discrete from and uncorrelated with belief in luck as a deterministic phenomenon. Maltby et al. () find belief in being personally lucky correlates negatively with discomfort-anxiety and with awfulizing, but positively with hope, self-acceptance, positive relations, environmental mastery, and other personality traits associated with positive affect. Similar positive associations between belief in being personally lucky and favorable affective outcomes are reported by Day and Maltby (), André (), and Jiang et al. (). Further mirroring some of Maltby et al.’s () findings, Thompson and Prendergast’s () efforts to establish the nomological validity of the Belief in Personal Luckiness construct find it correlates positively with some affect-related measures, and they speculate it might perhaps constitute a facet of overall well-being. Hence:
  • H2. Belief in Personal Luckiness will be positively associated with happiness.

Discussion


Luck Beliefs and Happiness

Our finding that Belief in Luck is broadly negatively associated with happiness is consonant with Maltby et al.’s () suggestion that Belief in Luck is perhaps a maladaptive trait. Consequently, any notion of happy-go-lucky individuals cheerfully trusting to luck would seem to be inaccurate, at least if those individuals believe in luck as a non-random, deterministic and external phenomenon. Indeed, insofar as such individuals may irrationally trust to luck as a deterministic phenomenon, they would seem to do so unhappily not happily.
However, our finding that Belief in Personal Luckiness is positively associated with happiness tends to suggest the happy may indeed go lucky, in the sense that happiness and believing oneself to be lucky are associated. Indeed, the relatively large size of the associations we find here suggests that Belief in Personal Luckiness might in fact be a facet of an overall happiness construct. A possible implication of this is that Belief in Personal Luckiness’ association with any particular happiness measure could, perhaps, be fully accounted for by controlling for other happiness measures. To investigate this possibility, we separately regressed each of the four measures of happiness on Belief in Personal Luckiness while simultaneously controlling for the three remaining happiness measures in each respective case, to see if Belief in Personal Luckiness maintained a significant beta. Doing so, we found Belief in Personal Luckiness is not associated with either Positive or Negative Affect. However, Belief in Personal Luckiness is still significantly associated with Happiness (β = .09, p < .01; ΔR2 = .05, p < .01) and Optimism (β = .09, p < .01; ΔR2 = .06, p < .01). This would seem to support, partly at least, the view that Belief in Personal Luckiness may represent either a facet of happiness or a discrete personality trait positively associated with happiness.

Luck Beliefs, Five-Factor Model and Happiness

Neither Belief in Luck nor Belief in Personal Luckiness appear from our findings to be mediators of the association between the five-factor model of personality and happiness.
Indeed, our analyses, in part, suggest the contrary: that Neuroticism fully mediates Belief in Luck’s association with happiness. This does not imply that Belief in Luck necessarily ‘causes’ Neuroticism, but it is reasonable to speculate that the underlying irrationality and the lack of both agency and self-determination that would seem to underpin Belief in Luck also, to some extent, underpin or are facets of Neuroticism. This would be consonant with previous research demonstrating significant relationships between Neuroticism and locus of control (Judge et al.; Morelli et al.), self-determination (Elliot and Sheldon; Elliot et al.), and irrational beliefs (Davies; Sava).
We do not find evidence for any component of the five-factor personality model mediating Belief in Personal Luckiness’ association with happiness, nor do we find evidence of any pronounced confounding effects between Belief in Personal Luckiness and the five-factor model and their respective associations with happiness. Hence, considering Belief in Personal Luckiness to be a trait discrete from fundamental personality models would on the basis of our findings not seem unreasonable. Nor would it seem unreasonable to suggest that Belief in Personal Luckiness might potentially be either a facet of happiness or a personality trait discrete from but associated with not just the five-factor model but also happiness.
Our conclusions here certainly seem to apply with greatest saliency to the most direct measure of trait happiness we used, Lyubomirsky and Lepper’s () Subjective Happiness Scale, and to a lesser extent to Optimism, a measure closely allied with happiness (Brebner et al.; Chaplin et al.; Furnham and Cheng; Salary and Shaieri). However, while the pattern of relationships is broadly similar for both Positive Affect and Negative Affect, the effect sizes are smaller and either weaker in significance or non-significant. This would suggest that, while both Positive Affect and Negative Affect are often used as proxies for happiness, they might perhaps best be regarded as constructs related to, rather than synonymous with, happiness.

Limitations and Further Research

While our research sheds new empirical light on the relationships between luck beliefs, happiness, and the five-factor personality model, a number of limitations need to be kept in mind. As with any findings based on cross-sectional data, interpreting our findings in terms of directions of causality would be imprudent and, of course, constrained by the assumption of our research that happiness, luck beliefs, and the five-factor model are all personality traits rather than individual difference states. Personality traits may, of course, be associated in systematic patterns, but the very notion of traits being essentially innate and non-manipulable, unlike individual difference states, intrinsically excludes the possibility that one might be ‘caused’ by another. To take the five-factor model as an example, its five personality traits have a well-established systematic pattern of associations, but it would be implausible to suggest any of the five in any mechanistic sense causes another: they exist together discretely, with none generally argued to be a facet, sub-component, or effect of another. This said, an area for further research might be to examine the effects of trait luck beliefs on state affect that varies temporally and is manipulable, and hence susceptible to theorization and testing using either longitudinal or experimental data.
A further limitation to our study relates to necessary caution in generalizing its findings in view of the deliberately homogeneous population we used. Further research to replicate our findings amongst populations heterogeneous in terms of nationality, occupation, and socio-economic status would be useful, as it has been shown across multiple domains that psychological characteristics and their relationships may vary accordingly (Becker et al.; Boyce and Wood; John and Thomsen; Rawwas; Thompson and Phua, 2005; Winkelmann and Winkelmann). Furthermore, although each of the happiness and luck measures we employ has been individually validated across internationally diverse samples including Hong Kong Chinese, underlying conceptions of both are known to exhibit nuanced cultural differences (Lu and Gilmour; Lu and Shih; Raphals; Sommer), which conceivably could modify measured associations between them.
We also note that our study, in common with most research, is constrained by the limited selection of measures with which we operationalized our investigation. We selected just four measures commonly used in studies of trait happiness, but several others exist, although some, like the Satisfaction with Life Scale (Diener et al.), can arguably be regarded as assessing state rather than trait happiness. We also selected a five-factor model measure that, while not as potentially prone to poor measurement validity as extremely short measures, is sufficiently brief as to exclude examination of possible relationships of each of the big-five elements on a sub-component basis. Certainly, given our findings in relation to Neuroticism, further research using multi-component measures of this dimension of the five-factor model might prove illuminating.
In addition, research examining possible mediation and moderation effects of cognate psychology constructs such as, for example, locus of control (Pannells and Claxton; Verme), illusion of control (Larson; Erez et al.), and gratitude (Sun and Kong; Toussaint and Friedman) might help further the understanding of relationships between luck beliefs, happiness, and the five-factor model.

Predicting the replicability of social science lab experiments

Altmejd A, Dreber A, Forsell E, Huber J, Imai T, Johannesson M, et al. (2019) Predicting the replicability of social science lab experiments. PLoS ONE 14(12): e0225826. Dec 5 2019. https://doi.org/10.1371/journal.pone.0225826

Abstract: We measure how accurately replication of experimental results can be predicted by black-box statistical models. With data from four large-scale replication projects in experimental psychology and economics, and techniques from machine learning, we train predictive models and study which variables drive predictable replication. The models predict binary replication with a cross-validated accuracy rate of 70% (AUC of 0.77) and estimate relative effect sizes with a Spearman ρ of 0.38. The accuracy level is similar to market-aggregated beliefs of peer scientists [1, 2]. The predictive power is validated in a pre-registered out-of-sample test of the outcome of [3], where 71% (AUC of 0.73) of replications are predicted correctly and effect size correlations amount to ρ = 0.25. Basic features such as the sample and effect sizes in original papers, and whether reported effects are single-variable main effects or two-variable interactions, are predictive of successful replication. The models presented in this paper are simple tools to produce cheap, prognostic replicability metrics. These models could be useful in institutionalizing the process of evaluation of new findings and guiding resources to those direct replications that are likely to be most informative.

1 Introduction

Replication lies at the heart of the process by which science accumulates knowledge. The ability of other scientists to replicate an experiment or analysis demonstrates robustness, guards against false positives, puts an appropriate burden on scientists to make replication easy for others to do, and can expose the various “researcher degrees of freedom” like p-hacking or forking [4–20].
The most basic type of replication is “direct” replication, which strives to reproduce the creation or analysis of data using methods as close to those used in the original science as possible [21].
Direct replication is difficult and sometimes thankless. It requires the original scientists to be crystal clear about details of their scientific protocol, often demanding extra effort years later. Conducting a replication of other scientists’ work takes time and money, and often has less professional reward than original discovery.
Because direct replication requires scarce scientific resources, it is useful to have methods to evaluate which original findings are likely to replicate robustly or not. Moreover, implicit subjective judgments about replicability are made during many types of science evaluations. Replicability beliefs can be influential when giving advice to granting agencies and foundations on what research deserves funding, when reviewing articles which have been submitted to peer-reviewed journals, during hiring and promotion of colleagues, and in a wide range of informal “post-publication review” processes, whether at large international conferences or small kaffeeklatches.
The process of examining and possibly replicating research is long and complicated. For example, the publication of [22] resulted in a series of replications and subsequent replies [23–26]. The original findings were scrutinized in a thorough and long process that yielded a better understanding of the results and their limitations. Many more published findings would benefit from such examination. The community is in dire need of tools that can make this work more efficient. Statcheck [27] is one such framework that can automatically identify statistical errors in finished papers. In the same vein, we present here a new tool to automatically evaluate the replicability of laboratory experiments in the social sciences.
There are many potential ways to assess whether results will replicate. We propose a simple, black-box, statistical approach, which is deliberately automated in order to require little subjective peer judgment and to minimize costs. This approach leverages the hard work of several recent multi-investigator teams who performed direct replications of experiments in psychology and economics [27–29]. Based on these actual replications, we fit statistical models to predict replication and analyze which objective features of studies are associated with replicability.
We have 131 direct replications in our dataset. Each can be judged categorically by whether it replicated or not, by a pre-announced binary statistical criterion. The degree of replication can also be judged on a continuous numerical scale, by the size of the effect estimated in the replication compared to the size of the effect in the original study. As the binary criterion, we count replications with significant (p ≤ 0.05) effects in the same direction as the original study as successful. For the continuous measure, we study the ratio of effect sizes, standardized to correlation coefficients. Our method uses machine learning to predict outcomes and identify the characteristics of study-replication pairs that can best explain the observed replication results [30–33].
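The two outcome measures just described are simple to state in code. The following sketch is ours, not the authors'; the function names and example numbers are invented for illustration.

```python
def replicated(p_replication: float, r_original: float, r_replication: float) -> bool:
    """Binary criterion: a significant (p <= 0.05) effect in the
    same direction as the original study counts as a success."""
    same_direction = (r_original > 0) == (r_replication > 0)
    return same_direction and p_replication <= 0.05

def relative_effect_size(r_original: float, r_replication: float) -> float:
    """Continuous measure: ratio of effect sizes, standardized
    to correlation coefficients."""
    return r_replication / r_original

# Example: original r = 0.40; replication finds r = 0.20 with p = 0.03.
print(replicated(0.03, 0.40, 0.20))          # -> True
print(relative_effect_size(0.40, 0.20))      # -> 0.5
```

Note that under these definitions a study can "replicate" in the binary sense while showing a substantially smaller effect, which is why the paper reports both measures.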
We divide the objective features of the original experiment into two classes. The first contains the statistical design properties and outcomes: among these features we have sample size, the effect size and p-value originally measured, and whether a finding is an effect of one variable or an interaction between multiple variables. The second class is the descriptive aspects of the original study which go beyond statistics: these features include how often a published paper has been cited and the number and past success of authors, but also how subjects were compensated. Furthermore, since our model is designed to predict the outcome of specific replication attempts we also include similar properties about the replication that were known beforehand. We also include variables that characterize the difference between the original and replication experiments—such as whether they were conducted in the same country or used the same pool of subjects. See S1 Table for a complete list of variables, and S2 Table for summary statistics.
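A minimal sketch of the modeling setup described above, using scikit-learn. This is not the authors' pipeline: the data here are synthetic, the four features merely echo examples named in the text (sample size, p-value, effect size, main effect vs. interaction), and the paper evaluates far richer feature sets and models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 131 study-replication pairs.
rng = np.random.default_rng(1)
n = 131
X = np.column_stack([
    rng.integers(20, 400, n),    # original sample size
    rng.uniform(0.0, 0.05, n),   # original p-value
    rng.uniform(0.05, 0.6, n),   # original effect size (as r)
    rng.integers(0, 2, n),       # 1 = interaction effect, 0 = main effect
])
y = rng.integers(0, 2, n)        # 1 = replicated, 0 = did not replicate

# Cross-validated accuracy, analogous to the 70% figure in the abstract.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean())
```

With real labels in place of the random `y`, the same cross-validation loop yields the kind of out-of-sample accuracy and AUC figures the paper reports.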
The statistical and descriptive features are objective. In addition, for a sample of 55 of the study-replication pairs we also have measures of subjective beliefs of peer scientists about how likely a replication attempt was to result in a categorical Yes/No replication, on a 0-100% scale, based on survey responses and prediction market prices [1, 2]. Market participants in these studies predicted replication with an accuracy of 65.5% (assuming that market prices reflect replication probabilities [34] and using a decision threshold of 0.5).

Our proposed model should be seen as a proof-of-concept. It is fitted on an arguably too small data set with an indiscriminately selected feature set. Still, its performance is on par with the predictions of professionals, hinting at a promising future for the use of statistical tools in the evaluation of replicability.

Thursday, December 5, 2019

Rise & fall of the Chinese state: instead of the standard explanations (economic development and war), the author argues the civil service examination transformed the Chinese elite from an encompassing interest group into a narrow interest group


China's State Development in Comparative Historical Perspective. Y. Wang, Nov 2019. APSA-CP Newsletter, Vol. XXIX, Issue 2, Fall 2019. https://scholar.harvard.edu/files/yuhuawang/files/wang_2019_apsa-cp-newsletter.pdf

Full text, notes, charts, references, at the link above. Excerpts below:

The collapse of the Chinese state in the early twentieth century was surprising. China was a pioneer in state administration: it established one of the world’s most centralized bureaucracies in 221 BCE, two hundred years before the Roman Empire.1 In the seventh century, it produced a quarter of the world’s GDP (Maddison 2007, 381) and became the first country to use a civil service examination to recruit bureaucrats. Max Weber described the Chinese examination in great detail (Weber 1951 [1915], 115), which became an essential part of his definition of a modern bureaucracy – the “Weberian” bureaucracy (Weber 1946 [1918], 241; Evans and Rauch 1999, 751).

At that time, Western Europe was experiencing large-scale dislocation, crisis, and a real break in continuity. The Roman Empire had fallen, and the Carolingian Empire had yet to form. Commerce virtually disappeared, and the ruling dynasties could barely maintain a salaried administration (Barraclough 1976, 10). In the medieval period, elites in Europe obtained their status primarily by inheriting feudal titles, and meritocratic recruitment did not emerge until the nineteenth century. Why, then, did China suffer a dramatic reversal of fortune, given its early bureaucratic development?

Here I document, and then explain, the rise and fall of the Chinese state. I show that two standard explanations for state development – economic development and war – both fall short. I offer my own explanation, which focuses on how the civil service examination transformed the Chinese elite from an encompassing interest group to a narrow interest group. This elite transformation accounts for both the initial rise and the ultimate decline and fall of China’s state capacity.

I use a historical perspective that allows me to uncover continuities and changes that I would not have observed in a short time frame. States, like most institutions, require time to develop. The Chinese state, for example, took centuries to rise and centuries to fall. Studying a short period will risk missing the forest for the trees. As Daniel Ziblatt argues, temporal distance – moving out from single events and placing them within a longer time frame – can uncover previously undetectable patterns (Ziblatt 2017, 3).

The Chinese case is worth studying on its own merits. [...]



The Rise and Fall of the Chinese State

Figure 1 shows China’s fiscal development from 0 AD to 1900. The upper panel presents the evolution of major fiscal policies. I code each policy according to whether historians consider it to be state strengthening (+1), neutral (0), or state weakening (-1).2 The lower panel presents per capita taxation, based on estimates from archival materials.3 Both graphs demonstrate that China’s fiscal capacity peaked in the eleventh century, started to decline afterwards (with transitory increases), and diminished toward the end of the period.
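The coding scheme behind the upper panel can be illustrated with a toy example (ours, with an invented policy sequence): each policy is scored +1, 0, or -1, and a running sum gives a simple index of the direction of fiscal development over time.

```python
import numpy as np

# Hypothetical sequence of coded fiscal policies:
# +1 = state strengthening, 0 = neutral, -1 = state weakening.
policies = [+1, +1, 0, +1, -1, -1, 0, -1]

# Cumulative index tracing the evolution of fiscal capacity.
index = np.cumsum(policies)
print(index)  # -> [1 2 2 3 2 1 1 0]
```

A rise followed by decline back to the starting level, as in this toy sequence, mirrors the peak-then-fall pattern the figure shows for China.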

The comparison with Europe is striking. At its peak, China’s fiscal capacity – proxied by revenue as a fraction of GDP in 1086 – was more than ten times that of England (Stasavage Forthcoming). But by the start of the nineteenth century, England taxed 15–20 percent of its GDP, while China taxed only 1 percent (Guo 2019).4

Another striking comparison is ruler survival. Figure 2, below, presents the duration and probability of deposition for Chinese, European, and Islamic rulers.5 Despite declining state capacity, Chinese rulers enjoyed longer tenures, on a par with European rulers. [...].


Standard Explanations for State Development

Given its early development of statehood, how should we explain the rise and fall of the Chinese state? According to the literature, state institutions tend to evolve in response to either a growing economy or the need to mobilize for war. However, I explain in this section why these standard answers do not fully explain the Chinese case.

- Economic development
Modernization theory predicts that as a country’s economy develops, society will put more demands on the state. State institutions will then evolve in response to these societal demands to provide public goods and services, which requires fiscal extraction and modern public finance.

Yet the historical evidence suggests that China’s economic (under)development was a consequence of state (under)development, rather than the other way around. Scholars of the California School argue that China led the world economically, as well as in science and technology, until about 1500. Before the Renaissance, Europe was far behind, and it did not catch up to and surpass China until about 1800 (Pomeranz 2000; Wong 1997). Thus, China’s economic decline appears to have occurred after its state decline, which is consistent with the new institutional economics notion that the state needs to provide security and protect property rights in order to promote long-term economic development (North 1981; Acemoglu and Robinson 2012).

- War
External war and internal conflict can both “make” the state. To prepare for external war, which became more expensive in the medieval era, European kings had to extract resources from society, establish a centralized bureaucracy to manage state finances, and bring local armed groups under the control of a national army (Tilly 1975). Internal conflict may also promote state development. Mass demands for radical redistribution can induce elites to set aside their narrow interests and form a collective “protection pact”: a broad-based elite coalition that supports greater state strength to safeguard against popular revolt (Slater 2010, 5–7).

But China fought more wars than Europe: while there were more than 850 major recorded land conflicts in Europe between the years 1000 and 1799, China experienced 1,470 land-based conflicts during this period (Dincecco and Wang 2018, 343).

In addition, if external or internal war explains state development, we should see state strengthening around or after conflicts. Figure 3 presents the number of external war battles (upper panel) and mass rebellion battles (lower panel) in China from 0 AD to 1900.6

The timing of external wars challenges Charles Tilly’s argument that such conflicts force the state to tax its citizens, establish a bureaucracy, and create a national army. [...].


Elite Transformation and State Development

The turning point in China’s rise and fall was in the eleventh century. [...]

Charles Tilly might be wondering: Facing severe external threats, why do elites oppose state building? Traditional, structural factors cannot explain individual-level differences among elites. I offer a new framework.

- To buy, or to make: that is the question
My framework starts with the presumption that elites need protection. Such protection involves a bundle of services, including defense against external and internal violence, insurance against weather shocks, justice in dispute resolution, and social policies that protect people from risks.

Elites can obtain protection in two ways. They can “buy” public protection from the state by paying taxes. They can also “make” private protection by relying on private order institutions, such as kinship groups.8 Public protection exhibits economies of scale and scope, so the marginal cost of protecting an additional unit is small. If elites need to protect a large area, it is cheaper to “buy” public protection. Private protection has a unit cost, and each unit pays the same price for its own protection because of the rival and excludable nature of private protection. For example, if protecting one unit (e.g., 100 square kilometers) requires one garrison with one unit of labor and capital, then the cost of protecting two units will double to two units of labor and capital (constant returns to scale). If elites only need to protect a relatively small area, then private protection is more efficient, because the marginal costs of funding a private army to protect a small area are relatively low compared to the taxes paid to support a national army. “Making” their own protection also gives elites some autonomy from the state.
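The buy-versus-make logic above reduces to a comparison of two cost functions, which can be made concrete with a toy model (our illustration; all parameter values are invented, not taken from the article).

```python
def private_cost(area_units: int, unit_cost: float = 1.0) -> float:
    # Constant returns to scale: each unit of territory needs
    # its own garrison, so cost grows linearly with area.
    return area_units * unit_cost

def public_cost(area_units: int, fixed_cost: float = 5.0,
                marginal_cost: float = 0.2) -> float:
    # Economies of scale and scope: a large fixed cost (taxes that
    # support a national army) but a small marginal cost per unit.
    return fixed_cost + area_units * marginal_cost

# An elite with a geographically concentrated network (2 units)
# finds private protection cheaper...
print(private_cost(2), public_cost(2))    # -> 2.0 5.4
# ...while an elite with a dispersed network (20 units) is better
# off "buying" public protection from the state.
print(private_cost(20), public_cost(20))  # -> 20.0 9.0
```

The crossover point between the two cost curves is exactly what separates the narrow interest group (small networks, opposing state strengthening) from the encompassing interest group (dispersed networks, supporting it).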

This simple logic suggests that elites’ level of support for state building depends on the geographic span of their social networks. If they must protect a geographically dispersed network, it is more efficient to support state-strengthening policies. These elites have an encompassing interest (Olson 1982, 48). If they need to protect a geographically concentrated network, it is more efficient to rely on private protection and oppose state strengthening. These elites have a narrow interest (Olson 1982, 48).

- From encompassing interest to narrow interest
Applying the framework to the Chinese case, we can now understand why the state started to decline in the eleventh century.

A hereditary aristocracy ruled China during the medieval period from the seventh to the ninth centuries. The aristocracy consisted of a group of large clans whose genealogies were included in the official clan list approved by the imperial state. The emperors recruited bureaucrats almost exclusively from this list, and men from these clans could inherit their fathers’ positions. Although these clans were located across the country, their core male members formed a national elite coalition by intermarrying their children. During the Tang Dynasty (618–904 AD), this national elite was based in the capital cities and became a self-perpetuating institution (Tackett 2014, 25).

Thus, before the 11th century, a network of national elites ruled China. Since their kinship networks were spread out across the country, they were motivated to build a strong central state so they could protect their kin. These elites constituted an encompassing interest group.

The Huang Chao Rebellion (874–884 AD) captured the capitals and killed most members of the aristocracy (Tackett 2014, 187–234). Local elite gentry families, which traditionally held many lower bureaucratic offices, filled the power vacuum left by the demise of the aristocracy.

After the aristocracy was decimated, the Song emperors introduced the civil service examination as an alternate way to identify bureaucratic talent. During this time, members of the local gentry had to recommend prospective candidates to the local magistrate before they were even eligible to sit the initial exam (Hartwell 1982, 419). The expanded civil service examination system therefore reinforced the gentry’s strategy to contract marriage alliances with wealthy local neighbors, exchanging prestige and political opportunity for economic advantage. The civil service examination then brought many locally embedded elites into the central government. These elites became “local advocates” who, in order to influence the government’s actions, intervened directly and openly with central officials as a native, with a native’s interest in (and knowledge of) local affairs (Hymes 1986, 127–128).

Locally embedded elites who served in the central government no longer supported a strong central state. They were better off protecting their kin using private organizations. They started to form kinship organizations, uniting their kin members around common ancestors and compiling genealogy books to manage kin membership (Faure 2007, 68). They intervened in national affairs to benefit their hometowns (Beattie 1979, 72). Their relatives became local strongmen who organized defense, repaired dikes, and funded schools (Zheng 2008, 183–194). In the late imperial period, these elites became a narrow interest group.

As the elites’ social networks became localized, they also fragmented; they found it difficult to organize cross-regionally. A fragmented elite contributed to a despotic monarchy because it was easier for the ruler to divide and conquer. Historians have noted the shift to imperial despotism during the Song era, as the emperor’s position vis-à-vis his chief advisors was strengthened (Hartwell 1982, 404–405). The trend deepened further when, in the late fourteenth century, the founding emperor of the Ming Dynasty abolished the entire upper echelon of his central government and concentrated power securely in his own hands (Hucker 1998, 74–75). This explains the increasing security of Chinese rulers.

The despotic monarchy and the narrow interest elite became a self-enforcing equilibrium: the rulers were secure, while the elite used the state to protect their local interests and enjoyed their autonomy. Yet this arrangement led to the gradual decline of the Chinese state.

Lessons for Today

China’s historical experience suggests two important lessons for understanding contemporary China and the developing world more generally. First, it helps us understand how the Chinese Communist Party built a modern state. The key to the party’s success in the mid-twentieth century was that it eliminated or neutralized local elites through a social revolution. The party achieved this mainly through land reforms in which local landed elites were deprived of their land, and sometimes their lives. Meanwhile, a prolonged and hard-fought revolution helped forge a close-knit network of party elites from all over the country. This national team conquered the country and imposed on it a centralized elite structure.

Second, many developing nations face the same challenge in state building that China faced historically: traditional authorities and powerful local families subvert state power. Many of the policy interventions carried out by the international community, such as the World Bank and the International Monetary Fund, focus on strengthening the bureaucracy. But as the Chinese experience demonstrates, state weakness is a social problem that cannot be resolved with a bureaucratic solution. When Chinese emperors began using a civil service examination to recruit bureaucrats, the Chinese elites became more fragmented and opposed to state building. This experience shows that building a strong state requires social changes, which are generally missing from today’s international programs.

Human sensorimotor system rapidly localizes touch on a hand-held tool; somatosensory cortex efficiently extracts touch location from the tool’s vibrations, recruiting & repurposing neural processes that map touch on the body

Somatosensory Cortex Efficiently Processes Touch Located Beyond the Body. Luke E. Miller et al. Current Biology, December 5 2019. https://doi.org/10.1016/j.cub.2019.10.043

Highlights
•    Human sensorimotor system rapidly localizes touch on a hand-held tool
•    Brain responses in a deafferented patient suggest vibrations encode touch location
•    Somatosensory cortex efficiently extracts touch location from the tool’s vibrations
•    Somatosensory cortex reuses neural processes devoted to mapping touch on the body

Summary: The extent to which a tool is an extension of its user is a question that has fascinated writers and philosophers for centuries [1]. Despite two decades of research [2, 3, 4, 5, 6, 7], it remains unknown how this could be instantiated at the neural level. To address this question, the present study combined behavior, electrophysiology, and neuronal modeling to characterize how the human brain could treat a tool like an extended sensory “organ.” As with the body, participants localize touches on a hand-held tool with near-perfect accuracy [7]. This behavior rests on the ability of the somatosensory system to rapidly and efficiently use the tool as a tactile extension of the body. Using electroencephalography (EEG), we found that where a hand-held tool was touched was immediately coded in the neural dynamics of primary somatosensory and posterior parietal cortices of healthy participants. We found similar neural responses in a proprioceptively deafferented patient with spared touch perception, suggesting that location information is extracted from the rod’s vibrational patterns. Simulations of mechanoreceptor responses [8] suggested that the speed at which these patterns are processed is highly efficient. A second EEG experiment showed that touches on the tool and arm surfaces were localized by similar stages of cortical processing. Multivariate decoding algorithms and cortical source reconstruction provided further evidence that early limb-based processes were repurposed to map touch on a tool. We propose that an elementary strategy the human brain uses to sense with tools is to recruit primary somatosensory dynamics otherwise devoted to the body.

Data for the EEG experiments and skin-neuron modeling: https://osf.io/c4qmr

Results and Discussion

In somatosensory perception, there is evidence in many species that intermediaries are treated like non-neural sensory extensions of the body [9]. For example, some spiders actively use their web as an extended sensory “organ” to locate prey [10]. Analogously, humans can use tools to sense the properties of objects from a distance [11, 12, 13], such as when a blind person uses a cane to probe the surrounding terrain. This sensorimotor ability is so advanced that humans can almost perfectly localize touch on the surface of a tool [7], suggesting a strong parallel with tactile localization on the body. Characterizing the neural dynamics of tool-extended touch localization provides us with a compelling opportunity to investigate the boundaries of somatosensory processing and hence the question of sensory embodiment: to what extent does the human brain treat a tool like an extended sensory organ?

From a theoretical perspective [7], the sensory embodiment of a tool predicts that—as is the case with biological sense organs—the cerebral cortex (1) rapidly extracts location-based information from changes in a tool’s sensory-relevant mechanical state (e.g., vibrations) and (2) makes this information available to the somatosensory system in an efficient manner. The peripheral code for touch location is likely different for skin (e.g., a place code) and tools (e.g., a temporal code) [7]. Transforming a temporal to a spatial code—a necessary step for tool-extended localization—is a non-trivial task for the brain. We predict that, to do so efficiently, (3) the brain repurposes low-level processing stages dedicated to localizing touch on the body to localize touch on a tool. Direct evidence for sensory embodiment requires understanding how the structural dynamics of tools couple to the neural dynamics of the cortex. Such evidence can be obtained using neuroimaging methods with high temporal resolution. To this end, we combined electroencephalography (EEG) and computational modeling to test the aforementioned predictions.
The Cerebral Cortex Rapidly Processes Where a Tool Was Touched

In an initial experiment (n = 16), participants localized touches applied on the surface of a 1-m wooden rod (Figure 1A) while we recorded their cortical dynamics using EEG. We designed a delayed match-to-sample task that forced participants to compare the location of two touches (delivered via solenoids; Figure S1A) separated in time (Figure 1B). If the two touches were felt to be in different locations, participants made no overt response. If they were felt to be in the same location, participants used a pedal with their ipsilateral left foot to report whether the touches were close or far from their hand. Participants had never used the rod before the experiment and received no performance feedback. As a result, participants had to rely on pre-existing internal models of tool dynamics [14]. Regardless, accuracy was near ceiling for all participants (mean: 96.4% ± 0.71%; range: 89.7%–99.5%), consistent with our prior finding [7].

When a stimulus feature is repeatedly presented to a sensory system, the responses of neural populations representing that feature are suppressed [15]. Effects of repetition are a well-accepted method for timestamping when specific features in a complex input are extracted [16]. Repetition paradigms have previously been used to characterize how sensory signals are mapped by sensorimotor cortices [17, 18, 19, 20]. Our experimental paradigm allowed us to leverage these repetition suppression effects to characterize when the brain extracted where a rod had been touched. Specifically, the amplitude of evoked brain responses reflecting the processing of impact location should be reduced if the rod is hit at the same location twice in a row rather than at two distinct locations (Figure 1C).

We first characterized the cortical dynamics of tool-extended touch localization. Touching the surface of the rod led to widespread evoked responses over contralateral regions (Figures S1 and S3), starting ∼24 ms after contact (Figure S1B); this time course is consistent with the known conduction delays between upper limb nerves and primary somatosensory cortex [21]. A nonparametric cluster-based permutation analysis identified significant location-based repetition suppression in a cluster of sensorimotor electrodes between 48 and 108 ms after contact (p = 0.003; Figures 1D–1I, S1C, S1D, and S3; Table S1). This cluster spanned two well-characterized processing stages previously identified for touch on the body: (1) recurrent sensory processing within primary somatosensory (SI) and motor (MI) cortices between 40 and 60 ms after stimulation [22], which has been implicated in spatial processing [23, 24], and (2) feedforward and feedback processing between SI, MI, and posterior parietal regions between 60 and 100 ms after stimulation [25, 26], proposed to contribute to transforming a sensory map into a higher-level spatial representation [18, 27]. This suppression was too quick to reflect signals related to motor preparation/inhibition, which generally occur ∼140 ms after touch [28].
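The nonparametric cluster-based permutation analysis mentioned above can be sketched in miniature. The following is an illustration of the general logic (in the style of Maris & Oostenveld) for a single channel over time, not the authors’ actual pipeline; the participant count, time window, t-threshold, and permutation count are invented for the example.

```python
import numpy as np
from scipy import stats

def cluster_permutation_test(cond_a, cond_b, t_thresh=2.0, n_perm=1000, seed=0):
    """Paired cluster-based permutation test over time.

    cond_a, cond_b: (n_participants, n_times) evoked amplitudes for, e.g.,
    repeated-location vs. different-location trials (illustrative shapes).
    Returns the observed clusters and a corrected p value for each.
    """
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b

    def cluster_stats(d):
        # Pointwise one-sample t-test on the condition difference
        t, _ = stats.ttest_1samp(d, 0.0, axis=0)
        above = np.abs(t) > t_thresh
        clusters, masses = [], []
        start = None
        # Group contiguous supra-threshold time points into clusters
        for i, flag in enumerate(np.append(above, False)):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                clusters.append((start, i))
                masses.append(np.abs(t[start:i]).sum())
                start = None
        return clusters, masses

    clusters, masses = cluster_stats(diff)
    # Null distribution: randomly sign-flip each participant's difference
    null_max = np.empty(n_perm)
    for p in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.shape[0])[:, None]
        _, perm_masses = cluster_stats(diff * signs)
        null_max[p] = max(perm_masses, default=0.0)
    p_vals = [(null_max >= m).mean() for m in masses]
    return clusters, p_vals

# Toy demo: a simulated suppression effect between samples 48 and 108
rng = np.random.default_rng(1)
a = rng.normal(0, 1, (16, 120))   # 16 participants x 120 time samples
b = rng.normal(0, 1, (16, 120))
a[:, 48:108] += 1.0
print(cluster_permutation_test(a, b, n_perm=500))
```

Comparing each observed cluster’s summed t-mass against the maximum cluster mass obtained under random sign flips is what controls the family-wise error across time points, which is why a single cluster-level p value (here, the paper’s p = 0.003) can be reported for an extended time window.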
Location-based Repetition Suppression Is Driven by Vibratory Signals

We previously suggested that, during tool-extended sensing, where a rod is touched is encoded pre-neuronally by patterns of vibration (i.e., vibratory motifs; Figures S1E and S1F). When transiently contacting an object, specific resonant modes (100–1,000 Hz for long wooden rods) are selectively excited, giving rise to vibratory motifs that unequivocally encode touch location [7]. These rapid oscillations are superimposed onto a slowly evolving rigid motion that places a load on the participant’s fingers and wrist. Given that the somatosensory system is sensitive to both slow-varying loads (via proprioception) and rapid vibrations imposed to the hand (via touch), these two signals are difficult to disentangle experimentally.

To adjudicate between the contribution of each aspect of the mechanical signal, we repeated experiment 1 with a deafferented participant (DC) who lost proprioception in her right upper limb (33% accuracy in clinical testing) following the resection of a tumor near the right medulla oblongata [29]. Importantly, light touch was largely spared in her right limb (100% accuracy). DC completed the EEG experiment while holding a rod in her deafferented hand and intact left hand (separate blocks). Her behavioral performance was good for both the intact (72%) and deafferented (77%) limbs. Crucially, her neural dynamics exhibited the observed repetition suppression for both limbs, with a magnitude comparable to that of the healthy participants (Figures 1F, 1I, and S2). Though not excluding possible contributions from slow-varying rigid motion (when available), this result strongly suggests that the observed suppression was largely driven by information encoded by vibrations.
Processing of Vibratory Motifs Is Temporally Efficient

We used a biologically plausible skin-neuron model [8] to quantify how efficiently the brain extracts touch location on a tool. According to principles of efficient coding, sensory cortices attempt to rapidly and sparsely represent the spatiotemporal statistics of the natural environment with minimal information loss [30]. DC’s results suggest that the brain uses vibratory motifs to extract contact location on a rod. It has been hypothesized that the spiking patterns of Pacinian afferents encode object-to-object contact during tool use [31], a claim for which we previously found model-based evidence [7]. This temporal code must be decoded in somatosensory processing regions, perhaps as early as the cuneate nucleus [32].

We derived an estimate of “maximal efficiency” by quantifying the time course of location encoding in a simulated population of Pacinian afferents in the hand (Figures 2A and 2B). Support-vector machine (SVM) classification revealed a temporal code that was unambiguous about contact location within 20 ms (Figure 2C). This code was efficient, corresponding to 4.6 ± 1.7 spikes per afferent. Taking into account the known conduction delays between first-order afferents and SI [21], this finding—along with our prior study [7]—suggests that location encoding within 35–40 ms would reflect an efficient representational scheme. The early suppression observed in experiment 1 (Figures 1D–1I) is consistent with this estimate. This suggests that somatosensory cortex views these temporal spike patterns as meaningful tactile features, allowing humans to efficiently use a rod as an extended sensor.
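As a rough illustration of this kind of temporal decoding, a linear classifier can recover “touch location” from binned afferent spike counts. This is a toy stand-in for the authors’ skin-neuron simulation and SVM analysis: the sinusoidal firing-rate templates, afferent count, bin size, and location count below are all invented for the sketch.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for a simulated Pacinian-afferent population:
# each touch location excites the population with a distinct temporal
# spiking profile, binned here into 1-ms spike counts over ~20 ms.
rng = np.random.default_rng(0)
n_locations, n_trials, n_afferents, n_bins = 8, 40, 20, 20

def simulate_trial(loc):
    # Location-specific firing-rate template plus Poisson spiking noise
    phase = 2 * np.pi * loc / n_locations
    t = np.linspace(0, 1, n_bins)
    rates = 2.0 + 1.5 * np.sin(2 * np.pi * 3 * t + phase)  # spikes/bin
    return rng.poisson(np.tile(rates, (n_afferents, 1))).ravel()

X = np.array([simulate_trial(loc) for loc in range(n_locations)
              for _ in range(n_trials)])
y = np.repeat(np.arange(n_locations), n_trials)

clf = SVC(kernel="linear")
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/n_locations:.2f})")
```

The point of the sketch is only that location-specific temporal spike patterns form a linearly separable code; in the actual study, classification of the simulated afferent population was unambiguous about contact location within the first ~20 ms of spiking, using only a handful of spikes per afferent.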

Confirming the earliest work and contradicting the most recent, we find that same-sex couples are more likely to break up than different-sex couples; the gap in stability is larger for couples with children


Stability Rates of Same-Sex Couples: With and Without Children. Doug Allen & Joseph Price. Marriage & Family Review, Volume 56, 2020 - Issue 1. https://doi.org/10.1080/01494929.2019.1630048

Abstract: In contrast to earlier studies, several recent ones have claimed that stability rates among same-sex couples are similar to those of different-sex couples. This article reexamines these latest accounts and provides new evidence regarding stability rates using three large, nationally representative datasets from the United States and Canada. Confirming the earliest work, we find that same-sex couples are more likely to break up than different-sex couples. We find that the gap in stability is larger for couples with children, the very group for which concerns about stability are the most important.

Keywords: Children, divorce, same-sex relationships, stability of relationships

Conclusion
We use three large, independent datasets to examine the relationship stability of same-sex couples, taking special note of couples with children. Although there may be some concern regarding the use of independent data, the fact that we find consistent results, while holding constant the same variables in three different legal environments and with data sets constructed in different ways, offers rather compelling evidence of robustness.

Dissolution rates of both same- and different-sex couples vary significantly across panels, in part because of how dissolution and same-sex status were measured in each panel, and because of the lengths of time and the periods the data cover. However, the patterns in the results are strikingly similar. We find not only that same-sex couples are significantly more likely to dissolve their relationship than comparable different-sex couples, but also that this effect is larger for same-sex couples with children. This difference is statistically significant in almost all cases and suggests that parental instability is an important factor through which parents’ sexual orientation influences children’s outcomes. This channel may be the driving force behind recent findings of poorer child outcomes in same-sex families.

We do not want to overstate our results. We must be aware that same-sex couples make up a small fraction of couples in both Canada and the United States, and that of those couples, only a small fraction have children present in the household. This is evident in our analysis, where the fraction of same-sex couples in each dataset is very small. Furthermore, the definitions of same-sex couples vary across datasets, individuals may have changing incentives to self-report over time, and formally recognized same-sex unions are relatively new. This suggests that the reported number of same-sex unions is fluid and will continue to fluctuate. These considerations are also relevant, perhaps more so, for studies using small, biased samples.

However, in the case of relationship stability, the evidence seems quite consistent: same-sex unions appear less stable. This result was found in the first study of Andersson et al., but contrasts with some of the other studies that followed. Our findings therefore (like those of Wiik et al., 2012) reaffirm Andersson et al., and expand the literature by studying the difference between couples without children and couples with children.

The relatively stable rates of heterosexuality, bisexuality, and homosexuality observed across nations for both women and men suggest that non-social factors may underlie much of the variation in human sexual orientation

Prevalence of Sexual Orientation Across 28 Nations and Its Association with Gender Equality, Economic Development, and Individualism. Qazi Rahman, Yin Xu, Richard A. Lippa, Paul L. Vasey. Archives of Sexual Behavior, December 3 2019. https://link.springer.com/article/10.1007/s10508-019-01590-0

Abstract: The prevalence of women’s and men’s heterosexuality, bisexuality, and homosexuality was assessed in 28 nations using data from 191,088 participants from a 2005 BBC Internet survey. Sexual orientation was measured in terms of both self-reported sexual identity and self-reported degree of same-sex attraction. Multilevel modeling analyses revealed that nations’ degrees of gender equality, economic development, and individualism were not significantly associated with men’s or women’s sexual orientation rates across nations. These models controlled for individual-level covariates including age and education level, and nation-level covariates including religion and national sex ratios. Robustness checks included inspecting the confidence intervals for meaningful associations, and further analyses using complete cases and summary scores of the national indices. These analyses produced the same non-significant results. The relatively stable rates of heterosexuality, bisexuality, and homosexuality observed across nations for both women and men suggest that non-social factors may underlie much of the variation in human sexual orientation. These results do not support frequently offered hypotheses that sexual orientation differences are related to gendered social norms across societies.

Keywords: Sexual orientation Homosexuality Culture Gender roles Gender equality Social construction


Discussion

The central question addressed by the current research was: Are national factors such as gender equality, economic development, and individualism-collectivism related to the national prevalence of various sexual orientations across 28 nations? Our analyses also tested the frequently offered hypothesis that sexual orientation rates may be associated with gender norms and social roles (Bearman & Bruckner, 2002; Greenberg, 1988; Terry, 1999). The use of a large international dataset allowed us to test whether countries that differed in gender egalitarianism and in the rigidity of gender roles (as indexed by national indicators of gender equality and gender empowerment) also differed in the prevalence of various sexual orientations. We found no compelling evidence that this was the case. While the present results were not significant, they demonstrate that several theoretically important predictor variables (national levels of gender equality, economic development, and individualism) were not much associated with important outcome variables (sexual identity and same-sex attractions) in a very large sample with sufficient statistical power. The non-significant results were also inconsistent with the notion that women’s sexual identities and same-sex and other-sex attractions are more strongly linked to cultural and social factors than men’s are (Bailey et al., 2016; Baumeister, 2000). Furthermore, there was no evidence that national indices were more strongly related to identity-based than to attraction-based measures of sexual orientation. Finally, the pattern of associations did not seem to result from prevalence rates being more variable, in general, for women than for men across nations. Indeed, when assessed in terms of sexual identity, prevalence rates for male homosexual identity were more variable than those for lesbian identity.
Some factors that may be related to the prevalence of men’s sexual orientation were not assessed in the current study. One candidate supported by previous research is participants’ average number of older brothers in a given national sample (and the correlated factor of the average size of participants’ family of rearing in that sample). Many studies have shown that the more older biological brothers a man has, the more likely he is to be gay (Blanchard, 2018). This “fraternal birth order effect” is thought to result from biological processes: each additional male fetus carried by a woman increases the likelihood of maternal immunological reactions against male factors in fetal tissue, and these immunological reactions then influence the development of subsequent male fetuses (Bogaert et al., 2018). A prediction that follows from the fraternal birth order effect is that nations with larger mean family sizes at the time of participants’ births should, on average, have higher rates of male but not female homosexuality among adult probands (Bogaert, 2004). Although not tested in the current study, this hypothesis suggests the possibility that biological as well as social factors could be associated with the prevalence of heterosexuality, bisexuality, and homosexuality across nations, and, furthermore, that associations with biological and social factors may sometimes differ for men and women.
The current study had several limitations. One pertains to the sexual identity categories used. In some cultures, one’s degree of sexual attraction to men and women is simply not a basis upon which individuals construct identities. Our use of an English-language survey may also have obscured cultural variations in the construal of same-sex and other-sex attractions. While other cultures may sometimes use sexual identity terms comparable to those employed in Western countries, such terms may have different meanings across cultures, as for example when a man identifies as “straight” but nonetheless engages in sexual activity with same-sex partners (e.g., Petterson et al., 2016). In some cultures (e.g., those with “third gender” categories), sexual orientation might be seen as a basis for identity, but at the same time some or all of the Western terms commonly used to denote sexual orientation may not be employed (e.g., Asthana & Oostvogels, 2001; Petterson et al., 2016). Similar issues can even characterize some subcultures within Western nations, in which asking members whether they are “heterosexual,” “homosexual,” or “bisexual” is discouraged (e.g., Denizet-Lewis, 2010). In the context of the current study, it is worth noting that all participants did, in fact, identify themselves using one of the provided sexual identity terms, and thus they seemed willing to use the categories of “heterosexual,” “bisexual,” and “homosexual” as a basis for self-classification.
A second limitation is that the national samples in the BBC survey were not random or representative; each national subsample is therefore not necessarily representative of national patterns overall. Because participants in all countries came from a sample of BBC consumers, there may be cross-national homogeneity built into the sampling frame. As noted earlier, participants tended to be young, affluent, and educated (as well as able to understand the English language). Compared to other cross-cultural studies based on college student samples, however, the BBC dataset included non-college populations who came from various locations within each country and who varied in age and other demographic characteristics.
One obvious direction for future research is to replicate the current findings with data from representative samples of men and women from diverse nations. Many of the nations in the current study were European, with a number of notable exceptions (e.g., India, Japan, Malaysia, Philippines, Singapore, Turkey). The unequal sample sizes across nations (some nations contributed more participants than others) are unlikely to bias the estimation of the parameters of interest: one of the advantages of multilevel models is their tolerance of unequal samples and other unbalanced data structures. Simulation studies suggest that group-level sample size is somewhat more important than total sample size, and that large individual-level sample sizes can compensate for small numbers of groups (for review, see Maas & Hox, 2005). Naturally, any estimates of grand means (e.g., across all nations) will be weighted more heavily toward countries with larger sample sizes, which is why researchers should use multilevel models when nesting is inherent in the study design.
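The point about unbalanced group sizes can be illustrated with a hypothetical random-intercept model. The `statsmodels` specification, variable names, nation sizes, and effect sizes below are illustrative assumptions, not the authors’ analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
# Deliberately unbalanced nation-level sample sizes, as in the BBC data
sizes = [2000, 500, 150, 80, 60, 40] * 3            # 18 hypothetical "nations"
frames = []
for nation, n in enumerate(sizes):
    u = rng.normal(0, 0.5)                          # nation-level random intercept
    x = rng.normal(0, 1, n)                         # person-level covariate
    y = 1.0 + 0.0 * x + u + rng.normal(0, 1, n)     # true fixed slope = 0
    frames.append(pd.DataFrame({"nation": nation, "x": x, "y": y}))
df = pd.concat(frames, ignore_index=True)

# Random-intercept model: y ~ x with a per-nation intercept
model = smf.mixedlm("y ~ x", df, groups=df["nation"]).fit()
print(model.params["x"])  # fixed-effect slope, pooled across unequal groups
```

Each nation contributes its own intercept, so a 2,000-person subsample and a 40-person subsample can sit in the same model, and the fixed-effect estimate for the covariate is pooled across nations with appropriate weighting rather than being dominated by any single large country.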
It is also important to note that the concept of national culture (insofar as that is captured by UN indices) has been questioned by scholars in personality and social psychology. While the concept of national cultures is disputed, other research suggests there may be between-nation differences in average personality traits and that some of these may correlate with sociopolitical structures (e.g., having democratic institutions; Barceló, 2017; Schmitt, Allik, McCrae, & Benet-Martínez, 2007). In this context, Hofstede’s measures of individualism and collectivism have also been criticized. As cultures (especially those in closer geographic proximity) tend to become more similar, perhaps due to economic factors such as globalization, it is possible that consistency in psychological traits across cultures may also be driven by globalized sexual norms. While the analysis presented here accounts for the statistical dependencies introduced by these issues, the findings are specific to the BBC sample examined. Social attitudes toward sexual orientation may also have changed since the BBC survey was taken. Thus, further tests of these questions will be needed in other, more representative and recent cross-cultural datasets.
The use of multilevel models allowed us to use nation-level data to draw inferences at the individual level; in other words, it allowed us to test the potential influence of national gender equality on individuals’ sexual identity and desire. However, the relationship between variables could theoretically differ at other levels of analysis. For example, societal or structural-level gender egalitarianism could influence intermediate proximate mechanisms, such as parental gender socialization or the internalization of gender stereotypes (or other gender norms), which then influence the development of sexual orientation differences. Yet the effects of factors such as parental socialization on sexual orientation appear to be weak based on the existing research evidence (Bailey et al., 2016). Furthermore, many country-level variables may be clustered in world regions (e.g., Europe, North America). While multilevel models can accommodate such effects (e.g., by simply adding another data level in a hierarchical model), it is unlikely that levels of gender egalitarianism differ sufficiently between countries within a world region (e.g., between all European countries) for us to detect such associations with sufficient statistical power.
Finally, it is worth noting that although the national samples of men and women in the BBC survey were not representative of their larger national populations in some ways, the male and female samples were nonetheless well matched on demographic factors such as age and education level. Thus, the apparent absence of sex differences in the current study (e.g., there appeared to be no difference between men and women in the relation between sociocultural factors and sexual orientation) was observed despite the fact that the male and female samples were matched on key factors.
In conclusion, our analyses did not yield a significant association between national indicators of gender equality, economic development, and individualism-collectivism and either identity-based or desire-based measures of sexual orientation for men and women across 28 countries. This provides new evidence that questions the power of factors such as gendered norms, gender roles, and gender socialization to account for variations in the prevalence of sexual orientations across nations. Future empirical studies are needed to better test the extent to which national gender norms and economic factors are related to variations in the expression of sexual orientation across nations.

New Frontiers in Irritability Research


New Frontiers in Irritability Research—From Cradle to Grave and Bench to Bedside. Neir Eshel, Ellen Leibenluft. JAMA Psychiatry, December 4, 2019. https://doi.org/10.1001/jamapsychiatry.2019.3686

We all know what it’s like to be irritable. Our partners walk on eggshells around us. The slightest trigger sets us off. If there’s a punching bag nearby, it had better watch out. Irritability, defined as a low threshold for experiencing frustration or anger, is common. In the right context, irritability can be adaptive, motivating us to overcome barriers or dominate our environment. When prolonged or disproportionate, however, irritability can be counterproductive, causing us to waste our energy on maladaptive behavior.

In recent years, there has been an increase in research on irritability in childhood, with an emerging literature on its neurobiology, genetics, and epidemiology.1 There is even a new diagnosis focused on this symptom, disruptive mood dysregulation disorder (DMDD). However, there is a dearth of irritability research in adults. This is regrettable, because irritability is an important clinical symptom in multiple mental illnesses throughout the life span. From depression to posttraumatic stress disorder, dementia to premenstrual dysphoric disorder, traumatic brain injury to borderline personality disorder, irritability is associated with extensive burdens on individuals, their families, and the general public.

In this Viewpoint we suggest that studying the brain basis for irritability across development and disorder could have substantial clinical benefits. Furthermore, we propose that irritability, like addiction or anxiety, is an evolutionarily conserved focus ready for translational neuroscience.

Diagnosis and Treatment Across the Life Span

Despite its clinical toll, there are few evidence-based treatments for irritability. The only US Food and Drug Administration–approved medications for irritability are risperidone and aripiprazole, which are approved only in the context of autism and are associated with adverse effects that limit their utility. Stimulants, serotonin reuptake inhibitors, and variants of cognitive behavioral therapy and parent management training show promise for different populations, but overall there is a shortage of options, leading many health care professionals to try off-label drug cocktails with unclear efficacy. This situation results in part from our primitive understanding of the phenomenology and brain mechanisms of irritability throughout the life span.

An emerging body of work focuses on measuring irritability in children and adolescents, determining comorbid disorders, and tracking related functional impairment.1 Multiple studies, for example, report that chronically irritable youth are at elevated risk for suicidality, depression, and anxiety in adulthood.2,3 But what are the clinical characteristics and longitudinal course of irritability in adults? Irritability diminishes from toddlerhood through school age, but does it continue to decrease monotonically with age into adulthood? What about the end of life? Irritability and aggression are common in patients with neurodegenerative disorders, but are these symptoms similar to those in a child with DMDD? There has been limited systematic study of irritability in adulthood, and studies that mention irritability in adulthood operationalize the construct in different ways. One study counted 21 definitions and 11 measures of irritability in the psychiatric literature, all of which overlapped with anger and aggression.4 This lack of clarity diminishes our ability to identify biomarkers or track treatment success. Even studies that use childhood irritability to predict adult impairment do not typically measure irritability in adults, thereby obscuring the natural history of irritability as a symptom.5 For the field to progress, it will be crucial to establish standard definitions and measurements spanning childhood through adulthood.

Beyond phenomenology, we need to identify brain signatures associated with the emergence, recurrence, and remission of irritability across the life span and during treatment. Irritability is a prototypical transdiagnostic symptom, but it remains unclear to what extent its brain mechanisms overlap across disorders. For example, in children, data suggest that the brain mechanisms mediating irritability in DMDD, anxiety disorders, and attention-deficit/hyperactivity disorder are similar but differ from those mediating irritability in childhood bipolar disorder.1,6 The frequency of irritable outbursts appears to diminish in step with the maturity of prefrontal regions during childhood.1 Could degeneration in the same structures predict reemergence of irritable outbursts in patients with dementia? Could developmental differences in these regions increase the likelihood of irritability when individuals are sleep deprived or intoxicated later in adolescence or adulthood? Only through fine-grained neuroscientific studies can we disentangle what is unique to the symptom (ie, irritability) and to the disorder (eg, bipolar disorder vs DMDD vs dementia), and develop treatments tailored to an individual’s brain pathology.


Translational Neuroscience and Irritability

In addition to their clinical relevance, neuroscientific studies of irritability can address fundamental questions about brain dysfunction and recovery. Over the past 2 decades, studies have revealed the circuits underlying reward processing, and in particular prediction error, the mismatch between expected and actual reward.7 The neuroscience of aggression has also advanced through the discovery of cells in the amygdala and hypothalamus that form a final common pathway for aggressive behavior.8 Irritability and the concept of frustrative nonreward can tie these 2 fields together.

Frustrative nonreward is the behavioral and emotional state that occurs in response to a negative prediction error, ie, the failure to receive an expected reward. In the classic study by Azrin et al,9 pigeons were trained to peck a key for food reward. After pigeons learned the task, the experimenters removed the reward; then when the pigeons pecked, nothing happened. For the next several minutes, there were 2 changes in the pigeons’ behavior. First, they pecked the key at a higher rate. Second, they became unusually aggressive, damaging the cage and attacking another pigeon nearby. In other words, a negative prediction error led to a state of frustration, which then induced increased motor activity and aggression. Such responses to frustration have been replicated in many species, including chimpanzees, cockerels, salmon, and human children and adults.10 Frustrative nonreward therefore provides an evolutionarily conserved behavioral association between prediction error and aggression. Apart from studies in children,1,6 however, little has been done to probe the neural circuits of frustrative nonreward or of irritability, which can be defined as a low threshold for experiencing frustrative nonreward.

We know, for example, that negative prediction errors cause phasic decreases in dopamine neuron firing, which help mediate learning by reducing the valuation of a stimulus. Does this dip in dopamine level also increase the likelihood of aggression and, if so, how? The same optogenetic techniques that have demonstrated a causal role for dopamine prediction errors in reward learning could be used to test their role in aggressive behavior. Likewise, multiple nodes in the reward circuit encode the value of environmental stimuli. Could these values modulate the propensity for aggression? Environments of plenty, for instance, may protect against aggressive outbursts, because if there is always more reward available, the missing-out factor may not be salient. Conversely, scarcity could make individuals more likely to be aggressive, because if there are few rewards to be had, achieving dominance may be necessary for survival.
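The prediction-error logic underlying frustrative nonreward can be made concrete with a toy Rescorla-Wagner simulation. This is only an illustrative sketch, not a model from the Viewpoint: the function name, learning rate, and trial counts are assumptions. A cue is rewarded until the agent's learned value approaches 1; when the reward is then withheld (as in the Azrin pigeon experiment), the prediction error flips sharply negative, mirroring the phasic dopamine dip described above.

```python
# Toy Rescorla-Wagner sketch of extinction (illustrative assumptions only):
# delta = r - V is the prediction error; withholding an expected reward
# produces a large negative delta, the signal tied to frustrative nonreward.

def run_extinction(trials_rewarded=50, trials_extinction=10, alpha=0.2):
    V = 0.0          # learned value of the cue, starts naive
    deltas = []
    for t in range(trials_rewarded + trials_extinction):
        r = 1.0 if t < trials_rewarded else 0.0  # reward withheld at extinction
        delta = r - V        # prediction error for this trial
        V += alpha * delta   # value update toward the observed outcome
        deltas.append(delta)
    return deltas

deltas = run_extinction()
print(round(deltas[0], 3))   # first rewarded trial: large positive error (1.0)
print(round(deltas[50], 3))  # first extinction trial: large negative error (-1.0)
```

The negative spike at trial 50 is the formal counterpart of the pigeons' frustration: the same quantity that drives learning (reducing V) is hypothesized here to raise the propensity for aggression.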

Exploring the bidirectional associations between the reward processing and aggression circuits would help us understand state changes in the brain and how environmental context determines our behavior. At the same time, understanding these circuits will lay the groundwork for mechanism-based treatments for irritability.

Conclusions
The neuroscience of irritability is in its infancy and research has focused almost exclusively on children. We now have an opportunity to expand this field to adults, across disorders, and to animal models for more precise mechanistic studies. Through better measurement, careful experimental design, input from theorists and computational psychiatrists, and coordinated efforts across experts in multiple disorders, we can guide the field to maturity.

Benoît Cœuré: No euro area country features in the top 10 of the World Bank’s ease of doing business index; many are not even in the top 30.

The single currency: an unfinished agenda. Speech by Benoît Cœuré, ECB, at the ECB Representative Office in Brussels, Dec 3 2019. https://www.bis.org/review/r191204a.pdf

Excerpts:

Some wounds have still not healed, however. As unsettling as it may sound after so many years of economic hardship, the euro area architecture is still not crisis-proof.

Growth remains cyclically too weak to fully restore fiscal space in countries where public debt is unacceptably high. The profitability of banks remains low and, in many cases, below the cost of equity, reflecting risks to business model sustainability.[3]

And productivity growth, the main component that underpins our living standards and social safety nets, remains low in many Member States. As a consequence, unemployment in some countries, in particular among young people, remains unacceptably high, despite the progress made at the euro area level on average.

True, many other advanced economies are facing similar challenges. But the combination of weak potential growth and high debt is toxic in a monetary union with decentralised fiscal policy and insufficiently integrated financial markets.

It implies that country-specific shocks remain a potential source of instability for the euro area as a whole.

It weakens political support for further integration. And it means that the single monetary policy has to shoulder the burden of macroeconomic stabilisation in the face of adverse shocks.

The arrival of the new European Parliament and Commission provides an important opportunity to address more decisively the remaining vulnerabilities, refocus priorities and sequence actions accordingly. And it presents us with a time frame for achieving these goals.

In my remarks this evening, I will argue that we need to both strengthen the institutional framework to make our currency union more resilient and implement the right policies to boost the growth potential of our economies.

I will argue that flexible and dynamic markets are the first line of defence in the euro area.[4]

They are the key to unlocking sustained productivity growth, and thereby allowing faster normalisation of monetary policy. They also reduce the need for macroeconomic stabilisation and they curb contentious debates about crisis management.

The second line of defence relates to sustainable and growth-enhancing fiscal policies. Countries that have fiscal space should use it to foster investment. Countries where debt is high should calibrate their policies so as to regain fiscal space in the future, limiting the risk they pose to their neighbours. And all countries can improve the quality of their spending.

The third line of defence relates to strengthening our common toolkit – to new policy instruments that are needed to protect the stability of the currency union if shocks are too large to be absorbed by markets or national fiscal policies, and that provide a safety net against poverty and social exclusion.

The first line of defence: integrated and flexible markets

No euro area country features in the top 10 of the World Bank’s ease of doing business index. Many are not even in the top 30.

A consequence of a less business-friendly environment is that business dynamism in Europe is weak.

Compared with the United States, European countries have, on average, larger shares of “static” firms and smaller shares of both growing and shrinking firms.[5]

Low business dynamism feeds and reinforces the misallocation of resources across firms in the euro area.[6]

Empirical evidence shows that an increasing proportion of capital is concentrated in firms that are less productive. In Italy and Spain misallocation is currently higher than at any point in time before the crisis.[7]

The absence of a Schumpeterian process of creative destruction weighs on innovation and growth.

...

There is overwhelming evidence that new firms are more likely to adopt new technologies.

There is a significant link between business entry rates, technology creation and diffusion, and productivity growth.[8]

New and young firms also contribute disproportionately to job creation relative to their share in employment.[9]

...

Several euro area countries lack an effective framework for early private debt restructuring. In Portugal, Greece and Slovakia, for example, it takes more than three years to resolve insolvency. It takes less than one year in Japan, Norway and Canada.[10]

...

But Member States don’t walk the talk. The macroeconomic imbalance procedure has always lacked teeth, and none of the 2018 recommendations for euro area countries have been fully implemented.