Evidence supporting nubility and reproductive value as the key to human female physical attractiveness. William D. Lassek, Steven J. C. Gaulin. Evolution and Human Behavior, May 8 2019. https://doi.org/10.1016/j.evolhumbehav.2019.05.001
Abstract: Selection should favor mating preferences that increase the chooser's reproductive success. Many previous studies have shown that the women men find most attractive in well-nourished populations have low body mass indices (BMIs) and small waist sizes combined with relatively large hips, resulting in low waist-hip ratios (WHRs). A frequently proposed explanation for these preferences is that such women may have enhanced health and fertility; but extensive evidence contradicts this health-and-fertility explanation. An alternative view is that men are attracted to signs of nubility and high reproductive value, i.e., to indicators of physical and sexual maturity in young women who have not been pregnant. Here we provide evidence in support of the view that a small waist size together with a low WHR and BMI is a strong and reliable sign of nubility. Using U.S. data from large national health surveys, we show that WHR, waist/thigh, waist/stature, and BMI are all lower in the age group (15-19) in which women reach physical and sexual maturity, after which all of these anthropometric measures increase. We also show that a smaller waist, in conjunction with relatively larger hips or thighs, is strongly associated with nulligravidity and with higher blood levels of docosahexaenoic acid (DHA), a fatty acid that is probably limiting for infant brain development. Thus, a woman with the small waist and relatively large hips that men find attractive is very likely to be nubile and nulliparous, with maximal bodily stores of key reproductive resources.
---
Introduction
Because differential reproductive success drives adaptive evolution, selection should favor males and females whose mating preferences maximize their numbers of reproductively successful offspring. Thus both should be attracted to anthropometric traits reliably correlated with the ability of the opposite sex to contribute to this goal. In a landmark book, Symons (1979) argued that the female attributes most likely to enhance male reproductive success are indicators of nubility and its associated high reproductive value (see also Andrews et al., 2017; Fessler et al., 2005; Marlowe, 1998; Sugiyama, 2005; Symons, 1995), and the purpose of this paper is to test whether the available evidence is consistent with Symons’ view. Such a test is needed because, subsequent to Symons’ formulation, a competing hypothesis proposed that men are primarily attuned to indicators of current health and fertility and that these are the female attributes indicated by the low WHRs and BMIs linked with high attractiveness (Singh, 1993a; 1993b; Tovée et al., 1998). The existence of male preferences for low WHRs and BMIs has been supported by many other studies in industrialized populations, where women are generally well-nourished. But any explanation of them must address why preferred values are much lower than mean or modal values in typical young women. In a recent study (Lassek & Gaulin, 2016), the mean WHR of Playboy Playmates (0.68) was 2 standard deviations below the mean for typical college undergraduates (0.74), and the mean WHR (0.55) of imaginary females chosen for maximal attractiveness from comics, cartoons, animated films, graphic novels, or video games was 5 standard deviations below the undergraduate mean. Jessica Rabbit, the most popular imaginary female, has a WHR of 0.42. Preferred values of BMI are also in the negative tail of the actual female distribution: the mean BMI of Playmates (18.5) was 2 SD lower than the mean for college undergraduates (22.2).
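The standard-deviation comparisons above are simple z-score arithmetic. A back-of-envelope sketch (only the means are from the text; the undergraduate SDs are derived from the stated comparisons, not taken from the original study):

```python
# Back-of-envelope check of the z-score arithmetic behind the "2 SD" and
# "5 SD" statements above. Only the means are from the text; the SDs are
# implied by those comparisons, not reported values.
undergrad_mean_whr = 0.74
playmate_mean_whr = 0.68   # described as ~2 SD below the undergraduate mean
imaginary_mean_whr = 0.55  # described as ~5 SD below the undergraduate mean

# SD implied by each comparison: sd = (mean - value) / z
sd_from_playmates = (undergrad_mean_whr - playmate_mean_whr) / 2   # ≈ 0.03
sd_from_imaginary = (undergrad_mean_whr - imaginary_mean_whr) / 5  # ≈ 0.038
print(round(sd_from_playmates, 3), round(sd_from_imaginary, 3))
```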
A recent study of 323 female comic book characters from the Marvel universe found that the mean WHR was 0.60±0.07 and the modal BMI was 17; WHR was two SD lower in 34 characters (0.61) than in the actresses portraying them in films (0.72) (Burch & Johnsen, 2019).
1.1 Health and fertility as the basis for female attractiveness?
Singh (1993a, 1993b) suggested that men are attracted to low WHRs and BMIs because they are signs of enhanced female health and fertility, and this idea has been widely accepted (e.g., Grammer et al., 2003; Marlowe et al., 2005; Pawlowski & Dunbar, 2005; Singh & Singh, 2011; Sugiyama, 2005; Tovée et al., 1998; Weeden & Sabini, 2005). But this argument seems inconsistent with the extremity of the preferred values (above). As a result of stabilizing selection, phenotypes associated with optimal health and fertility should, and do, lie at the center—not the extreme—of the female distribution. Given such stabilizing selection on females, male preferences for traits associated with health and fertility should then target modal female values. Based on a review of a large number of relevant studies and on new analyses, it has been recently shown that WHRs and BMIs in the negative tails of their distributions—the values rated most attractive in well-nourished populations—usually indicate poorer rather than enhanced health (Lassek & Gaulin, 2018a) and lower fertility (Lassek & Gaulin, 2018b) (although lower BMIs in younger primigravidas may reduce risks of obstructed labor and hypertension). Given that the predictions of the health-and-fertility hypothesis are not well supported, the main goal of this article is to evaluate the prior hypothesis that may better explain why males in well-nourished populations prefer female phenotypes at the negative extreme of their distributions: an evolved preference for nubility (Symons 1979, 1995) and its demographic correlate, maximal reproductive value (Andrews et al., 2017; Fessler et al., 2005).
1.2 Nubility as the basis for female attractiveness?
Despite a lack of empirical support, the health-and-fertility hypothesis has largely eclipsed Donald Symons’s earlier proposal that men are attracted to nubility—to indicators of recent physical and sexual maturity in young nulligravidas (never-pregnant women) (Fessler et al., 2005; Symons, 1979; 1995; Sugiyama, 2005). Symons defined the nubile phase as 3-5 years after menarche, when young women are “just beginning ovulatory menstrual cycles” but have not been pregnant (Symons, 1995, p. 88). This corresponds to ages 15-19 in well-nourished populations, but sexual maturity in some subsistence groups may be delayed (Ellis, 2004; Eveleth & Tanner, 1990; Symons, 1979). Symons suggested that the female characteristics men find attractive—such as a low WHR—are indicators of nubility. And he stressed that any preference for nubility inevitably contrasts with a preference for current fertility, because the teen years of peak nubility are a well-documented period of low fertility due to a decreased frequency of ovulation, with 40-90% of cycles anovulatory, while maximum fertility is not reached until the mid to late 20s (Apter, 1980; Ashley-Montague, 1939; Ellison et al., 1987; Larsen & Yan, 2000; Loucks, 2006; Metcalf & Mackenzie, 1980; Talwar, 1965; Weinstein et al., 1990; Wesselink et al., 2017). Thus, if the nubility hypothesis is correct, the fertility hypothesis must be incorrect. Nubility is closely linked to a woman’s maximum reproductive value (RV), her age-specific expectation of future offspring, given the underlying fertility and mortality curves of her population (Fisher, 1930). The peak of RV is defined by survival to sexual maturity with all reproductive resources intact. The age of peak RV depends in part on the average ages of menarche and marriage, but typically ranges from 14 to 18 in human populations (Fisher, 1930; Keyfitz & Caswell, 2005; Keyfitz & Flieger, 1971). This corresponds to Symons’ age of nubility.
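Fisher's reproductive value admits a simple discrete sketch: RV at age x is the expected future offspring given survival to x, i.e., the sum over later ages y of (l_y / l_x) · m_y, where l is survivorship and m is age-specific fertility (ignoring population growth). The schedules below are hypothetical, chosen only to illustrate why RV typically peaks around sexual maturity:

```python
# Back-of-envelope sketch of Fisher's reproductive value (RV), ignoring
# population growth: RV(x) = sum over ages y >= x of (l_y / l_x) * m_y.
# All survivorship (l) and fertility (m) values below are illustrative
# assumptions, not data from any population cited in the paper.

ages = [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
l = [1.00, 0.75, 0.70, 0.67, 0.64, 0.60, 0.56, 0.52, 0.48, 0.44, 0.40]  # survivorship to age
m = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 0.8, 0.5, 0.2, 0.0, 0.0]             # fertility per period

def reproductive_value(i):
    """Expected future offspring for a woman who has survived to age class i."""
    return sum(l[j] * m[j] for j in range(i, len(ages))) / l[i]

rv = {age: round(reproductive_value(i), 2) for i, age in enumerate(ages)}
peak_age = max(rv, key=rv.get)
print(rv, "peak at age", peak_age)
```

RV rises through childhood (as pre-reproductive mortality risk is passed) and falls once reproduction begins; with these assumed schedules it peaks at 15, matching the range the paper cites.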
Calculations of reproductive value in the !Kung (Daly & Wilson, 1988) and in South Africa (Bowles & Wilson, 2005) both found the peak age to be 15. Symons argued that the attractiveness of the nubile age group is supported by the finding that this is the age group when marriage and first pregnancies typically occur in subsistence cultures, despite reduced fecundability. For example, in the Yanomamo (polygynous hunter-horticulturalists of southern Venezuela), menarche is typically at age 12, marriage at 14, and the first birth at 16 (Symons, 1995). A similar relationship between nubility and first reproduction characterizes other subsistence populations (Table 1), where the mean age of first birth is typically under age 20 and averages 3.9±1.1 years after menarche. In populations with access to effective contraception, the onset of sexual activity may be a better indicator of nubility than the age of marriage or first birth, although multiple factors may influence these ages. In a 2005 survey of women in 46 countries, the average age of first intercourse ranged from 15 to 19 with a mean of 17.2 (Durex, 2009) and was 17.1 in a recent sample of American women (Finer, 2007).
Prior studies suggest that attractive females exhibit the phenotypic correlates of nubility. In the Dobe !Kung, four photographed young women considered “beautiful girls” by the !Kung were all age 17 (Howell, 2000, p. 38). In samples from more developed countries, youthful bodies are also considered attractive. In a study of males viewing nude photos of female infants, children (mean age 7.7), pubescent females (mean 13.5), and young adult females (mean 23.1), both viewing duration and ratings of physical attractiveness were highest for pubescent females (Quinsey et al., 1996). Marlowe (1998) has suggested that an evolved attraction to nubility explains men’s preference for relatively large, firm, non-sagging female breasts, and this view is supported by a study in the Hewa (Coe & Steadman, 1995). Of particular relevance are two studies that directly explore the relationship of attractiveness to age. A recent study using body scans with raters from 10 countries found that BMI was inversely related to both rated attractiveness and estimated age (Wang et al., 2015). In another recent study, age estimated from neck-down photographs of females in bathing suits had a strong negative relationship with attractiveness and a strong positive relationship with WHR, BMI, and especially waist/stature ratio (Andrews et al., 2017). Symons (1995) suggests several adaptive reasons why selection might favor men preferring nubile females over older females who have higher current fertility: 1) A male who pairs with a nubile female is likely to have the maximum opportunity to sire her offspring during her subsequent most fecund years. A nubile woman is also 2) likely to have more living relatives to assist her than an older woman, and 3) likely to survive long enough for her children to be independent before her death.
4) A male choosing a nubile female avoids investing in children sired by other men and possible conflict with the mother (his mate) over allocation of her parental effort between his children and the children of her prior mates. By definition, a nubile woman is not investing time and energy in other men’s children because she is nulliparous. Moreover, in a wide array of competitive situations, those who stake an early claim are likely to have an advantage over those who wait until the contested resource is fully available (e.g., lining up the day before concert tickets go on sale; Roth & Xing, 1994). Thus, the men who were most strongly attracted to signs of nubility would minimize their chances of being shut out of reproductive opportunities. This dynamic would generate selection on men to seek commitment of female reproductive potential at younger ages. In such an environment, males without a preference for signs of nubility would be at a disadvantage in mating competition, and those who preferred women at the age of peak fertility (in the mid to late 20s) would likely find few available mates. In subsistence cultures, post-nubile women are very likely to be married and to have children; they are usually either pregnant or nursing and so ovulate infrequently due to ongoing reproductive commitments (Marlowe, 2005; Strassman, 1997; Symons, 1995).
1.3 External signs of nubility
Following Symons (1979; 1995), we consider a woman to be nubile when she has menstrual cycles, has attained maximal skeletal growth, and is sexually mature based on Tanner stages (see below), but has not been pregnant. Maximal skeletal growth and stature are usually attained two to three years after the onset of menstrual periods, the latter typically occurring at ages 12-13 in well-nourished populations (Eveleth & Tanner, 1990; Table 1). In a representative American sample, completed skeletal growth resulting in maximal stature was attained by age 15-16 (Hamill et al., 1973). The two widely accepted indicators of female sexual maturity in postmenarcheal women are the attainment of 1) adult breast size and configuration of the areola and nipple, and 2) an adult pattern of pubic hair (Tanner, 1962; Marshall & Tanner, 1969). In a sample of 192 English female adolescents, the average age for attaining adult (stage 5) pubic hair was 14.4±1.1 and for adult (stage 5) breasts was 15.3±1.7. More recent samples show similar ages for attainment of breast and pubic hair maturity (Beunen et al., 2006). In other studies, puberty was judged complete by age 16-17 in American, Asian, and Swiss samples (Huang et al., 2004; Largo & Prader, 1983; Lee et al., 1980), based on completed skeletal growth and the presence of adult secondary sexual characteristics. The timing of these developmental markers supports Symons’ (1979) suggestion that nubility occurs 3 to 5 years after menarche. We will assess the timing of these developmental indicators in a large U.S. sample.
Little attractiveness research has focused on these features of the developing female phenotype, but Singh (1993b) and Symons (1995) separately suggested that both a low BMI and a low WHR are also indicators of nubility. If so, this developmental pattern would explain the male preference for low BMI and low WHR in populations where both measures increase after the nubile period. Available evidence suggests three ways that low WHRs, BMIs, and small waists may indicate that young women in well-nourished populations are at peak nubility and reproductive value: 1) these measures are lower in the nubile age group (where nubility is defined based on completed stature growth and attainment of Tanner stage 5) than they are in older women, 2) they show that a woman is unlikely to have been pregnant, a requirement for nubility, and 3) they indicate that resources crucial for reproduction are maximal (untapped). Published evidence relevant to these points is reviewed immediately below, and we will offer new evidence that the anthropometric values associated with attractiveness and reproductive resources are most likely to occur in the 15-20 age group.
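For reference, the four anthropometric measures the paper analyzes are simple ratios. A minimal sketch, using hypothetical measurements (none of these numbers come from the surveys the paper uses):

```python
# The anthropometric measures discussed above, computed for hypothetical
# measurements. All input values are illustrative assumptions.

def bmi(weight_kg, height_m):
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def ratio(a_cm, b_cm):
    """Generic circumference/length ratio (e.g., waist/hip)."""
    return a_cm / b_cm

waist, hip, thigh, stature = 66.0, 97.0, 55.0, 165.0  # cm, hypothetical
weight, height = 55.0, 1.65                           # kg and m, hypothetical

whr = ratio(waist, hip)  # waist-hip ratio; ~0.68 here, near the Playmate mean
print(round(whr, 2), round(ratio(waist, thigh), 2),
      round(ratio(waist, stature), 2), round(bmi(weight, height), 1))
```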
1.3.1 Low WHR and BMI as indicatorsof attainment of sexual maturity
WHR may be a particularly good indicator of nubility because evidence suggests that it reaches a minimum during the nubile period. During female puberty, typically occurring between ages 10 and 18, there is a marked increase in the amount of adipose tissue, e.g., from 12-15% to 25-26% of body weight (Boot et al., 1997; Lim et al., 2009; Taylor et al., 2010), a percentage of body fat far greater than that seen in most other mammals, including other primates (Pond, 1998; Pitts & Bullard, 1968). Under the influence of gonadal hormones, most of this added adipose is deposited in the gluteofemoral depot (hips, buttocks, and thighs), a highly derived human trait that may have evolved to store rare fatty acids critical for the development of the large human brain (Lassek & Gaulin, 2006; 2007; 2008). This hormonally driven emphasis on gluteofemoral vs. abdominal fat stores lowers WHR, which decreases during childhood and early adolescence, reaches a minimum at ages 15 to 18, and then tends to increase (Al-Sendi et al., 2003; Bacopoulou et al., 2015; Casey et al., 1994; de Ridder et al., 1990; 1992; Fredriks et al., 2005; Gillum, 1999; Haas et al., 2011; Kelishadi et al., 2007; Martinez et al., 1994; Moreno et al., 2007; Taylor et al., 2010; Westrate et al., 1989). This developmental pattern supports the idea that a low WHR is a relatively conspicuous marker of nubility (in addition to other signs of sexual maturity which may be less readily assessable, such as menstruation, breast and pubic-hair development, and attainment of maximal stature). In well-nourished populations, BMIs are also lower in nubile adolescents than in older women. In a longitudinal study of American women that began in the 1930s, the mean BMI increased from 16.7 kg/m² in early adolescence to 18.9 in late adolescence, 22.1 at age 30, 24.1 at age 40, and 26.1 at age 50 (Casey et al., 1994).
Cross-sectional female samples show parallel age-related weight increases (highly correlated with BMI) (Abraham et al., 1979; Burke et al., 1996; Schutz et al., 2002; Stoudt et al., 1965). Controlling for social class and parity, age was a significant predictor of BMI in a large United Kingdom sample (Gulliford et al., 1992). In a study in which males estimated the age of female figures varying in BMI and WHR (Singh, 1995), they judged the figures with the lowest BMIs (15) to be the youngest, with an estimated age of 17-19. We will explore the relationship of WHR and BMI with age in a large American sample. However, in contrast to women in well-nourished groups, in subsistence populations women’s BMIs may peak at the age of nubility and subsequently decrease with age and parity (see Lassek & Gaulin, 2006). Notably, men in such populations often prefer the higher BMIs which in these cultures indicate nubility (Sherry & Marlowe, 2007; Sugiyama, 2005; Yu & Shepard, 1998). Thus, a consistent preference for the BMIs most strongly associated with nubility could explain an apparent cross-cultural inconsistency in body-shape preferences, which is difficult to explain using the health-and-fertility hypothesis.
1.3.2 Low WHRs and BMIs and smaller waist sizes indicate a lower likelihood of previous pregnancy
An essential part of Symons’ (1979, 1995) definition of nubility (and the high reproductive value it represents) is the lack of any previous pregnancy (i.e., nulligravidity); nubile women have attained physical and sexual maturity without yet expending any reproductive potential. Prior evidence suggests that a low WHR (or small waist size) is also a strong indicator of nulliparity (Bjorkelund et al., 1996; Gunderson et al., 2004; 2008; 2009; Lanska et al., 1985; Lassek & Gaulin, 2006; Lewis et al., 1994; Luoto et al., 2011; Mansour et al., 2009; Rodrigues & Da Costa, 2001; Smith et al., 1994; Tonkelaar et al., 1989; Wells et al., 2010). Similarly, a recent study (Butovskaya et al., 2017) found a strong positive relationship between WHR and parity in seven traditional societies. Like WHR, BMI also increases with parity in well-nourished populations (Abrams et al., 2013; Bastian et al., 2005; Bobrow et al., 2013; Rodrigues & Da Costa, 2001; Koch et al., 2008; Nenko et al., 2009). Some studies have suggested that BMI may be more strongly related to parity than it is to age (Koch et al., 2008; Nenko et al., 2009), although this may be less true in older women (Trikudanathan et al., 2013). We will explore the relationships of WHR and BMI to age and parity in a large American sample. In two studies of men’s perceptions, higher WHRs were judged to strongly increase the likelihood of a previous pregnancy (Andrews et al., 2017; Furnham & Reeves, 2006). Thus, anthropometric data suggest that a low WHR and BMI may indicate nulliparity as well as a young age, and psychological data suggest that men interpret these female features as carrying this information.
1.4 Smaller WHRs and waist sizes indicate greater availability of reproductive resources
Because they have reached sexual maturity but have not yet been pregnant, nubile women should have maximum supplies of reproductive resources that are depleted by pregnancy and nursing, such as the omega-3 fatty acid docosahexaenoic acid (DHA). Many studies have shown that DHA is an important resource supporting neuro-cognitive development in infants and children (Janssen & Kiliaan, 2014; Joffre et al., 2014; Lassek & Gaulin, 2014; 2015), and DHA stored in adipose is depleted by successive pregnancies (Dutta-Roy, 2000; Hanebutt et al., 2008; Hornstra, 2000; Lassek & Gaulin, 2006; 2008; Min et al., 2000). Stores of DHA would likely have been an increasingly important aspect of mate value in the hominin lineage as it experienced dramatic brain expansion. Most of the DHA used for fetal and infant brain development is stored in gluteofemoral fat until it is mobilized from this depot during pregnancy and nursing (Lassek & Gaulin, 2006; Rebuffe-Scrive et al., 1985; 1987). Indeed, a low WHR is associated with higher circulating levels of DHA (Harris et al., 2012; Karlsson et al., 2006; Micallef et al., 2009; Wang et al., 2008), as are a smaller waist size and lower levels of abdominal fat (Alsahari et al., 2017; Bender et al., 2014; Howe et al., 2014; Karlsson et al., 2006; Wagner et al., 2015). Thus, young women with smaller waists and WHRs are likely to have higher levels of DHA in their stored fat and so can provide more DHA to their children during pregnancy and nursing, which may result in enhanced cognitive ability in their offspring. Consistent with the possibility that female body shape reveals stored neuro-cognitive resources, in a large sample of American mothers those with lower WHRs had children who scored higher on cognitive tests (controlling for other relevant factors, including income and education variables) (Lassek & Gaulin, 2008).
Moreover, the children of teenage mothers, at particular risk for cognitive deficits, scored significantly better on cognitive tests when their mothers had lower WHRs. To further examine the reproductive role of the gluteofemoral depot, we will assess the relationship of the waist/thigh ratio to plasma levels of DHA.
Wednesday, May 8, 2019
Around 75% of the minimum wage increase in Hungary was paid by consumers and 25% by firm owners; disemployment effects were greater in industries where passing wage costs on to consumers is more difficult
Who Pays for the Minimum Wage? Péter Harasztosi, Attila Lindner. American Economic Review, forthcoming, https://www.aeaweb.org/articles?id=10.1257/aer.20171445&&from=f
Abstract: This paper provides a comprehensive assessment of the margins along which firms responded to a large and persistent minimum wage increase in Hungary. We show that employment elasticities are negative but small even four years after the reform; that around 75 percent of the minimum wage increase was paid by consumers and 25 percent by firm owners; that firms responded to the minimum wage by substituting labor with capital; and that dis-employment effects were greater in industries where passing the wage costs to consumers is more difficult. We estimate a model with monopolistic competition to explain these findings.
Why "surprising"? --- The Economist: Global meat-eating is on the rise, bringing surprising benefits
All year round, The Economist behaves like the lawmaker who never speaks without detracting from human knowledge. Now, the benefits of eating meat are "surprising":
The way of more flesh
https://www.economist.com/international/2019/05/04/global-meat-eating-is-on-the-rise-bringing-surprising-benefits
Global meat-eating is on the rise, bringing surprising benefits
The Economist, May 2nd 2019| BEIJING, DAKAR AND MUMBAI
As Africans get richer, they will eat more meat and live longer, healthier lives
THINGS WERE different 28 years ago, when Zhou Xueyu and her husband moved from the coastal province of Shandong to Beijing and began selling fresh pork. The Xinfadi agricultural market where they opened their stall was then a small outpost of the capital. Only at the busiest times of year, around holidays, might the couple sell more than 100kg of meat in a day. With China’s economic boom just beginning, pork was still a luxury for most people.
Ms Zhou now sells about two tonnes of meat a day. In between expert whacks of her heavy cleaver, she explains how her business has grown. She used to rely on a few suppliers in nearby provinces. Now the meat travels along China’s excellent motorway network from as far away as Heilongjiang, in the far north-east, and Sichuan, in the south-west. The Xinfadi market has changed, too. It is 100 times larger than when it opened in 1988, and now lies within Beijing, which has sprawled around it.
Between 1961 and 2013 the average Chinese person went from eating 4kg of meat a year to 62kg. Half of the world’s pork is eaten in the country. More liberal agricultural policies have allowed farms to produce more—in 1961 China was suffering under the awful experiment in collectivisation known as the “great leap forward”. But the main reason the Chinese are eating more meat is simply that they are wealthier.
[https://www.economist.com/sites/default/files/imagecache/640-width/images/print-edition/20190504_IRC006.png]
In rich countries people go vegan for January and pour oat milk over their breakfast cereal. In the world as a whole, the trend is the other way. In the decade to 2017 global meat consumption rose by an average of 1.9% a year and fresh dairy consumption by 2.1%—both about twice as fast as population growth. Almost four-fifths of all agricultural land is dedicated to feeding livestock, if you count not just pasture but also cropland used to grow animal feed. Humans have bred so many animals for food that Earth’s mammalian biomass is thought to have quadrupled since the stone age (see chart).
Barring a big leap forward in laboratory-grown meat, this is likely to continue. The Food and Agriculture Organisation (FAO), an agency of the UN, estimates that the global number of ruminant livestock (that is, cattle, buffalo, sheep and goats) will rise from 4.1bn to 5.8bn between 2015 and 2050 under a business-as-usual scenario. The population of chickens is expected to grow even faster. The chicken is already by far the most common bird in the world, with about 23bn alive at the moment compared with 500m house sparrows.
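The growth figures in the two paragraphs above are compound rates. A quick sketch of the arithmetic (the inputs are the article's figures; the derived values are my own back-of-envelope results, not numbers from the article):

```python
# Compound-growth arithmetic behind the figures above.
# 1.9% a year sustained for a decade compounds to roughly a 21% cumulative rise:
cumulative_meat_rise = 1.019 ** 10 - 1  # ≈ 0.207, i.e. about 21%

# The FAO's ruminant projection (4.1bn -> 5.8bn over 2015-2050, 35 years)
# implies an annual growth rate of about 1%:
implied_annual_growth = (5.8 / 4.1) ** (1 / 35) - 1  # ≈ 0.0099

print(round(cumulative_meat_rise, 3), round(implied_annual_growth, 4))
```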
[https://www.economist.com/sites/default/files/imagecache/640-width/20190504_IRC637.png]
Meanwhile the geography of meat-eating is changing. The countries that drove the global rise in the consumption of animal products over the past few decades are not the ones that will do so in future. Tastes in meat are changing, too. In some countries people are moving from pork or mutton to beef, whereas in others beef is giving way to chicken. These shifts from meat to meat and from country to country are just as important as the overall pattern of growth. They are also more cheering. On a planetary scale, the rise of meat- and dairy-eating is a giant environmental problem. Locally, however, it can be a boon.
Over the past few decades no animal has bulked up faster than the Chinese pig. Annual pork production in that country has grown more than 30-fold since the early 1960s, to 55m tonnes. It is mostly to feed the legions of porkers that China imports 100m tonnes of soybeans every year—two-thirds of trade in that commodity. It is largely through eating more pork and dairy that Chinese diets have come to resemble Western ones, rich in protein and fat. And it is mostly because their diets have altered that Chinese people have changed shape. The average 12-year-old urban boy was nine centimetres taller in 2010 than in 1985, the average girl seven centimetres taller. Boys in particular have also grown fatter.
China’s pork suppliers are swelling, too. Three-fifths of pigs already come from farms that produce more than 500 a year, and Wan Hongjian, vice-president of WH Group Ltd, China’s largest pork producer, thinks the proportion will rise. Disease is one reason. African swine fever, a viral disease fatal to pigs though harmless to people, has swept China and has led to the culling of about 1m hogs. The virus is tough, and can be eradicated only if farms maintain excellent hygiene. Bigger producers are likely to prove better at that.
High on the hog
Yet China’s pork companies are grabbing larger shares of a market that appears almost to have stopped growing. The OECD, a club of mostly rich countries, estimates that pork consumption in China has been more or less flat since 2014. It predicts growth of just under 1% a year over the next decade. If a country that eats so much of the stuff is indeed approaching peak pork, it hints at a big shift in global animal populations. Pigs will become a smaller presence on the global farm.
In 2015 animal products supplied 22% of the average Chinese person’s calorie intake, according to the FAO. That is only a shade below the average in rich countries (24%). “Unlike decades ago, there are no longer large chunks of the population out there that are not yet eating meat,” says Joel Haggard of the US Meat Export Federation, an industry group. And demography is beginning to prove a drag on demand. China’s population will start falling in about ten years’ time. The country is already ageing, which suppresses food consumption because old people eat less than young people do. UN demographers project that, between 2015 and 2050, the number of Chinese in their 20s will crash from 231m to 139m.
https://www.economist.com/international/2019/05/04/global-meat-eating-is-on-the-rise-bringing-surprising-benefits
Global meat-eating is on the rise, bringing surprising benefits
The Economist, May 2nd 2019 | BEIJING, DAKAR AND MUMBAI
As Africans get richer, they will eat more meat and live longer, healthier lives
THINGS WERE different 28 years ago, when Zhou Xueyu and her husband moved from the coastal province of Shandong to Beijing and began selling fresh pork. The Xinfadi agricultural market where they opened their stall was then a small outpost of the capital. Only at the busiest times of year, around holidays, might the couple sell more than 100kg of meat in a day. With China’s economic boom just beginning, pork was still a luxury for most people.
Ms Zhou now sells about two tonnes of meat a day. In between expert whacks of her heavy cleaver, she explains how her business has grown. She used to rely on a few suppliers in nearby provinces. Now the meat travels along China’s excellent motorway network from as far away as Heilongjiang, in the far north-east, and Sichuan, in the south-west. The Xinfadi market has changed, too. It is 100 times larger than when it opened in 1988, and now lies within Beijing, which has sprawled around it.
Between 1961 and 2013 the average Chinese person went from eating 4kg of meat a year to 62kg. Half of the world’s pork is eaten in the country. More liberal agricultural policies have allowed farms to produce more—in 1961 China was suffering under the awful experiment in collectivisation known as the “great leap forward”. But the main reason the Chinese are eating more meat is simply that they are wealthier.
[https://www.economist.com/sites/default/files/imagecache/640-width/images/print-edition/20190504_IRC006.png]
In rich countries people go vegan for January and pour oat milk over their breakfast cereal. In the world as a whole, the trend is the other way. In the decade to 2017 global meat consumption rose by an average of 1.9% a year and fresh dairy consumption by 2.1%—both about twice as fast as population growth. Almost four-fifths of all agricultural land is dedicated to feeding livestock, if you count not just pasture but also cropland used to grow animal feed. Humans have bred so many animals for food that Earth’s mammalian biomass is thought to have quadrupled since the stone age (see chart).
Barring a big leap forward in laboratory-grown meat, this is likely to continue. The Food and Agriculture Organisation (FAO), an agency of the UN, estimates that the global number of ruminant livestock (that is, cattle, buffalo, sheep and goats) will rise from 4.1bn to 5.8bn between 2015 and 2050 under a business-as-usual scenario. The population of chickens is expected to grow even faster. The chicken is already by far the most common bird in the world, with about 23bn alive at the moment compared with 500m house sparrows.
[https://www.economist.com/sites/default/files/imagecache/640-width/20190504_IRC637.png]
Meanwhile the geography of meat-eating is changing. The countries that drove the global rise in the consumption of animal products over the past few decades are not the ones that will do so in future. Tastes in meat are changing, too. In some countries people are moving from pork or mutton to beef, whereas in others beef is giving way to chicken. These shifts from meat to meat and from country to country are just as important as the overall pattern of growth. They are also more cheering. On a planetary scale, the rise of meat- and dairy-eating is a giant environmental problem. Locally, however, it can be a boon.
Over the past few decades no animal has bulked up faster than the Chinese pig. Annual pork production in that country has grown more than 30-fold since the early 1960s, to 55m tonnes. It is mostly to feed the legions of porkers that China imports 100m tonnes of soybeans every year—two-thirds of trade in that commodity. It is largely through eating more pork and dairy that Chinese diets have come to resemble Western ones, rich in protein and fat. And it is mostly because their diets have altered that Chinese people have changed shape. The average 12-year-old urban boy was nine centimetres taller in 2010 than in 1985, the average girl seven centimetres taller. Boys in particular have also grown fatter.
China’s pork suppliers are swelling, too. Three-fifths of pigs already come from farms that produce more than 500 a year, and Wan Hongjian, vice-president of WH Group Ltd, China’s largest pork producer, thinks the proportion will rise. Disease is one reason. African swine fever, a viral disease fatal to pigs though harmless to people, has swept China and has led to the culling of about 1m hogs. The virus is tough, and can be eradicated only if farms maintain excellent hygiene. Bigger producers are likely to prove better at that.
High on the hog
Yet China’s pork companies are grabbing larger shares of a market that appears almost to have stopped growing. The OECD, a club of mostly rich countries, estimates that pork consumption in China has been more or less flat since 2014. It predicts growth of just under 1% a year over the next decade. If a country that eats so much of the stuff is indeed approaching peak pork, it hints at a big shift in global animal populations. Pigs will become a smaller presence on the global farm.
In 2015 animal products supplied 22% of the average Chinese person’s calorie intake, according to the FAO. That is only a shade below the average in rich countries (24%). “Unlike decades ago, there are no longer large chunks of the population out there that are not yet eating meat,” says Joel Haggard of the US Meat Export Federation, an industry group. And demography is beginning to prove a drag on demand. China’s population will start falling in about ten years’ time. The country is already ageing, which suppresses food consumption because old people eat less than young people do. UN demographers project that, between 2015 and 2050, the number of Chinese in their 20s will crash from 231m to 139m.
Besides, pork has strong competitors. “All over China there are people eating beef at McDonald’s and chicken at KFC,” says Mr Wan. Another fashion—hotpot restaurants where patrons cook meat in boiling pots of broth at the table—is boosting consumption of beef and lamb. Last year China overtook Brazil to become the world’s second-biggest beef market after America, according to the United States Department of Agriculture. Australia exports so much beef to China that the Global Times, a pugnacious state-owned newspaper, has suggested crimping the trade to punish Australia for various provocations.
[https://www.economist.com/sites/default/files/imagecache/640-width/images/print-edition/20190504_IRC942.png]
The shift from pork to beef in the world’s most populous country is bad news for the environment. Because pigs require no pasture, and are efficient at converting feed into flesh, pork is among the greenest of meats. Cattle are usually much less efficient, although they can be farmed in different ways. And because cows are ruminants, they belch methane, a powerful greenhouse gas. A study of American farm data in 2014 estimated that, calorie for calorie, beef production requires three times as much animal feed as pork production and produces almost five times as much greenhouse gases. Other estimates suggest it uses two and a half times as much water.
Fortunately, even as the Chinese develop the taste for beef, Americans are losing it. Consumption per head peaked in 1976; around 1990 beef was overtaken by chicken as America’s favourite meat. Academics at Kansas State University linked that to the rise of women’s paid work. Between 1982 and 2007 a 1% increase in the female employment rate was associated with a 0.6% drop in demand for beef and a similar rise in demand for chicken. Perhaps working women think beef is more trouble to cook. Beef-eating has risen a little recently, probably because Americans are feeling wealthier. But chicken remains king.
Shifts like that are probably the most that can be expected in rich countries over the next few years. Despite eager predictions of a “second nutrition transition” to diets lower in meat and higher in grains and vegetables, Western diets are so far changing only in the details. Beef is a little less popular in some countries, but chicken is more so; people are drinking less milk but eating more cheese. The EU expects only a tiny decline in meat-eating, from 69.3kg per person to 68.7kg, between 2018 and 2030. Collectively, Europeans and Americans seem to desire neither more animal proteins nor fewer.
If the West is sated, and China is getting there, where is the growth coming from? One answer is India. Although Indians still eat astonishingly little meat—just 4kg a year—they are drinking far more milk, eating more cheese and cooking with more ghee (clarified butter) than before. In the 1970s India embarked on a top-down “white revolution” to match the green one. Dairy farmers were organised into co-operatives and encouraged to bring their milk to collection centres with refrigerated tanks. Milk production shot up from 20m tonnes in 1970 to 174m tonnes in 2018, making India the world’s biggest milk producer. The OECD expects India will produce 244m tonnes of milk in 2027.
All that dairy is both a source of national pride and a problem in a country governed by Hindu nationalists. Hindus hold cows to be sacred. Through laws, hectoring and “cow protection” squads, zealots have tried to prevent all Indians from eating beef or even exporting it to other countries. When cows grow too old to produce much milk, farmers are supposed to send them to bovine retirement homes. In fact, Indian dairy farmers seem to be ditching the holy cows for water buffalo. When these stop producing milk, they are killed and their rather stringy meat is eaten or exported. Much of it goes to Vietnam, then to China (often illegally, because of fears of foot-and-mouth disease).
But neither an Indian milk co-operative nor a large Chinese pig farm really represents the future of food. Look instead to a small, scruffy chicken farm just east of Dakar, the capital of Senegal. Some 2,000 birds squeeze into a simple concrete shed with large openings in the walls, which are covered with wire mesh. Though breezes blow through the building, the chickens’ droppings emit an ammoniac reek that clings to the nostrils. A few steps outside, the ground is brown with blood. Chickens have been stuffed into a makeshift apparatus of steel cones to protect their wings, and their necks cut with a knife.
Though it looks primitive, this represents a great advance over traditional west African farming methods. The chickens in the shed hardly resemble the variegated brown birds that can be seen pecking at the ground in any number of villages. They are commercial broilers—white creatures with big appetites that grow to 2kg in weight after just 35 days. All have been vaccinated against two widespread chicken-killers—Newcastle disease and infectious bursal disease. A vet, Mamadou Diouf, checks on them regularly (and chastises the farmers for killing too close to the shed). Mr Diouf says that when he started working in the district, in 2013, many farmers refused to let him in.
Official statistics suggest that the number of chickens in Senegal has increased from 24m to 60m since 2000. As people move from villages to cities, they have less time to make traditional stews—which might involve fish, mutton or beef as well as vegetables and spices, and are delicious. Instead they eat in cafés, or buy food that they can cook quickly. By the roads into Dakar posters advertise “le poulet prêt à cuire”, wrapped in plastic. Broiler farms are so productive that supermarket chickens are not just convenient but cheap.
Economic vegetarians
Many sub-Saharan Africans still eat almost no meat, dairy or fish. The FAO estimates that just 7% of people’s dietary energy comes from animal products, one-third of the proportion in China. This is seldom the result of religious or cultural prohibitions. If animal foods were cheaper, or if people had more money, they would eat more of them. Richard Waite of the World Resources Institute, an American think-tank, points out that when Africans move to rich countries and open restaurants, they tend to write meat-heavy menus.
Yet this frugal continent is beginning to sway the global food system. The UN thinks that the population of sub-Saharan Africa will reach 2bn in the mid-2040s, up from 1.1bn today. That would lead to a huge increase in meat- and dairy-eating even if people’s diets stayed the same. But they will not. The population of Kenya has grown by 58% since 2000, while the output of beef has more than doubled.
Africa already imports more meat each year than does China, and the OECD’s forecasters expect imports to keep growing by more than 3% a year. But most of the continent’s meat will probably be home-grown. The FAO predicts that in 2050 almost two out of every five ruminant livestock animals in the world will be African. The number of chickens in Africa is projected to quadruple, to 7bn.
This will strain the environment. Although African broilers and battery hens are more or less as productive as chickens anywhere, African cattle are the world’s feeblest. Not only are they poorly fed and seldom visited by vets; in many areas they are treated more as stores of wealth than producers of food. Africa has 23% of the world’s cattle but produces 10% of the world’s beef and just 5% of its milk.
Lorenzo Bellù of the FAO points out that herders routinely encroach on national parks and private lands in east Africa. He finds it hard to imagine that the continent’s hunger for meat will be supplied entirely by making farming more efficient. Almost certainly, much forest will be cut down. Other consequences will be global. Sub-Saharan Africans currently have tiny carbon footprints because they use so little energy—excluding South Africa, the entire continent produces about as much electricity as France. The armies of cattle, goats and sheep will raise Africans’ collective contribution to global climate change, though not to near Western or Chinese levels.
The low-productivity horns of Africa
People will probably become healthier, though. Many African children are stunted (notably small for their age) partly because they do not get enough micronutrients such as Vitamin A. Iron deficiency is startlingly common. In Senegal a health survey in 2017 found that 42% of young children and 14% of women are moderately or severely anaemic. Poor nutrition stunts brains as well as bodies.
Animal products are excellent sources of essential vitamins and minerals. Studies in several developing countries have shown that giving milk to schoolchildren makes them taller. Recent research in rural western Kenya found that children who regularly ate eggs grew 5% faster than children who did not; cow’s milk had a smaller effect. But meat—or, rather, animals—can be dangerous, too. In Africa chickens are often allowed to run in and out of people’s homes. Their eggs and flesh seem to improve human health; their droppings do not. One study of Ghana finds that childhood anaemia is more common in chicken-owning households, perhaps because the nippers caught more diseases.
Africans’ changing diets also create opportunities for local businesses. As cities grow, and as people in those cities demand more animal protein, national supply chains become bigger and more sophisticated. Animal breeders, hatcheries, vets and trucking companies multiply. People stop feeding kitchen scraps to animals and start using commercial feed. In Nigeria the amount of maize used for animal-feed shot up from 300,000 tonnes to 1.8m tonnes between 2003 and 2015.
You can see this on the outskirts of Dakar—indeed, the building is so big that you can hardly miss it. NMA Sanders, a feed-mill, turned out some 140,000 tonnes of chicken feed last year, up from 122,000 the year before, according to its director of quality, Cheikh Alioune Konaté. The warehouse floor is piled high with raw ingredients: maize from Morocco, Egypt and Brazil; soya cake from Mali; fishmeal from local suppliers. The mill has created many jobs, from the labourers who fill bags with pelleted feed to the technicians who run the computer system, and managers like Mr Konaté. Lorries come and go.
It is often said that sub-Saharan Africa lacks an industrial base, and this is true. Just one car in every 85 is made in Africa, according to the International Organisation of Motor Vehicle Manufacturers. But to look only for high-tech, export-oriented industries risks overlooking the continent’s increasingly sophisticated food-producers, who are responding to urban demand. Ideally, Africa would learn to fill shipping containers with clothes and gadgets. For now, there are some jobs to be had filling bellies with meat.
This article appeared in the International section of the print edition under the headline "A meaty planet"
When choosing among an overabundance of alternatives, participants express more positive feelings (i.e., higher satisfaction/confidence, lower regret & difficulty) if all the options of the choice set are associated with familiar brands
The Role of the Brand on Choice Overload. Raffaella Misuraca. Mind & Society, May 8 2019. https://link.springer.com/article/10.1007/s11299-019-00210-7
Abstract: Current research on choice overload has been mainly conducted with choice options not associated with specific brands. This study investigates whether the presence of brand names in the choice set affects the occurrence of choice overload. Across four studies, we find that when choosing among an overabundance of alternatives, participants express more positive feelings (i.e., higher satisfaction/confidence, lower regret and difficulty) when all the options of the choice set are associated with familiar brands, rather than unfamiliar brands or no brand at all. We also find that choice overload only appears in the absence of brand names, but disappears when all options contain brand names—either familiar or unfamiliar. Theoretical and practical implications are discussed.
Keywords: Choice overload; Brand; Consumer decisions; Decision-making
Retrofitting the 29 mn UK homes would cost £4.3 tn; if the energy bill of £2000 per year were to be halved, savings would be £29 bn/year; payback time would be 150 years
Decarbonisation and the Command Economy. Michael Kelly. GWPF, May 8 2019. https://www.thegwpf.com/decarbonisation-and-the-command-economy
The costs of retrofitting existing domestic buildings to improve energy efficiency and reduce CO2 emissions, compared with the savings on energy bills, represent a wholly unsatisfactory return on investment from a family perspective. A command economy would be required to make any serious inroads on the challenge as proposed by the Committee on Climate Change.
In its recent (February 2019) report, ‘UK Housing: Fit for the Future?’, the Committee on Climate Change argues that the 29 million existing homes must be made low-carbon, low-energy and resilient to climate change. This note is an abbreviated update of a study[1] I prepared after a three-year appointment as Chief Scientific Adviser to the Department for Communities and Local Government during 2006–9. I also delivered an ‘amateur’ prospectus to the Council, University, Business and Entrepreneurial sectors of the City of Cambridge, with an estimated bill of £0.7–1 billion to retrofit the 49,000 houses and 5,500 other buildings within the city boundaries and so halve net CO2 emissions.
On the basis of a presentation I made to the then Science Minister, Lord Drayson, in 2008, the Government launched a pilot ‘Retrofit for the Future’ programme, with £150,000 per house devoted to over 100 houses in the housing association sector. This programme, and its outcomes[2], did not rate a mention in the recent CCC report. However, I have visited one of these houses, and seen a 60% reduction in CO2 emissions (against a target of 80%) after the retrofit: full wall insulation, underfloor insulation, use of the newest appliances, etc. At this rate of spend, the 29 million existing homes across the UK would cost £4.3 trillion to retrofit. If the typical energy bill of £2,000 per year were to be halved, the saving would be £29 billion per year and the payback time would be 150 years! Who would lend or invest on that basis?
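The payback arithmetic quoted above can be checked in a few lines (a minimal sketch using only the figures given in the text; the variable names are my own):

```python
# Back-of-envelope check of the national retrofit payback arithmetic.
HOMES = 29_000_000       # existing UK homes
COST_PER_HOME = 150_000  # £, the Retrofit for the Future spend per house
ANNUAL_BILL = 2_000      # £, typical household energy bill

total_cost = HOMES * COST_PER_HOME          # ~£4.35 trillion
annual_saving = HOMES * (ANNUAL_BILL / 2)   # £29 billion/yr if bills halve
payback_years = total_cost / annual_saving  # 150 years

print(f"Total cost: £{total_cost / 1e12:.2f}tn")
print(f"Annual saving: £{annual_saving / 1e9:.0f}bn")
print(f"Payback: {payback_years:.0f} years")
```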
In fact, the £150,000 limit was set to ensure that the end target of an 80% reduction in CO2 emissions could be met[3], on the understanding that economies of scale and learning by doing would reduce the cost per household by at most 3–5-fold. However, how much cost reduction is required before private individuals would invest in improving the energy efficiency of their home? This is limited by the conditions set by lenders, who want a payback of 3–4 years on most investments, stretching to perhaps 7–8 years for infrastructure investments in the home. Since halving a £2,000 annual bill saves only £1,000 a year, this implies a lending ceiling of roughly £10,000 per house, which goes nowhere on energy efficiency measures and would not give a 50%, let alone 80%, energy reduction.
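The implied lending ceiling follows from the same numbers (again a sketch built only on the figures quoted in the text):

```python
# What a lender's payback horizon implies for retrofit spending per house,
# assuming a halved £2,000 annual energy bill, i.e. £1,000/yr saved.
ANNUAL_SAVING = 2_000 / 2  # £ per year

for payback_years in (3, 4, 7, 8):
    ceiling = ANNUAL_SAVING * payback_years
    print(f"{payback_years}-year payback -> max justified spend ~£{ceiling:,.0f}")
```

Even the most generous 8-year horizon caps spending at about £8,000, consistent with the roughly £10,000 ceiling cited above and far below the £150,000 per house spent in the pilot programme.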
Only if there is a Government direction to spend this scale of money on this issue will any significant inroads be made in energy reductions in existing houses. No political party would commit to this level of spend on a national retrofit programme until the need is pressing and urgent, not on a distant horizon. There is no ducking or diving from this conclusion.
The progress since the 2010 CCC report on housing[4] is nugatory, and a third report will no doubt be written in ten years’ time, making similar pleas.
Michael Kelly is Prince Philip Professor of Technology (emeritus) at the University of Cambridge and a former chief scientist at the Department of Communities and Local Government.
---
[1] See original article at the link above
[2] Rajat Gupta, Matt Gregg, Stephen Passmore & Geoffrey Stevens, ‘Intent and outcomes from the Retrofit for the Future programme: key lessons’, Building Research & Information, 43:4, 435–451, 2015. DOI: 10.1080/09613218.2015.1024042. https://www.tandfonline.com/doi/pdf/10.1080/09613218.2015.1024042
[3] Note that only 3 of the 45 projects for which full data were available actually met the 80% reduction target.
[4] https://www.theccc.org.uk/archive/aws2/0610/pr_meeting_carbon_budgets_chapter3_progress_reducing_emmissions_buildings_industry.pdf
The costs of retrofitting exiting domestic buildings to improve energy efficiency and reduce CO2 emissions, compared with the savings on energy bills, represent a wholly unsatisfactory return on investment from a family perspective. A command economy would be required to make any serious inroads on the challenge as proposed by the Committee on Climate Change.
In its recent (February 2019) report, ‘UK Housing: Fit for the Future?’, the Committee on Climate Change argues that the 29 million existing homes must be made low-carbon, low-energy and resilient to climate change. This note is an abbreviated update of a study[1] I prepared subsequent to a three-year appointment as Chief Scientific Adviser to the Department for Communities and Local Government during 2006–9. I also delivered an ‘amateur’ prospectus to the Council, University, Business and Entrepreneurial sectors of the City of Cambridge, with an estimated bill of £0.7–1 billion to retrofit the 49,000 houses and 5500 other buildings within the city boundaries to halve the net CO2 emissions.
On the basis of a presentation I made to the then Science Minister, Lord Drayson, in 2008, the Government launched a pilot ‘Retrofit for the Future’ programme, with £150,000 per house devoted to over 100 houses in the housing association sector. This programme, and its outcomes[2], did not rate a mention in the recent CCC report. However, I have visited one of these houses and seen a 60% reduction in CO2 emissions after the retrofit (the target was 80%): full wall insulation, underfloor insulation, use of the newest appliances etc. At this rate of spend, the 29 million existing homes across the UK would cost £4.3 trillion to retrofit. If the typical energy bill of £2000 per year were halved, the saving would be £29 billion per year and the payback time would be 150 years! Who would lend or invest on that basis?
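The payback arithmetic above can be checked directly. All figures in this sketch (per-house cost, housing stock, typical energy bill) are the ones quoted in the article:

```python
# Back-of-envelope check of the national retrofit payback arithmetic.
# All input figures are taken from the article itself.
homes = 29_000_000               # existing UK homes
cost_per_house = 150_000         # GBP spent per house in Retrofit for the Future
total_cost = homes * cost_per_house         # GBP 4.35 trillion

annual_bill = 2_000              # typical household energy bill, GBP/year
annual_saving = homes * annual_bill / 2     # bill halved: GBP 29 billion/year

payback_years = total_cost / annual_saving  # 150 years
print(f"Total cost: GBP {total_cost:,}; payback: {payback_years:.0f} years")
```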
In fact, the £150,000 limit was set to ensure that the end target of 80% CO2 emissions could be met[3], on the understanding that economies of scale and learning by doing would reduce the cost per household by at most a factor of 3–5. However, how much of a cost reduction is required before private individuals would invest in improving the energy efficiency of their home? Any such investment would be limited by the conditions set by lenders, who want a payback of 3–4 years on most investments, stretching to perhaps 7–8 years on infrastructure investments in the home. The implied lending ceiling of £10,000 per house goes nowhere on energy efficiency measures and would not deliver a 50%, let alone 80%, energy reduction.
Only if there is a Government direction to spend this scale of money on this issue will any significant inroads be made in energy reductions in existing houses. No political party would commit to this level of spend on a national retrofit programme until the need is pressing and urgent, not on a distant horizon. There is no ducking or diving from this conclusion.
The progress since the 2010 CCC report on housing[4] is nugatory, and a third report with similar pleas will no doubt be written in 10 years' time.
Michael Kelly is Prince Philip Professor of Technology (emeritus) at the University of Cambridge and a former chief scientist at the Department of Communities and Local Government.
---
[1] See original article at the link above
[2] Rajat Gupta, Matt Gregg, Stephen Passmore & Geoffrey Stevens, ‘Intent and outcomes from the Retrofit for the Future programme: key lessons’, Building Research & Information, 43:4, 435-451, 2015. DOI: 10.1080/09613218.2015.1024042. See https://www.tandfonline.com/doi/pdf/10.1080/09613218.2015.1024042
[3] Note that only 3 of the 45 projects for which full data were available actually met the 80% reduction target.
[4] https://www.theccc.org.uk/archive/aws2/0610/pr_meeting_carbon_budgets_chapter3_progress_reducing_emmissions_buildings_industry.pdf
Ray of hope: Hopelessness Increases Preferences for Brighter Lighting
Francis, G., & Thunell, E. (2019). Excess Success in “Ray of hope: Hopelessness Increases Preferences for Brighter Lighting”. Collabra: Psychology, 5(1), 22. DOI: http://doi.org/10.1525/collabra.213
Abstract: Dong, Huang, and Zhong (2015) report five successful experiments linking brightness perception with the feeling of hopelessness. They argue that a gloomy future is psychologically represented as darkness, not just metaphorically but as an actual perceptual bias. Based on multiple results, they conclude that people who feel hopeless perceive their environment as darker and therefore prefer brighter lighting than controls. Conversely, dim lighting caused participants to feel more hopeless. However, the experiments succeed at a rate much higher than predicted by the magnitude of the reported effects. Based on the reported statistics, the estimated probability of all five experiments being fully successful, if replicated with the same sample sizes, is less than 0.016. This low rate suggests that the original findings are (perhaps unintentionally) the result of questionable research practices or publication bias. Readers should therefore be skeptical about the original results and conclusions. Finally, we discuss how to design future studies to investigate the relationship between hopelessness and brightness.
Keywords: Excess success , publication bias , brightness perception , perceptual bias , statistics
Differences in how men and women describe their traits are typically larger in highly gender egalitarian cultures; replicated in one of the largest number of cultures yet investigated—58 nations of the ISDP-2 Project
Why Sometimes a Man is more like a Woman. David P Schmitt. Chapter 12 of In Praise of An Inquisitive Mind. Anu Realo, Ed. Univ. of Tartu Press, 2019. https://warwick.ac.uk/fac/sci/psych/people/arealo/arealo/publications/allik_festschrift_2019.pdf
Among his many achievements, Jüri Allik and his colleagues were among the first to document a cross-cultural “gender paradox” in people’s self-reported personality traits. Namely, differences in how men and women describe their traits are typically larger and more conspicuous in highly gender egalitarian cultures (e.g., across Scandinavia where women and men experience more similar gender roles, sex role socialization, and sociopolitical gender equity) compared to less gender egalitarian cultures (e.g., across Africa or South/Southeast Asia). It is my honor to celebrate Jüri Allik’s sterling career with this chapter on sex differences in personality traits across one of the largest number of cultures yet investigated—58 nations of the International Sexuality Description Project-2 (ISDP-2). In this dataset, the gender paradoxical findings were replicated, with sex differences in Big Five personality traits being demonstrably larger in more gender egalitarian cultures. In our current era of most findings from classic psychological science failing to replicate, this successful replication serves as a testament to Jüri Allik’s status as among the most rigorous and prescient scientists within the field of personality psychology.
Politically incorrect paper: Given that sex-egalitarian countries tend to have the greatest sex differences in personality and occupational choices, sex-specific policies (increasing vacancies for the sex with the lower hire proportion) may not be effective:
Sex and Care: The Evolutionary Psychological Explanations for Sex Differences in Formal Care Occupations. Peter Kay Chai Tay, Yi Yuan Ting and Kok Yang Tan. Front. Psychol., April 17 2019. https://www.bipartisanalliance.com/2019/04/incorrect-paper-given-that-sex.html
2002-2016: Binge drinking decreased substantially among US adolescents across time, age, gender, and race/ethnicity; alcohol abstention increased among US adolescents over the past 15 years
Trends in binge drinking and alcohol abstention among adolescents in the US, 2002-2016. Trenette Clark Goings et al. Drug and Alcohol Dependence, May 8 2019. https://doi.org/10.1016/j.drugalcdep.2019.02.034
Highlights
• Binge drinking decreased substantially among US adolescents across time
• Binge drinking decreased across age, gender, and race/ethnicity
• Alcohol abstention increased among US adolescents over the past 15 years
Abstract
Background: Binge drinking accounts for several adverse health, social, legal, and academic outcomes among adolescents. Understanding trends and correlates of binge drinking and alcohol abstention has important implications for policy and programs and was the aim of this study. The current study examined trends in adolescent binge drinking and alcohol abstention by age, gender, and race/ethnicity over a 15-year period.
Methods: Respondents between the ages of 12 and 17 years who participated in the National Survey on Drug Use and Health (NSDUH) between 2002 and 2016 were included in the sample of 258,309. Measures included binge drinking, alcohol abstention, and co-morbid factors (e.g., marijuana, other illicit drugs), and demographic factors.
Results: Logistic regression analyses were conducted to examine the significance of trend changes by sub-groups while controlling for co-morbid and demographic factors. Findings indicated that binge drinking decreased substantially among adolescents in the US over the last 15 years. This decrease was shown among all age, gender, and racial/ethnic groups. In 2002, Year 1 of the study, 26% of 17-year-olds reported past-month binge drinking; in 2016, past-month binge drinking dropped to 12%. Findings also indicated comparable increases in the proportion of youth reporting abstention from alcohol consumption across all subgroups. Black youth reported substantially lower levels of binge alcohol use and higher levels of abstention, although the gap between Black, Hispanic and White youth narrowed substantially between 2002 and 2016.
Conclusion: Study findings are consistent with those of other research showing declines in problem alcohol- use behavior among youth.
Voice of Authority: Professionals Lower Their Vocal Frequencies When Giving Expert Advice
Voice of Authority: Professionals Lower Their Vocal Frequencies When Giving Expert Advice. Piotr Sorokowski et al. Journal of Nonverbal Behavior, May 7 2019. https://link.springer.com/article/10.1007/s10919-019-00307-0
Abstract: Acoustic analysis and playback studies have greatly advanced our understanding of between-individual differences in nonverbal communication. Yet, researchers have only recently begun to investigate within-individual variation in the voice, particularly how people modulate key vocal parameters across various social contexts, with most of this research focusing on mating contexts. Here, we investigated whether men and women modulate the frequency components of their voices in a professional context, and how this voice modulation affects listeners’ assessments of the speakers’ competence and authority. Research assistants engaged scientists working as faculty members at various universities in two types of speech conditions: (1) Control speech, wherein the subjects were asked how to get to the administrative offices on that given campus; and (2) Authority speech, wherein the same subjects were asked to provide commentary for a radio program for young scholars titled, “How to become a scientist, and is it worth it?”. Our results show that male (n = 27) and female (n = 24) faculty members lowered their mean voice pitch (measured as fundamental frequency, F0) and vocal tract resonances (measured as formant position, Pf) when asked to provide their expert opinion compared to when giving directions. Notably, women lowered their mean voice pitch more than did men (by 33 Hz vs. 14 Hz) when giving expert advice. The results of a playback experiment further indicated that foreign-speaking listeners judged the voices of faculty members as relatively more competent and more authoritative based on authority speech than control speech, indicating that the observed nonverbal voice modulation effectively altered listeners’ perceptions. Our results support the prediction that people modulate their voices in social contexts in ways that are likely to elicit favorable social appraisals.
Keywords: Authority Fundamental frequency Voice pitch Formant frequencies Voice modulation
Spouses' Faces Are Similar but Do Not Become More Similar with Time
Tea-mangkornpan, Pin Pin, and Michal Kosinski. 2019. “Spouses' Faces Are Similar but Do Not Become More Similar with Time.” PsyArXiv. May 8. doi:10.31234/osf.io/d7hpj
Abstract: The convergence in physical appearance hypothesis posits that long-term partners’ faces become more similar with time as a function of the shared environment, diet, and synchronized facial expressions. While this hypothesis has been widely disseminated in psychological literature, it is supported by a single study of 12 married couples. Here, we examine this hypothesis using the facial images of 517 couples taken at the beginning of their marriage and 20 or more years later. Their facial similarity is estimated using two independent methods: human judgments and a facial recognition algorithm. The results show that while spouses’ faces tend to be similar at marriage, they do not converge over time. In fact, they become slightly less similar. These findings bring facial appearance in line with other personal characteristics—such as personality, intelligence, interests, attitudes, values, and well-being—through which spouses show initial similarity but no convergence over time.
Tuesday, May 7, 2019
Is Technology Widening the Gender Gap? Female workers are at a significantly higher risk for displacement by automation than male workers; probability of automation is lower for younger cohorts of women, and for managers
Is Technology Widening the Gender Gap? Automation and the Future of Female Employment. Mariya Brussevich, Era Dabla-Norris, Salma Khalid. IMF Working Paper No. 19/91, May 2019. https://www.imf.org/en/Publications/WP/Issues/2019/05/06/Is-Technology-Widening-the-Gender-Gap-Automation-and-the-Future-of-Female-Employment-46684
Summary: Using individual level data on task composition at work for 30 advanced and emerging economies, we find that women, on average, perform more routine tasks than men, and such tasks are more prone to automation. To quantify the impact on jobs, we relate data on task composition at work to occupation level estimates of probability of automation, controlling for a rich set of individual characteristics (e.g., education, age, literacy and numeracy skills). Our results indicate that female workers are at a significantly higher risk for displacement by automation than male workers, with 11 percent of the female workforce at high risk of being automated given the current state of technology, albeit with significant cross-country heterogeneity. The probability of automation is lower for younger cohorts of women, and for those in managerial positions.
Can Successful Schools Replicate? It seems they can: Replication charter schools generate large achievement gains on par with those produced by their parent campuses; highly standardized practices in place seem crucial
Can Successful Schools Replicate? Scaling Up Boston's Charter School Sector. Sarah Cohodes, Elizabeth Setren, Christopher R. Walters. NBER Working Paper No. 25796, May 2019. https://www.nber.org/papers/w25796
Abstract: Can schools that boost student outcomes reproduce their success at new campuses? We study a policy reform that allowed effective charter schools in Boston, Massachusetts to replicate their school models at new locations. Estimates based on randomized admission lotteries show that replication charter schools generate large achievement gains on par with those produced by their parent campuses. The average effectiveness of Boston’s charter middle school sector increased after the reform despite a doubling of charter market share. An exploration of mechanisms shows that Boston charter schools reduce the returns to teacher experience and compress the distribution of teacher effectiveness, suggesting the highly standardized practices in place at charter schools may facilitate replicability.
Decreased female fidelity alters male behavior in a feral horse population; the stallions engage in more frequent contests, in more escalated contests, and spend more time vigilant
Decreased female fidelity alters male behavior in a feral horse population managed with immunocontraception. Maggie M. Jones, Cassandra M. V. Nuñez. Applied Animal Behaviour Science, Volume 214, May 2019, Pages 34-41, https://doi.org/10.1016/j.applanim.2019.03.005
Highlights
• Stallions experiencing increased female turnover engage in more frequent contests.
• These stallions engage in more escalated contests and spend more time vigilant.
• Habitat visibility but not female turnover influenced male-female aggression.
• Home range overlap also influenced male-male interactions.
• Immunocontraception management indirectly affects stallion behavior.
Abstract: In social species like the feral horse (Equus caballus), changes in individual behavior are likely to affect associated animals. On Shackleford Banks, North Carolina, USA, mares treated with the contraceptive agent porcine zona pellucida (PZP) demonstrate decreased fidelity to their band stallions. Here, we assess the effects of such decreased mare fidelity on male behavior and address potential interactions with habitat visibility, a component of the environment shown to significantly affect feral horse behavior. We compared the frequency and escalation of male-male contests, rates of aggressive and reproductive behaviors directed toward females, and the percentage of time spent vigilant among males experiencing varying levels of mare group changing behavior. We found that regardless of habitat visibility, males experiencing more female group changes engaged in contests at a higher rate (P = 0.003) and escalation (P = 0.029) and spent more time vigilant (P = 0.014) than males experiencing fewer group changes. However, while visibility had a positive effect on aggression directed by stallions toward mares (P = 0.013), female group changing behavior did not influence male-female aggressive or reproductive behaviors (P > 0.1), showing that decreases in mare fidelity altered male-male but not male-female interactions. These results have important implications for feral horse management; PZP-contracepted mares demonstrating prolonged decreases in stallion fidelity may have a disproportionate effect on male behavior. Moreover, our results shed light on the relative influences of female behavior and environmental factors like habitat visibility on male behavior. Such findings can ultimately improve our understanding of how the social and physical environments interact to shape male-male and male-female interactions.
Smokers self-directing their investment trade more frequently, exhibit more biases & achieve lower portfolio returns; those aware of their limited levels of self-control delegate decision making to professional advisors & fund managers
Smoking hot portfolios? self-control and investor decisions. Charline Uhr, Steffen Meyer, Andreas Hackethal. Goethe Universitat's SAFE working paper series; No. 245, March 2019. https://ssrn.com/abstract=3347625
Self-control failure is among the major pathologies (Baumeister et al. (1994)) affecting individual investment decisions, which has hardly been measurable in empirical research. We use cigarette addiction identified from checking account transactions to proxy for low self-control and compare over 5,000 smokers to 14,000 nonsmokers. Smokers self-directing their investment trade more frequently, exhibit more biases and achieve lower portfolio returns. We also find that smokers, some of whom might be aware of their limited levels of self-control, exhibit a higher propensity than nonsmokers to delegate decision making to professional advisors and fund managers. We document that such precommitments work successfully.
Check also: Who trades cryptocurrencies, how do they trade it, and how do they perform? Evidence from brokerage accounts. Tim Hasso, Matthias Pelster, Bastian Breitmayer. Journal of Behavioral and Experimental Finance, May 7 2019. https://www.bipartisanalliance.com/2019/05/men-are-more-likely-to-engage-in.html
Neural correlates of trait-like well-being (i.e., the propensity to live according to one’s true nature): Activation & volume of anterior cingulate cortex, orbitofrontal cortex, posterior cingulate cortex, superior temporal gyrus, & thalamus
The neural correlates of well-being: A systematic review of the human neuroimaging and neuropsychological literature. Marcie L. King. Cognitive, Affective, & Behavioral Neuroscience, May 6 2019. https://link.springer.com/article/10.3758/s13415-019-00720-4
Abstract: What it means to be well and to achieve well-being is fundamental to the human condition. Scholars of many disciplines have attempted to define well-being and to investigate the behavioral and neural correlates of well-being. Despite many decades of inquiry into well-being, much remains unknown. The study of well-being has evolved over time, shifting in focus and methodology. Many recent investigations into well-being have taken a neuroscientific approach to try to bolster understanding of this complex construct. A growing body of literature has directly examined the association between well-being and the brain. The current review synthesizes the extant literature regarding the neural correlates of trait-like well-being (i.e., the propensity to live according to one’s true nature). Although reported associations between well-being and the brain varied, some notable patterns were evidenced in the literature. In particular, the strongest and most consistent association emerged between well-being and the anterior cingulate cortex. In addition, patterns of association between well-being and the orbitofrontal cortex, posterior cingulate cortex, superior temporal gyrus, and thalamus emerged. These regions largely comprise the salience and default mode networks, suggesting a possible relationship between well-being and brain networks involved in the integration of relevant and significant stimuli. Various methodological concerns are addressed and recommendations for future research are discussed.
Keywords: Well-being; Neural correlates; Neuroimaging
Men are more likely to engage in cryptocurrency trading, trade more frequently, and trade more speculatively; as a result, men realize lower returns than women
Who trades cryptocurrencies, how do they trade it, and how do they perform? Evidence from brokerage accounts. Tim Hasso, Matthias Pelster, Bastian Breitmayer. Journal of Behavioral and Experimental Finance, May 7 2019. https://doi.org/10.1016/j.jbef.2019.04.009
Abstract: We investigate the demographic characteristics, trading patterns, and performance of 465,926 brokerage accounts with respect to cryptocurrency trading. We find that cryptocurrency trading became increasingly popular across individuals of all groups of age, gender, and trading patterns. Yet men are more likely to engage in cryptocurrency trading, trade more frequently, and trade more speculatively. As a result, men realize lower returns. Furthermore, we find that investors vary their trading patterns across different asset classes.
Social media’s enduring effect on adolescent life satisfaction
Social media’s enduring effect on adolescent life satisfaction. Amy Orben, Tobias Dienlin, and Andrew K. Przybylski. Proceedings of the National Academy of Sciences, May 6, 2019 https://doi.org/10.1073/pnas.1902058116
Abstract: In this study, we used large-scale representative panel data to disentangle the between-person and within-person relations linking adolescent social media use and well-being. We found that social media use is not, in and of itself, a strong predictor of life satisfaction across the adolescent population. Instead, social media effects are nuanced, small at best, reciprocal over time, gender specific, and contingent on analytic methods.
Keywords: social media; adolescents; life satisfaction; longitudinal; random-intercept cross-lagged panel models
Does the increasing amount of time adolescents devote to social media negatively affect their satisfaction with life? Set against the rapid pace of technological innovation, this simple question has grown into a pressing concern for scientists, caregivers, and policymakers. Research, however, has not kept pace (1). Focused on cross-sectional relations, scientists have few means of parsing longitudinal effects from artifacts introduced by common statistical modeling methodologies (2). Furthermore, the volume of data under analysis, paired with unchecked analytical flexibility, enables selective research reporting, biasing the literature toward statistically significant effects (3, 4). Nevertheless, trivial trends are routinely overinterpreted by those under increasing pressure to rapidly craft evidence-based policies.
Our understanding of social media effects is predominately shaped by analyses of cross-sectional associations between social media use measures and self-reported youth outcomes. Studies highlight modest negative correlations (3), but many of their conclusions are problematic. It is not tenable to assume that observations of between-person associations—comparing different people at the same time point—translate into within-person effects—tracking an individual, and what affects them, over time (2). Drawing this flawed inference risks misinforming the public or shaping policy on the basis of unsuitable evidence.
To disentangle between-person associations from within-person effects, we analyzed an eight-wave, large-scale, and nationally representative panel dataset (Understanding Society, the UK Household Longitudinal Study, 2009–2016) using random-intercept cross-lagged panel models (2). We adopted a specification curve analysis framework (3, 5)—a computational method which minimizes the risk that a specific profile of analytical decisions yields false-positive results. In place of a single model, we tested a wide range of theoretically grounded analysis options [data is available on the UK data service (6); code is available on the Open Science Framework (7)]. The University of Essex Ethics Committee has approved all data collection on Understanding Society main study and innovation panel waves, including asking consent for all data linkages except to health records not used in this study.
While 12,672 10- to 15-y-olds took part, the precise number of participants for any analysis varied by age and whether full or imputed data were used (range, n = 539 to 5,492; median, n = 1,699). Variables included (i) a social media use measure: “How many hours do you spend chatting or interacting with friends through a social website like [Bebo, Facebook, Myspace] on a normal school day?” (5-point scale); (ii) six statements reflecting different life satisfaction domains (7-point visual analog scale); and (iii) seven child-, caregiver-, and household-level control variables used in prior work (3). We report standardized coefficients for all 2,268 distinct analysis options considered.
We first examined between-person associations (Fig. 1, Left), addressing the question Do adolescents using more social media show different levels of life satisfaction compared with adolescents using less? Across all operationalizations, the median cross-sectional correlation was negative (ψ = −0.13), an effect judged as small by behavioral scientists (8). Next, we examined the within-person effects of social media use on life satisfaction (Fig. 1, Center) and of life satisfaction on social media use (Fig. 1, Right), asking the questions Does an adolescent using social media more than they do on average drive subsequent changes in life satisfaction? and To what extent is the relation reciprocal? Both median longitudinal effects were trivial in size (social media predicting life satisfaction, β = −0.05; life satisfaction predicting social media use, β = −0.02).
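The paper's headline strategy can be sketched in miniature: instead of one regression, fit every theoretically defensible specification (here, every subset of a few control variables) and summarize the distribution of standardized coefficients by its median. This toy simulation is not the authors' code; the variable names, the simulated effect size, and the three controls are illustrative assumptions.

```python
# Miniature specification curve analysis on simulated data: one OLS fit per
# subset of optional controls, summarized by the median standardized beta.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n = 2000
social_media = rng.normal(size=n)
controls = {name: rng.normal(size=n) for name in ["age", "income", "siblings"]}
# Simulated outcome with a small negative effect, echoing the paper's findings.
life_sat = -0.05 * social_media + 0.2 * controls["income"] + rng.normal(size=n)

def std_beta(y, x, extra_cols):
    """Standardized coefficient of x from OLS of y on [1, x, extra_cols]."""
    X = np.column_stack([np.ones_like(x), x] + extra_cols)
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1] * x.std() / y.std()

betas = []
for k in range(len(controls) + 1):
    for subset in combinations(controls, k):  # every subset of controls
        betas.append(std_beta(life_sat, social_media,
                              [controls[c] for c in subset]))

print(f"specifications: {len(betas)}, median beta: {np.median(betas):.3f}")
```

With three optional controls this yields 2^3 = 8 specifications; the real analysis spanned 2,268 analysis options, but the summary logic is the same.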
Who is susceptible in three false memory tasks? No one type of person seems especially prone, or especially resilient, to the ubiquity of memory distortion
Who is susceptible in three false memory tasks? Rebecca M. Nichols & Elizabeth F. Loftus. Memory, May 2 2019. https://doi.org/10.1080/09658211.2019.1611862
ABSTRACT: Decades of research show that people are susceptible to developing false memories. But if they do so in one task, are they likely to do so in a different one? The answer: “No”. In the current research, a large number of participants took part in three well-established false memory paradigms (a misinformation task, the Deese-Roediger-McDermott [DRM] list learning paradigm, and an imagination inflation exercise) as well as completed several individual difference measures. Results indicate that many correlations between false memory variables in all three inter-paradigm comparisons are null, though some small, positive, significant correlations emerged. Moreover, very few individual difference variables significantly correlated with false memories, and any significant correlations were rather small. It seems likely, therefore, that there is no false memory “trait”. In other words, no one type of person seems especially prone, or especially resilient, to the ubiquity of memory distortion.
KEYWORDS: False memory, memory distortion, misinformation, DRM, imagination inflation, individual differences, false memory susceptibility
Is the Global Prevalence Rate of Adult Mental Illness Increasing over Time? No, the prevalence increase of adult mental illness is small, mainly related to demographic changes
Richter, Dirk, Abbie Wall, Ashley Bruen, and Richard Whittington. 2019. “Is the Global Prevalence Rate of Adult Mental Illness Increasing over Time? Systematic Review and Meta-analysis of Repeated Cross-sectional Population Surveys.” PsyArXiv. May 6. psyarxiv.com/5a7ye
Abstract
Objectives: The question whether mental illness prevalence rates are increasing is a controversially debated topic. Epidemiological articles and review publications that look into this research issue are often compromised by methodological problems. The present study aimed at using a meta-analysis technique that is usually applied for the analysis of intervention studies to achieve more transparency and statistical precision.
Methods: We searched Pubmed, PsycInfo, CINAHL, Google Scholar and reference lists for repeated cross-sectional population studies on prevalence rates of adult mental illness based on ICD- or DSM-based diagnoses, symptom scales and distress scales that used the same methodological approach at least twice in the same geographical region. The study is registered with PROSPERO (CRD42018090959).
Results: We included 44 samples from 42 publications, representing 1,035,697 primary observations for the first time point and 783,897 primary observations for the second and last time point. Controlling for a hierarchical data structure, we found an overall global prevalence increase odds ratio of 1.179 (95%-CI: 1.065 – 1.305). A multivariate meta-regression suggested relevant associations with methodological characteristics of included studies.
Conclusions: We conclude that the prevalence increase of adult mental illness is small and we assume that this increase is mainly related to demographic changes.
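A quick way to read the reported result (OR 1.179, 95% CI 1.065–1.305) is to remember that meta-analytic odds ratios are pooled on the log scale, so the 95% interval is log(OR) ± 1.96·SE, exponentiated back. This arithmetic check is my own illustration, not taken from the paper:

```python
# Recover the implied standard error of the pooled log-odds-ratio from each
# side of the reported interval; the two estimates should agree if the CI is
# symmetric on the log scale, as expected for a pooled log-OR.
import math

or_, lo, hi = 1.179, 1.065, 1.305
se_from_lo = (math.log(or_) - math.log(lo)) / 1.96
se_from_hi = (math.log(hi) - math.log(or_)) / 1.96

print(f"implied SE: {se_from_lo:.3f} vs {se_from_hi:.3f}")
```

Both sides imply an SE of about 0.05 on the log scale, consistent with a single pooled estimate whose interval only just excludes an odds ratio of 1.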
Monday, May 6, 2019
Crime can be successfully reduced by changing the situational environment that potential victims and offenders face: They report the first experimental evidence on the effect of street lighting on crime
Reducing Crime Through Environmental Design: Evidence from a Randomized Experiment of Street Lighting in New York City. Aaron Chalfin, Benjamin Hansen, Jason Lerner, Lucie Parker. NBER Working Paper No. 25798, May 2019. https://www.nber.org/papers/w25798
Abstract: This paper offers experimental evidence that crime can be successfully reduced by changing the situational environment that potential victims and offenders face. We focus on a ubiquitous but surprisingly understudied feature of the urban landscape – street lighting – and report the first experimental evidence on the effect of street lighting on crime. Through a unique public partnership in New York City, temporary streetlights were randomly allocated to public housing developments from March through August 2016. We find evidence that communities that were assigned more lighting experienced sizable reductions in crime. After accounting for potential spatial spillovers, we find that the provision of street lights led, at a minimum, to a 36 percent reduction in nighttime outdoor index crimes.
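Crime counts are typically modeled with log-linear (Poisson-type) regressions, where a treatment coefficient β maps to a percent change of exp(β) − 1. As an illustration of the scale only (not the authors' estimation code), the reported 36 percent reduction corresponds to:

```python
# Convert between a log-linear regression coefficient and a percent change:
# a 36% reduction in crime counts implies beta = log(1 - 0.36).
import math

beta = math.log(1 - 0.36)        # coefficient implying a 36% reduction
pct_change = math.exp(beta) - 1  # back to the percentage scale

print(f"beta = {beta:.3f}, percent change = {pct_change:.0%}")
```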
As experiences of pleasure & displeasure, hedonics are omnipresent in daily life; as core processes, they accompany emotions, motivation, bodily states, &c; optimal hedonic functioning seems the basis of well-being & aesthetic experiences
The Role of Hedonics in the Human Affectome. Susanne Becker et al. Neuroscience & Biobehavioral Reviews, May 6 2019. https://doi.org/10.1016/j.neubiorev.2019.05.003
Highlights
• As experiences of pleasure and displeasure, hedonics are omnipresent in daily life.
• As core processes, hedonics accompany emotions, motivation, bodily states, etc.
• Orbitofrontal cortex and nucleus accumbens appear to be hedonic brain hubs.
• Several mental illnesses are characterized by altered hedonic experiences.
• Optimal hedonic functioning seems the basis of well-being and aesthetic experiences.
Abstract: Experiencing pleasure and displeasure is a fundamental part of life. Hedonics guide behavior, affect decision-making, induce learning, and much more. As the positive and negative valence of feelings, hedonics are core processes that accompany emotion, motivation, and bodily states. Here, the affective neuroscience of pleasure and displeasure that has largely focused on the investigation of reward and pain processing, is reviewed. We describe the neurobiological systems of hedonics and factors that modulate hedonic experiences (e.g., cognition, learning, sensory input). Further, we review maladaptive and adaptive pleasure and displeasure functions in mental disorders and well-being, as well as the experience of aesthetics. As a centerpiece of the Human Affectome Project, language used to express pleasure and displeasure was also analyzed, and showed that most of these analyzed words overlap with expressions of emotions, actions, and bodily states. Our review shows that hedonics are typically investigated as processes that accompany other functions, but the mechanisms of hedonics (as core processes) have not been fully elucidated.
---
2.2.1.1 Animal work
Early investigations of the functional neuroanatomy of pleasure and reward in mammals stemmed from the seminal work by Olds and Milner (Olds & Milner, 1954). A series of pioneering experiments showed that rodents tend to increase instrumental lever-pressing to deliver brief, direct intracranial electrical stimulation of septal nuclei. Interestingly, rodents and other nonhuman animals would maintain this type of self-stimulation for hours, working until reaching complete physical exhaustion (Olds, 1958). This work led to the popular description of the neurotransmitter dopamine as the ‘happy hormone’.
However, subsequent electrophysiological and voltammetric assessments as well as microdialysis clearly show that dopamine does not drive the hedonic experience of reward (liking), but rather the motivation to obtain such reward (wanting), that is, the instrumental behavior of reward-driven actions (Berridge & Kringelbach, 2015; Wise, 1978). Strong causal evidence for this idea has emerged from rodent studies, including pharmacological blockade of dopamine receptors or genetic knockdown mutations. When dopamine is depleted or dopamine neurons are destroyed, reward-related instrumental behavior significantly decreases, with animals becoming oblivious to previously rewarding stimuli (Baik, 2013; Schultz, 1998). In contrast, hyperdopaminergic mice with dopamine transporter knockdown mutations exhibit markedly enhanced acquisition and greater incentive performance for rewards (Pecina et al., 2003). These studies show that phasic release of dopamine specifically acts as a signal of incentive salience, which underlies reinforcement learning (Salamone & Correa, 2012; Schultz, 2013). Such dopaminergic functions have been related to the mesocorticolimbic circuitry: Microinjections to pharmacologically stimulate dopaminergic neurons in specific sub-regions of the nucleus accumbens (NA) selectively enhance wanting with no effects on liking. However, microinjections to stimulate opioidergic neurons increase the hedonic impact of sucrose reward and wanting responses, likely caused by opioid-induced dopamine release (Johnson & North, 1992). Importantly, different populations of neurons in the ventral pallidum (as part of the mesocorticolimbic circuitry) specifically track the pharmacologically induced enhancements of hedonic and motivational signals (Smith et al., 2011).
The double dissociation of the neural systems underlying wanting and liking has been confirmed many times (Laurent et al., 2012), leading to the concept that positive hedonic responses (liking) are specifically mediated in the brain by endogenous opioids in ‘hedonic hot-spots’ (Pecina et al., 2006). The existence of such hedonic hot-spots has been confirmed in the NA, ventral pallidum, and parabrachial nucleus of the pons (Berridge & Kringelbach, 2015). In addition, some evidence suggests further hot-spots in the insula and orbitofrontal cortex (OFC; Castro & Berridge, 2017).
Hedonic hot-spots in the brain might be important not only to generate the feeling of pleasure, but also to maintain a certain level of pleasure. In line with this assumption, damage to hedonic hot-spots in the ventral pallidum can transform pleasure into displeasure, illustrating that there is no clear-cut border between neurobiological mechanisms of pleasure and displeasure but rather many intersections. For example, sweet sucrose taste, normally inducing strong liking responses, elicits negative and disgust reactions in rats after damage to a hedonic hot-spot in the ventral pallidum (Ho & Berridge, 2014). In addition to hot-spots that might be essential in maintaining a certain pleasure level, ‘cold-spots’ have been found in the NA, ventral pallidum, OFC, and insula. In such cold-spots, opioidergic stimulation suppresses liking responses, whereas in hot-spots the same stimulation causes a stark increase in liking responses (Castro & Berridge, 2014, 2017). A balanced interplay between cold- and hot-spots within the same brain regions such as the NA, ventral pallidum, OFC, and insula may allow for a sophisticated control of positive and negative hedonic responses (see ‘affective keyboard’ in Section 2.2.3). In line with such an assumed sophisticated control, it has to be noted that hedonic hot- and cold-spots are not hardwired in the brain. Depending, for example, on external factors creating stressful or pleasant, relaxed environments, the coding of valence can change in such hot-spots from positive to negative and vice versa (Berridge, 2019). Such phenomena have been observed in the NA (Richard & Berridge, 2011) and amygdala (Flandreau et al., 2012; Warlow et al., 2017), likely contributing to a fine-tuned control of hedonic responses dependent on environmental factors.
2.2.1.2. Human work
Confirming results from animal research, a brain network termed the ‘reward circuit’ has been described in human research, which includes the cortico-ventral basal ganglia system, including the ventral striatum (VS) and midbrain (i.e., the ventral tegmental area; Gottfried, 2011; Richards et al., 2013). Within the reward circuit, reward-linked information is processed across a circuit that involves glutamatergic projections from the OFC and anterior cingulate cortex (ACC), as well as dopaminergic projections from the midbrain into the VS (Richards et al., 2013).
However, as previously described, reward cannot be equated with pleasure, given that reward processing comprises wanting and liking (Berridge et al., 2009; Reynolds & Berridge, 2008). Further, reward processing is modulated by subjective value and utility, which is formed by individual needs, desires, homeostatic states, and situational influences (Rangel et al., 2008). As such, pleasure as a core process is most closely related to ‘liking’ expressed during reward consumption. During such reward consumption, human neuroimaging studies have consistently noted a central role of the VS (including the NA), corresponding to results from animal research. The VS is consistently activated during the anticipation and consumption of reward (Liu et al., 2011). Interestingly, the VS is also activated during the imagery of pleasant experiences, including drug use in substance abusers, pleasant sexual encounters, and athletic success (Costa et al., 2010). Despite a vast literature emphasizing that the VS is implicated in the processing of hedonic aspects of reward in humans, this brain area has not been well parcellated into functional sub-regions (primarily because of limited resolution in human neuroimaging). Nevertheless, using an anatomical definition of the core and shell of the NA, one study successfully described differential encoding of the valence of reward and pain in separable structural and functional brain networks with sources in the core and shell of the NA (Baliki et al., 2013). This finding again highlights the overlaps of pleasure and displeasure systems, rendering the separate investigation of pleasure and displeasure functions somewhat artificial.
In addition to the VS, the OFC has received much attention in human research on reward and hedonic experiences (Berridge & Kringelbach, 2015). Much of the current knowledge on the functions of the OFC in hedonic experiences is based on human neuroimaging, because the translation from animal work has proven to be challenging because of differences in the prefrontal cortex (PFC; Wallis, 2011). The OFC has been described in numerous human functional magnetic resonance imaging (fMRI) studies to represent the subjective value of rewarding stimuli (Grabenhorst & Rolls, 2011). More specifically, the OFC has been described as the first stage of cortical processing, in which the value and pleasure of reward are explicitly represented. With its many reciprocal anatomical connections to other brain regions important in reward processing, the OFC is in an optimal position to distribute information on subjective value and pleasure in order to optimize different behavioral strategies. For example, the OFC is well connected to the ACC, insular cortex, somatosensory areas, amygdala, and striatum (Carmichael & Price, 1995; Cavada et al., 2000; Mufson & Mesulam, 1982).
Besides the VS and the OFC, multiple other brain regions are involved in reward processing, including the caudate, putamen, thalamus, amygdala, anterior insula, ACC, posterior cingulate cortex, inferior parietal lobule, and sub-regions of the PFC other than the OFC (Liu et al., 2011). Reward involves processing of complex stimuli that involve many more components beyond wanting and liking, such as attention, arousal, evaluation, memory, learning, decision-making, etc.
In addition to higher-level cortical representations, pleasure also appears to be coded at very low levels of peripheral sensory processing. As an illustration, hedonic representations of smells are already present in peripheral sensory cells. There are differences in electrical activity of the human olfactory epithelium in response to pleasant vs. unpleasant odors (Lapid et al., 2011). Further, responses to the hedonic valence of odors involve differential activation of the autonomic nervous system (e.g., fluctuations in heart rate and skin conductance; Joussain et al., 2017). Together with the above-described results on central processing of pleasure, these findings highlight that extensive neurobiological systems are implicated in the processing of positive hedonic feelings including peripheral and autonomic components. In line with findings from the animal work, it can be assumed that environmental factors such as perceived stress affect these neurobiological systems leading to plastic changes (Juarez & Han, 2016; Li, 2013) and thus a sophisticated control of hedonic feelings adapted to situational factors.
2.2.2 Displeasure and pain—from animal models to human models
Highlights
• As experiences of pleasure and displeasure, hedonics are omnipresent in daily life.
• As core processes, hedonics accompany emotions, motivation, bodily states, etc.
• Orbitofrontal cortex and nucleus accumbens appear to be hedonic brain hubs.
• Several mental illnesses are characterized by altered hedonic experiences.
• Optimal hedonic functioning seems the basis of well-being and aesthetic experiences.
Abstract: Experiencing pleasure and displeasure is a fundamental part of life. Hedonics guide behavior, affect decision-making, induce learning, and much more. As the positive and negative valence of feelings, hedonics are core processes that accompany emotion, motivation, and bodily states. Here, the affective neuroscience of pleasure and displeasure, which has largely focused on the investigation of reward and pain processing, is reviewed. We describe the neurobiological systems of hedonics and factors that modulate hedonic experiences (e.g., cognition, learning, sensory input). Further, we review maladaptive and adaptive pleasure and displeasure functions in mental disorders and well-being, as well as the experience of aesthetics. As a centerpiece of the Human Affectome Project, the language used to express pleasure and displeasure was also analyzed, showing that most of the analyzed words overlap with expressions of emotions, actions, and bodily states. Our review shows that hedonics are typically investigated as processes that accompany other functions, but the mechanisms of hedonics (as core processes) have not been fully elucidated.
---
2.2.1.1 Animal work
Early investigations of the functional neuroanatomy of pleasure and reward in mammals stemmed from the seminal work of Olds and Milner (Olds & Milner, 1954). A series of pioneering experiments showed that rodents tend to increase instrumental lever-pressing to deliver brief, direct intracranial electrical stimulation of the septal nuclei. Interestingly, rodents and other non-human animals would maintain this type of self-stimulation for hours, working until reaching complete physical exhaustion (Olds, 1958). This work led to the popular description of the neurotransmitter dopamine as the ‘happy hormone’.
However, subsequent electrophysiological and voltammetric assessments, as well as microdialysis, clearly show that dopamine does not drive the hedonic experience of reward (liking), but rather the motivation to obtain such reward (wanting), that is, the instrumental behavior of reward-driven actions (Berridge & Kringelbach, 2015; Wise, 1978). Strong causal evidence for this idea has emerged from rodent studies, including pharmacological blockade of dopamine receptors and genetic knockdown mutations. When dopamine is depleted or dopamine neurons are destroyed, reward-related instrumental behavior decreases significantly, with animals becoming oblivious to previously rewarding stimuli (Baik, 2013; Schultz, 1998). In contrast, hyperdopaminergic mice with dopamine transporter knockdown mutations exhibit markedly enhanced acquisition and greater incentive performance for rewards (Pecina et al., 2003). These studies show that phasic release of dopamine specifically acts as a signal of incentive salience, which underlies reinforcement learning (Salamone & Correa, 2012; Schultz, 2013). Such dopaminergic functions have been related to the mesocorticolimbic circuitry: microinjections that pharmacologically stimulate dopaminergic neurons in specific sub-regions of the nucleus accumbens (NA) selectively enhance wanting, with no effects on liking. However, microinjections that stimulate opioidergic neurons increase both the hedonic impact of sucrose reward and wanting responses, the latter likely caused by opioid-induced dopamine release (Johnson & North, 1992). Importantly, different populations of neurons in the ventral pallidum (as part of the mesocorticolimbic circuitry) specifically track the pharmacologically induced enhancements of hedonic and motivational signals (Smith et al., 2011).
The double dissociation of the neural systems underlying wanting and liking has been confirmed many times (Laurent et al., 2012), leading to the concept that positive hedonic responses (liking) are specifically mediated in the brain by endogenous opioids in ‘hedonic hot-spots’ (Pecina et al., 2006). The existence of such hedonic hot-spots has been confirmed in the NA, ventral pallidum, and parabrachial nucleus of the pons (Berridge & Kringelbach, 2015). In addition, some evidence suggests further hot-spots in the insula and orbitofrontal cortex (OFC; Castro & Berridge, 2017).
Hedonic hot-spots in the brain might be important not only for generating the feeling of pleasure, but also for maintaining a certain level of pleasure. In line with this assumption, damage to hedonic hot-spots in the ventral pallidum can transform pleasure into displeasure, illustrating that there is no clear-cut border between the neurobiological mechanisms of pleasure and displeasure but rather many intersections. For example, sweet sucrose taste, which normally induces strong liking responses, elicits negative and disgust reactions in rats after damage to a hedonic hot-spot in the ventral pallidum (Ho & Berridge, 2014). In addition to hot-spots, which might be essential in maintaining a certain pleasure level, ‘cold-spots’ have been found in the NA, ventral pallidum, OFC, and insula. In such cold-spots, opioidergic stimulation suppresses liking responses, whereas in hot-spots the same stimulation causes a stark increase in liking responses (Castro & Berridge, 2014, 2017). A balanced interplay between cold- and hot-spots within the same brain regions, such as the NA, ventral pallidum, OFC, and insula, may allow for a sophisticated control of positive and negative hedonic responses (see ‘affective keyboard’ in Section 2.2.3). In line with such sophisticated control, it should be noted that hedonic hot- and cold-spots are not hardwired in the brain. Depending, for example, on external factors creating stressful or pleasant, relaxed environments, the coding of valence in such hot-spots can change from positive to negative and vice versa (Berridge, 2019). Such phenomena have been observed in the NA (Richard & Berridge, 2011) and amygdala (Flandreau et al., 2012; Warlow et al., 2017), likely contributing to a fine-tuned control of hedonic responses dependent on environmental factors.
2.2.1.2. Human work
Confirming results from animal research, a brain network termed the ‘reward circuit’ has been described in human research; it comprises the cortico-ventral basal ganglia system, including the ventral striatum (VS) and midbrain (i.e., the ventral tegmental area; Gottfried, 2011; Richards et al., 2013). Within the reward circuit, reward-linked information is processed across a circuit that involves glutamatergic projections from the OFC and anterior cingulate cortex (ACC), as well as dopaminergic projections from the midbrain into the VS (Richards et al., 2013).
However, as previously described, reward cannot be equated with pleasure, given that reward processing comprises both wanting and liking (Berridge et al., 2009; Reynolds & Berridge, 2008). Further, reward processing is modulated by subjective value and utility, which are shaped by individual needs, desires, homeostatic states, and situational influences (Rangel et al., 2008). As such, pleasure as a core process is most closely related to ‘liking’ expressed during reward consumption. During such reward consumption, human neuroimaging studies have consistently noted a central role of the VS (including the NA), consistent with results from animal research. The VS is consistently activated during the anticipation and consumption of reward (Liu et al., 2011). Interestingly, the VS is also activated during the imagery of pleasant experiences, including drug use in substance abusers, pleasant sexual encounters, and athletic success (Costa et al., 2010). Despite a vast literature emphasizing that the VS is implicated in the processing of hedonic aspects of reward in humans, this brain area has not been well parcellated into functional sub-regions (primarily because of limited resolution in human neuroimaging). Nevertheless, using an anatomical definition of the core and shell of the NA, one study successfully described differential encoding of the valence of reward and pain in separable structural and functional brain networks with sources in the core and shell of the NA (Baliki et al., 2013). This finding again highlights the overlaps of the pleasure and displeasure systems, rendering the separate investigation of pleasure and displeasure functions somewhat artificial.
In addition to the VS, the OFC has received much attention in human research on reward and hedonic experiences (Berridge & Kringelbach, 2015). Much of the current knowledge on the functions of the OFC in hedonic experiences is based on human neuroimaging, because translation from animal work has proven challenging owing to differences in the prefrontal cortex (PFC; Wallis, 2011). Numerous human functional magnetic resonance imaging (fMRI) studies have described the OFC as representing the subjective value of rewarding stimuli (Grabenhorst & Rolls, 2011). More specifically, the OFC has been described as the first stage of cortical processing in which the value and pleasure of reward are explicitly represented. With its many reciprocal anatomical connections to other brain regions important in reward processing, the OFC is in an optimal position to distribute information on subjective value and pleasure in order to optimize different behavioral strategies. For example, the OFC is well connected to the ACC, insular cortex, somatosensory areas, amygdala, and striatum (Carmichael & Price, 1995; Cavada et al., 2000; Mufson & Mesulam, 1982).
Besides the VS and the OFC, multiple other brain regions are involved in reward processing, including the caudate, putamen, thalamus, amygdala, anterior insula, ACC, posterior cingulate cortex, inferior parietal lobule, and sub-regions of the PFC other than the OFC (Liu et al., 2011). Reward processing involves complex stimuli and many components beyond wanting and liking, such as attention, arousal, evaluation, memory, learning, and decision-making.
In addition to higher-level cortical representations, pleasure also appears to be coded at very low levels of peripheral sensory processing. As an illustration, hedonic representations of smells are already present in peripheral sensory cells: there are differences in the electrical activity of the human olfactory epithelium in response to pleasant vs. unpleasant odors (Lapid et al., 2011). Further, responses to the hedonic valence of odors involve differential activation of the autonomic nervous system (e.g., fluctuations in heart rate and skin conductance; Joussain et al., 2017). Together with the above-described results on central processing of pleasure, these findings highlight that extensive neurobiological systems, including peripheral and autonomic components, are implicated in the processing of positive hedonic feelings. In line with findings from the animal work, it can be assumed that environmental factors such as perceived stress affect these neurobiological systems, leading to plastic changes (Juarez & Han, 2016; Li, 2013) and thus to a sophisticated control of hedonic feelings adapted to situational factors.
2.2.2 Displeasure and pain—from animal models to human models
After the Protestant Reformation permitted some degree of usury, Jews in regions that became Protestant lost their moneylending advantages and entered into competition with the Christian majority, leading to an increase in anti-Semitism
Becker, Sascha O. and Luigi Pascali. 2019. "Religion, Division of Labor, and Conflict: Anti-semitism in Germany over 600 Years." American Economic Review, 109(5):1764-1804. DOI: 10.1257/aer.20170279
Abstract: We study the role of economic incentives in shaping the coexistence of Jews, Catholics, and Protestants, using novel data from Germany for 1,000+ cities. The Catholic usury ban and higher literacy rates gave Jews a specific advantage in the moneylending sector. Following the Protestant Reformation (1517), the Jews lost these advantages in regions that became Protestant. We show (i) a change in the geography of anti-Semitism with persecutions of Jews and anti-Jewish publications becoming more common in Protestant areas relative to Catholic areas; (ii) a more pronounced change in cities where Jews had already established themselves as moneylenders. These findings are consistent with the interpretation that, following the Protestant Reformation, Jews living in Protestant regions were exposed to competition with the Christian majority, especially in moneylending, leading to an increase in anti-Semitism.
Religion, Division of Labor, and Conflict: Anti-semitism in Germany over 600 Years
Individuals with low navigation ability use GPS often; high navigation ability is more predictive of learning a new environment than GPS use; GPS use independently affects spatial transformation skills and thereby the ability to learn environments
GPS-use negatively affects environmental learning through spatial transformation abilities. Ian T. Ruginski et al. Journal of Environmental Psychology, May 4 2019. https://doi.org/10.1016/j.jenvp.2019.05.001
Highlights
• Individuals with low navigation ability use GPS often.
• High navigation ability is more predictive of learning a new environment than GPS-use.
• However, GPS use still independently affects spatial transformation skills.
• Overall, GPS use affects ability to learn environments through transformation skills.
Abstract: Research has established that GPS use negatively affects environmental learning and navigation in laboratory studies. Furthermore, the ability to mentally rotate objects and imagine locations from other perspectives (both known as spatial transformations) is positively related to environmental learning. Using previously validated spatial transformation and environmental learning tasks, the current study assessed a theoretical model where long-term GPS use is associated with worse mental rotation and perspective-taking spatial transformation abilities, which then predicts decreased ability to learn novel environments. We expected this prediction to hold even after controlling for self-reported navigation ability, which is also associated with better spatial transformation and environmental learning capabilities. We found that mental rotation and perspective-taking ability fully account for the effect of GPS use on learning of a virtual environment. This relationship remained after controlling for existing navigation ability. Specifically, GPS use is negatively associated with perspective-taking indirectly through mental rotation; we propose that GPS use affects the transformation ability common to mental rotation and perspective-taking.
Results on the link between shyness and social media use had been inconclusive; new work shows that shy people have fewer contacts and interactions not only in real life, but also in social networks
Shyness and social media use: A meta-analytic summary of moderating and mediating effects. Markus Appel, Timo Gnambs. Computers in Human Behavior, May 6 2019. https://doi.org/10.1016/j.chb.2019.04.018
Highlights
• Results on the link between shyness and social media use had been inconclusive.
• A three-level, random effects meta-analysis was conducted.
• The kind of usage variable investigated moderated the relationship.
• Shyness was negatively associated with active use and with the number of contacts.
• A meta-analytic mediation model connected shyness, SNS contacts, and well-being.
Abstract: Since the advent of social networking sites (SNSs) such as Facebook and Twitter (often called social media), the link between shyness and using these platforms has received substantial scholarly attention. We assumed that the diverging findings could be explained by the patterns of use examined in the primary studies. A three-level, random effects meta-analysis was conducted (50 effect sizes, total N = 6989). Shyness and SNS use across all available indicators were unrelated. As predicted, the association was moderated by the specific SNS use pattern. Shyness was negatively associated with active use (e.g., posting photos), ρ = −0.11, 95% CI [-0.20, −0.03], and with the number of SNS contacts (i.e., online network size), ρ = −0.26, 95% CI [-0.34, −0.17]. Negligible or no associations were found for general use (e.g., daily logins), ρ = 0.07, 95% CI [0.02, 0.13], or passive use (reading others’ posts), ρ = 0.07, 95% CI [-0.01, 0.14]. A meta-analytic mediation model suggests that the number of SNS contacts can partially explain the previously identified negative association between shyness and well-being.
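The pooled correlations reported above (e.g., ρ = −0.26, 95% CI [−0.34, −0.17]) come from a three-level random-effects meta-analysis. A much simpler fixed-weight sketch of how correlations are pooled via Fisher's z transform is shown below; the study sizes and effect values here are invented for illustration and the code does not reproduce the paper's three-level model.

```python
import math

def fisher_z(r):
    # Fisher r-to-z transform stabilizes the sampling variance of correlations
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    return (math.exp(2 * z) - 1) / (math.exp(2 * z) + 1)

def pool_correlations(studies):
    """Inverse-variance pooling of (r, n) pairs on the Fisher z scale.

    Returns the pooled correlation and a 95% CI. Illustrative sketch only.
    """
    # each study's z is weighted by 1/var(z), where var(z) = 1/(n - 3)
    zs = [(fisher_z(r), n - 3) for r, n in studies]
    w_sum = sum(w for _, w in zs)
    z_bar = sum(z * w for z, w in zs) / w_sum
    se = 1 / math.sqrt(w_sum)
    lo, hi = z_bar - 1.96 * se, z_bar + 1.96 * se
    # transform the pooled estimate and CI back to the correlation scale
    return inv_fisher_z(z_bar), (inv_fisher_z(lo), inv_fisher_z(hi))

# hypothetical per-study correlations (r) and sample sizes (n)
r, ci = pool_correlations([(-0.20, 200), (-0.30, 150), (-0.25, 300)])
```

Because the transform is monotone, back-transforming the z-scale CI endpoints gives a valid CI for the pooled correlation.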
Agreeable people cooperate more at first, but lack the strategic ability and consistency of those with high IQ; conscientiousness errs on the side of caution, which deters cooperation
Intelligence, Personality, and Gains from Cooperation in Repeated Interactions. Eugenio Proto, Aldo Rustichini, Andis Sofianos. Journal of Political Economy, Apr 10, 2019. https://www.journals.uchicago.edu/doi/pdfplus/10.1086/701355
Abstract: We study how intelligence and personality affect the outcomes of groups, focusing on repeated interactions that provide the opportunity for profitable cooperation. Our experimental method creates two groups of subjects who have different levels of certain traits, such as higher or lower levels of Intelligence, Conscientiousness, and Agreeableness, but who are very similar otherwise. Intelligence has a large and positive long-run effect on cooperative behavior. The effect is strong when at the equilibrium of the repeated game there is a trade-off between short-run gains and long-run losses. Conscientiousness and Agreeableness have a natural, significant but transitory effect on cooperation rates.
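The trade-off the abstract describes, short-run gains from defecting against long-run losses from a breakdown of cooperation, has a standard textbook form in the repeated prisoner's dilemma: under a grim-trigger strategy, cooperation is sustainable only if the discount factor is high enough. The payoff values below are the usual illustrative ones, not figures from the paper.

```python
def critical_discount_factor(T, R, P):
    """Minimum discount factor sustaining grim-trigger cooperation.

    Cooperating forever yields R / (1 - delta); a one-shot deviation
    yields T now plus delta * P / (1 - delta) thereafter. Cooperation
    is an equilibrium when the temptation gain (T - R) is outweighed
    by the discounted future loss, i.e. delta >= (T - R) / (T - P).
    Requires the prisoner's-dilemma ordering T > R > P.
    """
    return (T - R) / (T - P)

# illustrative payoffs: temptation 5, mutual cooperation 3, mutual defection 1
delta_min = critical_discount_factor(5, 3, 1)
```

With these payoffs the threshold is 0.5: players who weight the future at least half as much as the present can sustain cooperation, which is one way to read the finding that higher-IQ (more forward-looking, more consistent) groups cooperate more in the long run.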
‘Concept creep’ (that harm-related concepts of abuse, bullying, prejudice, have expanded their meanings recently): Those with broader concepts endorse harm-based morality, liberal political attitudes
Concept creepers: Individual differences in harm-related concepts and their correlates. Melanie J. McGrath et al. Personality and Individual Differences, Volume 147, 1 September 2019, Pages 79-84. https://doi.org/10.1016/j.paid.2019.04.015
Abstract: Research on ‘concept creep’ argues that harm-related concepts such as abuse, bullying, prejudice, and trauma have expanded their meanings in recent decades. Theorists have suggested that this semantic expansion may have mixed implications. Broadened concepts might problematize harmful behavior that was previously tolerated but might also make people over-sensitive and fragile. Two studies using American MTurk samples (Ns = 276, 309) examined individual differences in the breadth of people's concepts of harm and explored their correlates. Study 1 found reliable variations in concept breadth that were consistent across four disparate harm-related concepts. As predicted, people with broader concepts tended to endorse harm-based morality, liberal political attitudes, and high empathic concern. Contrary to prediction, younger people did not have broader concepts. Study 2 replicated the association between concept breadth and liberalism and extended the empathy finding by showing that concept breadth was associated with sensitivity to injustice toward others but not the self. In this study, people holding broader concepts were younger and tended to feel more vulnerable and entitled. These findings indicate that holding broader concepts of harm may have mixed implications.
Advice to my younger self if I knew then what I know now: Most of the advice fell into the domains of relationships, education, and selfhood
If I knew then what I know now: Advice to my younger self. Robin M. Kowalski & Annie McCord. The Journal of Social Psychology, May 5 2019. https://www.tandfonline.com/doi/abs/10.1080/00224545.2019.1609401
ABSTRACT: If we could go back and give ourselves advice to keep from making a mistake, most of us would probably take that opportunity. Using self-discrepancy theory as a theoretical framework, US workers on Amazon’s Mechanical Turk, who were at least 30 years of age, indicated in two studies what their advice to their younger selves would be, what pivotal event was influential for them, if they had regrets, and if following this advice would bring them closer to their ideal or ought self. Across both studies, most of the advice fell into the domains of relationships, education, and selfhood. Participants said following the advice would bring them more in line with their ideal than their ought self. Following the advice also led to more positive perceptions of the current self by the high school self. Ages at which pivotal events occurred provided strong support for the reminiscence bump.
KEYWORDS: Advice, regret, self-discrepancy, counterfactual thinking
Sunday, May 5, 2019
Children in daycare experience fewer one-to-one interactions with adults, which is negative for IQ in families where such interactions are of higher quality; the reduction in IQ increases with family income
Ichino, A., Fort, M., & Zanella, G. (2019). Cognitive and Non-Cognitive Costs of Daycare 0-2 for Children in Advantaged Families. Journal of Political Economy, May 2019. doi:10.1086/704075
Abstract: Exploiting admission thresholds to the Bologna daycare system, we show using RDD that one additional daycare month at age 0–2 reduces IQ by 0.5% (4.7% of a s.d.) at age 8–14 in a relatively affluent population. The magnitude of this negative effect increases with family income. Similar negative impacts are found for personality traits. These findings are consistent with the hypothesis from psychology that children in daycare experience fewer one-to-one interactions with adults, with negative effects in families where such interactions are of higher quality. We embed this hypothesis in a model that lends structure to our RDD.
JEL-Code: J13, I20, I28, H75
Keywords: daycare, childcare, child development, cognitive skills, personality
Big Business Isn’t Big Politics. Essay by Tyler Cowen // Fears of crony capitalism in the U.S. are misplaced
Big Business Isn’t Big Politics. Tyler Cowen. Foreign Policy, May 3 2019. https://foreignpolicy.com/2019/05/03/big-business-isnt-big-politics/
The basic view that big business is pulling the strings in Washington is one of the major myths of our time. Most American political decisions are not in fact shaped by big business, even though business does control numerous pieces of specialist legislation. Even in 2019, big business is hardly dominating the agenda. U.S. corporate leaders often promote ideas of fiscal responsibility, free trade, robust trade agreements, predictable government, multilateral foreign policy, higher immigration, and a certain degree of political correctness in government—all ideas that are ailing rather badly right now.
To be sure, there is plenty of crony capitalism in the United States today. For instance, the Export-Import Bank subsidizes U.S. exports with guaranteed loans or low-interest loans. The biggest American beneficiary is Boeing, by far, and the biggest foreign beneficiaries are large and sometimes state-owned companies, such as Pemex, the national fossil fuel company of the Mexican government. The Small Business Administration subsidizes small business start-ups, the procurement cycle for defense caters to corporate interests, and the sugar and dairy lobbies still pull in outrageous subsidies and price protection programs, mostly at the expense of ordinary American consumers, including low-income consumers.
...overall, lobbyists are not running the show. The average big company has only 3.4 lobbyists in Washington, and for medium-size companies that number is only 1.42. For major companies, the average is 13.9, and the vast majority of companies spend less than $250,000 a year on lobbying. Furthermore, a systematic study shows that business lobbying does not increase the chance of favorable legislation being passed for that business, nor do those businesses receive more government contracts; contributions to political action committees are ineffective too.
Full text at the link above
Support for hate crime grows when men fear that refugees' influx makes it difficult to mate, even when controlling for anti-refugee views, perceived job competition, general frustration & aggressiveness
Dancygier, Rafaela M. and Egami, Naoki and Jamal, Amaney and Rischke, Ramona, Hating and Mating: Fears over Mate Competition and Violent Hate Crime against Refugees (March 23, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3358780
Abstract: As the number of refugees rises across the world, anti-refugee violence has become a pressing concern. What explains the incidence and support of such hate crime? We argue that fears among native men that refugees pose a threat in the competition for female partners is a critical but understudied factor driving hate crime. Employing a comprehensive dataset on the incidence of hate crime across Germany, we first demonstrate that hate crime rises where men face disadvantages in local mating markets. Next, we deploy an original four-wave panel survey to confirm that support for hate crime increases when men fear that the inflow of refugees makes it more difficult to find female partners. Mate competition concerns remain a robust predictor even when controlling for anti-refugee views, perceived job competition, general frustration, and aggressiveness. We conclude that a more complete understanding of hate crime must incorporate mating markets and mate competition.
Keywords: hate crime, refugees, immigration, ethnocentrism, inter-group conflict, sex ratios, marriage markets
JEL Classification: D74, J11, J12, J15, N34