How Empathic Concern Fuels Political Polarization. Elizabeth N Simas, Scott Clifford, and Justin H Kirkland. American Political Science Review, October 31, 2019. https://doi.org/10.1017/S0003055419000534
Abstract: Over the past two decades, there has been a marked increase in partisan social polarization, leaving scholars in search of solutions to partisan conflict. The psychology of intergroup relations identifies empathy as one of the key mechanisms that reduces intergroup conflict, and some have suggested that a lack of empathy has contributed to partisan polarization. Yet, empathy may not always live up to this promise. We argue that, in practice, the experience of empathy is biased toward one’s ingroup and can actually exacerbate political polarization. First, using a large, national sample, we demonstrate that higher levels of dispositional empathic concern are associated with higher levels of affective polarization. Second, using an experimental design, we show that individuals high in empathic concern show greater partisan bias in evaluating contentious political events. Taken together, our results suggest that, contrary to popular views, higher levels of dispositional empathy actually facilitate partisan polarization.
Rolf Degen summarizing: Individuals high in empathic concern are prone to even greater political polarization and show greater partisan bias in the censorship of ideas and feelings of schadenfreude over others' misfortune.
Check also Schoenmueller, Verena and Netzer, Oded and Stahl, Florian, Polarized America: From Political Partisanship to Preference Partisanship (October 17, 2019). SSRN. https://www.bipartisanalliance.com/2019/11/brand-preferences-increasing.html
Sunday, November 3, 2019
We conclude that many species possess the psychological processes to show some form of reciprocity; it might be a widespread phenomenon that varies in terms of strategies and mechanisms
Reciprocity: Different behavioural strategies, cognitive mechanisms and psychological processes. Manon K. Schweinfurth, Josep Call. Learning & Behavior, November 1 2019. https://link.springer.com/article/10.3758/s13420-019-00394-5
Abstract: Reciprocity is probably one of the most debated theories in evolutionary research. After more than 40 years of research, some scientists conclude that reciprocity is an almost uniquely human trait mainly because it is cognitively demanding. Others, however, conclude that reciprocity is widespread and of great importance to many species. Yet, it is unclear how these species reciprocate, given its apparent cognitive complexity. Therefore, our aim was to unravel the psychological processes underlying reciprocity. By bringing together findings from studies investigating different aspects of reciprocity, we show that reciprocity is a rich concept with different behavioural strategies and cognitive mechanisms that require very different psychological processes. We reviewed evidence from three textbook examples, i.e. the Norway rat, common vampire bat and brown capuchin monkey, and show that the species use different strategies and mechanisms to reciprocate. We continue by examining the psychological processes of reciprocity. We show that the cognitive load varies between different forms of reciprocity. Several factors can lower the memory demands of reciprocity such as distinctiveness of encounters, memory of details and network size. Furthermore, there are different information operation systems in place, which also vary in their cognitive load due to assessing the number of encounters and the quality and quantity of help. We conclude that many species possess the psychological processes to show some form of reciprocity. Hence, reciprocity might be a widespread phenomenon that varies in terms of strategies and mechanisms.
Keywords: Cooperation Reciprocity Cognition Emotion Norway rat Vampire bat Capuchin monkey
Introduction
The theory of natural selection predicts that only those behaviours evolve that increase the actor’s own survival and reproductive success (Darwin, 1859). Paradoxically, many species provide benefits to others, for instance, by providing care, food, information and support to con- and heterospecifics (Dugatkin, 1997). Cooperation is such a widespread phenomenon that we find evidence across the animal kingdom, ranging from bacteria (Crespi, 2001) to humans (Fehr & Fischbacher, 2003). The paradox of the evolution of cooperation was resolved for interactions between related individuals, i.e. by helping kin, shared genes are more likely to be transmitted to the next generations, which is in the interest of the helper (Hamilton, 1964). Still, the kin selection theory cannot explain the frequent occurrence of cooperation among unrelated individuals. Trivers (1971) offered a solution: reciprocal cooperation, i.e. helping those that were cooperative before. While several theoretical models have shown that cooperation can evolve via reciprocity (reviewed in Nowak, 2012), the theory has faced considerable resistance (e.g. Clutton-Brock, 2009; Connor, 2010; Hammerstein, 2003; Russell & Wright, 2009; Sánchez-Amaro & Amici, 2015; Stevens, Cushman, & Hauser, 2005; Stevens & Hauser, 2004, 2005; West, Griffin, & Gardner, 2007).
It is a central problem of the theory of reciprocity, and of explaining cooperation more generally (especially among non-kin), that the extent to which behavioural exchanges are based on reciprocity is unknown. This problem is tightly linked to the cognitive underpinnings of reciprocity. Researchers who assume reciprocity to be highly cognitively demanding have concluded that reciprocity is virtually absent in non-human animals (Amici et al., 2014; Clements & Stephens, 1995; Hauser, McAuliffe, & Blake, 2009; Pelé, Dufour, Thierry, & Call, 2009; Pelé, Thierry, Call, & Dufour, 2010; Sánchez-Amaro & Amici, 2015; Stephens, McLinn, & Stevens, 2002; Stevens et al., 2005; Stevens & Hauser, 2004). In contrast, researchers who assume that reciprocity varies in its cognitive load have come to the opposite conclusion: reciprocity is widespread (Brosnan & de Waal, 2002; Brosnan, Salwiczek, & Bshary, 2010; Carter, 2014; Freidin, Carballo, & Bentosela, 2017; Melis & Semmann, 2010; Raihani & Bshary, 2011; Schino & Aureli, 2009, 2010b; Taborsky, Frommen, & Riehl, 2016). Here, we argue that the debate can be enriched and potentially resolved by identifying and discussing the concrete psychological processes behind all the different forms of reciprocity, incorporating findings from different lines of research. Although several authors have discussed the various mechanisms that may underlie reciprocal exchanges between individuals (e.g. Brosnan & de Waal, 2002; Schino & Aureli, 2010b), as far as we know, no attempt has been made to systematically relate those mechanisms to the psychological processes underlying them. Our aim in this article is to bring attention to this issue as a necessary step towards the ultimate goal of elucidating the psychological processes underlying different forms of reciprocity. Accordingly, our article is organised as follows: First, we summarise the different behavioural strategies and cognitive mechanisms enabling reciprocity.
Second, we apply this framework to three textbook examples of reciprocity, i.e. the Norway rat (Rattus norvegicus), the common vampire bat (Desmodus rotundus) and the brown capuchin monkey (Cebus apella). Third, we discuss the crucial behavioural, cognitive and emotional components enabling different forms of reciprocity. These psychological processes will allow us to draw clear predictions under which conditions reciprocity is likely to evolve. Finally, we provide concrete examples for prospective studies to better understand the evolutionary and psychological origins of reciprocity.
[...]
Three textbook examples of reciprocity
[...]
Norway rat
Wild Norway rats live in large multi-female, multi-male colonies, which may be composed of more than 150 individuals (Davis, 1953). They frequently interact with related and unrelated colony members and form dominance hierarchies (Calhoun, 1979). They engage in various social behaviours like alarm calls, food sharing, huddling, social grooming and social play (reviewed in Schweinfurth, under review).
Norway rats have been repeatedly shown to reciprocate help in different paradigms (reviewed in Schweinfurth, in press). The most commonly used paradigm is the food-exchange setup. Here, one rat, i.e. the cooperating partner, provides food via a movable platform to the focal rat (cf. Rutte & Taborsky, 2007). After a delay of up to six days, the roles are exchanged and the focal rat can provide food to its previous partner (e.g. Stieger, Schweinfurth, & Taborsky, 2017). To ensure that food donations by the focal rats are based on the partner's previous help, focal rats are also always tested with a defecting partner that did not provide food to them. Several controls have been conducted to ensure that, for instance, differential food intake, activity, copying, or other factors cannot explain the observed helping levels (for a discussion, see Dolivo, Rutte, & Taborsky, 2016; Schweinfurth & Taborsky, 2018b).
Rats help each other reciprocally according to at least two behavioural strategies. First, they help each other according to direct reciprocity. Female and male rats donate food to others reciprocally by providing more food to cooperating than defecting partners (Li & Wood, 2017; Rutte & Taborsky, 2008; Schneeberger, Dietz, & Taborsky, 2012; Schweinfurth, Aeschbacher, Santi, & Taborsky, 2019; Schweinfurth & Taborsky, 2016, 2017, 2018c; Simones, 2007; Viana, Gordo, Sucena, & Moita, 2010). In addition, female rats apply direct reciprocity when grooming each other (Schweinfurth, Stieger, & Taborsky, 2017). Such reciprocal allogrooming has significant fitness benefits, as reciprocal groomers live longer and suffer fewer mammary tumours in the lab (Yee, Cavigelli, Delgado, & McClintock, 2008). Finally, female rats also exchange allogrooming for food and vice versa (Schweinfurth & Taborsky, 2018b; Stieger et al., 2017). Besides direct reciprocity, female, but not male, rats engage in generalised reciprocity (Rutte & Taborsky, 2007, 2008; Schweinfurth et al., 2019). In a direct comparison, however, female rats donate over 20% more food to a partner with whom they have interacted before, showing that direct reciprocity generates higher levels of cooperation than generalised reciprocity (Rutte & Taborsky, 2008).
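The two strategies just described differ only in what the helping decision is conditioned on. A minimal sketch in Python (a hypothetical illustration; the function and variable names are ours, not taken from the cited studies):

```python
# Hypothetical sketch of the two behavioural strategies described above.
# Direct reciprocity conditions help on a specific partner's past behaviour;
# generalised reciprocity conditions it only on having been helped at all.

def direct_reciprocity(helped_me_before: dict, partner: str) -> bool:
    """Help this partner only if this particular partner helped before."""
    return helped_me_before.get(partner, False)

def generalised_reciprocity(recently_helped_by_anyone: bool) -> bool:
    """Help whoever is encountered next if anyone helped recently."""
    return recently_helped_by_anyone

# A focal rat that was helped by A but not by B:
memory = {"A": True, "B": False}
print(direct_reciprocity(memory, "A"))   # helps A
print(direct_reciprocity(memory, "B"))   # withholds help from B
print(generalised_reciprocity(True))     # helps the next individual met
```

The contrast makes the cognitive difference visible: direct reciprocity needs partner-specific memory (the `memory` dict), while generalised reciprocity needs only a single recent-help flag.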
What is the mechanism underlying reciprocity in Norway rats? First, hard-wired reciprocity cannot explain reciprocity among Norway rats because they tailor their help to the partner’s helping quality (Dolivo & Taborsky, 2015) and the partner’s need (Márquez, Rennie, Costa, & Moita, 2015; Schneeberger et al., 2012; Schneeberger, Röder, & Taborsky, submitted; Schweinfurth & Taborsky, 2018a). Furthermore, rats reciprocate help by using different actions (Schweinfurth & Taborsky, 2017) and different commodities (Schweinfurth & Taborsky, 2018b; Stieger et al., 2017), making a fixed response unlikely. Second, emotion-based reciprocity is also unlikely to explain reciprocity in Norway rats. They seem not to form social bonds even after being housed together for more than one and a half years (Schweinfurth, Neuenschwander, et al., 2017). They do not accumulate social information about their partners, but rather use the last experience (Schweinfurth & Taborsky, under review; Stieger et al., 2017). Third, there is no evidence for calculated reciprocity either. The amounts of received and immediately given help are not matched in either male or female rats (Schweinfurth et al., 2019; Schweinfurth & Taborsky, under review). This is in line with their numerical ability being limited to six items, which is below the number of grooming bouts they are known to reciprocate (cf. Davis & Bradford, 1986; Schweinfurth, Stieger, et al., 2017).
Reciprocity in Norway rats is probably best explained by attitudinal reciprocity (reviewed in Schweinfurth, in press). Rats form attitudes that are based on the last encounter with a partner (Schweinfurth & Taborsky, under review). Importantly, using the last encounters for reciprocity is not a result of memory interference as rats have been shown to memorise several partners (Kettler, Schweinfurth, & Taborsky, under review), several food preferences (Galef, Lee, & Whiskin, 2005) and several unique events (Panoz-Brown et al., 2016) despite a long time delay with potentially disruptive experiences. Importantly, attitudes are linked to received cooperation and not just the product of ‘feeling good’, as a study showed in which rats refused to reciprocate food donations that they received in the presence of, but not by, another rat (Schmid, Schneeberger, & Taborsky, 2017). Attitudes can be generalised to other partners because female rats show not only direct but also generalised reciprocity (Rutte & Taborsky, 2008). In addition, attitudes are not binary responses, like a cooperator or defector tag, but can be modulated by different values of help. For instance, rats groom a partner more often that has provided food to them than vice versa, suggesting that one food item is valued more than one allogrooming bout (Schweinfurth & Taborsky, 2018b). In addition, rats are more likely to provide oat flakes to a partner that provided them with banana pieces, i.e. highly preferred food, than with carrot pieces, i.e. less preferred food (Dolivo & Taborsky, 2015).
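The attitudinal mechanism described above, where an attitude is overwritten by the last encounter and graded by the value of the help received, can be sketched as a toy model (the class, the numeric commodity values, and their ordering are illustrative assumptions, loosely following the banana-versus-carrot result in Dolivo & Taborsky, 2015):

```python
# Toy model of attitudinal reciprocity as described for Norway rats:
# the attitude toward a partner reflects only the LAST encounter, and it is
# graded by the subjective value of the commodity received (not a binary
# cooperator/defector tag). Values below are illustrative assumptions.

COMMODITY_VALUE = {"banana": 3.0, "oat": 2.0, "carrot": 1.0, "nothing": 0.0}

class AttitudinalRat:
    def __init__(self):
        self._attitude = {}  # partner -> value of the last help received

    def experience(self, partner: str, commodity: str) -> None:
        # Overwrite, don't accumulate: only the last encounter matters.
        self._attitude[partner] = COMMODITY_VALUE[commodity]

    def help_effort(self, partner: str) -> float:
        # A more valuable last donation elicits more help in return.
        return self._attitude.get(partner, 0.0)

rat = AttitudinalRat()
rat.experience("A", "banana")
rat.experience("A", "carrot")   # the earlier banana donation is forgotten
print(rat.help_effort("A"))     # 1.0: graded by the last encounter only
```

Note how little memory this requires: one value per partner, no running count of encounters, which is consistent with the rats' limited numerical abilities discussed above.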
Common vampire bat
Vampire bats, which include three species, feed exclusively on the blood of other mammals (Dalquest, 1955). Common vampire bats live in small groups of eight to twelve individuals (Wilkinson, 1984). Such groups roost together in large colonies, ranging from a few individuals to over 2,000, whereby females often move between different roosts (Wilkinson, 1988). Groups usually consist of one male and its female harem or male bachelor groups (Wilkinson, 1985b, 1985a). Vampire bats are the only bats that regurgitate food to donate it to others (Carter & Wilkinson, 2013a). Besides food donations, vampire bats show high levels of allogrooming compared to other bats (Carter & Leffer, 2015). In addition, they adopt and nurse offspring of other colony members (Carter & Wilkinson, 2013a; Wilkinson, Carter, Bohn, & Adams, 2016).
Common vampire bats regurgitate blood to donate it to others. Early studies used numerous observations of these bats in their natural habitat and found that given and received help are correlated and reciprocated (Denault & McFarlane, 1995; Wilkinson, 1984). Recently, these findings have been replicated (Carter & Wilkinson, 2013c) and extended by several controlled experiments with captive bats (reviewed in Carter & Wilkinson, 2013b). In several experiments, individual focal bats were removed from their colony and fasted for one day. After the hungry focal individual was returned to its group, all food donations to this individual were recorded. The focal individuals received blood mostly from partners to which they had provided food before (Carter & Wilkinson, 2013c).
They mainly share blood with conspecifics with which they roost together frequently but that do not necessarily belong to their group (Carter & Wilkinson, 2013b). Food donations are more common between females, which form stable social bonds, but males have also been shown to regurgitate blood for others in the laboratory (Carter & Wilkinson, 2013b). Such donations are highly valuable to recipients because vampire bats die within two to three days without a blood meal (Freitas, Welker, Millan, & Pinheiro, 2003; McNab, 1973). They reciprocate blood with both kin and non-kin, whereby donations are better explained by reciprocity than by relatedness (Carter & Wilkinson, 2013c). Extending the network to non-kin has been shown to be beneficial for the bats: when their main association partner was temporarily removed, those that had more associations with unrelated roost mates received more food donations (Carter, Farine, & Wilkinson, 2017; Carter & Wilkinson, 2015b). Besides exchanging blood, the bats also exchange allogrooming for food according to direct reciprocity (Carter & Wilkinson, 2013c; Wilkinson, 1986). In contrast, they seem not to use generalised reciprocity (Carter & Wilkinson, 2013c).
What is the mechanism underlying reciprocity in common vampire bats? First, hard-wired reciprocity cannot explain reciprocity since the bats exchange different services, like allogrooming for food (Carter & Wilkinson, 2013c). In addition, reciprocity is limited to a few closely bonded individuals (Carter et al., 2017), which makes automatic responses to received help unlikely. Second, attitudes based on recent encounters seem unable to generate reciprocity because reciprocity can only be detected over long time frames or when socially bonded bats are observed in captivity (reviewed in Carter & Wilkinson, 2013b; Wilkinson et al., 2016). Third, calculated reciprocity is an unlikely explanation for their reciprocity. Like Norway rats, common vampire bats do not match the actual amount of received and given help (Carter & Wilkinson, 2013c), which makes reciprocity based on calculations unlikely. However, little is known about their cognitive capabilities that could elucidate the underlying cognitive processes they use (for the only studies examining their cognitive skills, see Ratcliffe, Fenton, & Galef, 2003; Vrtilek, Carter, Patriquin, Page, & Ratcliffe, 2018).
Reciprocity in vampire bats is probably best explained by emotion-based reciprocity. Rather than considering the last encounters as Norway rats do, the bats reciprocate help over long time spans (see above). They form enduring social bonds with kin and non-kin, which can last over more than ten years (Carter et al., 2017; Carter & Wilkinson, 2013c, 2015b; Wilkinson, 1985b, 1985a, 1986). Importantly, their food-sharing network closely mirrors their social-bonding network (Carter & Wilkinson, 2013c). This suggests that social bonds play a crucial role in their decisions to help conspecifics. In line with this, the amounts of donated blood and grooming are linked to oxytocin levels, which suggests an emotional component in the decision to reciprocate (Carter & Wilkinson, 2015a).
Brown capuchin monkey
Brown capuchin monkeys, also called tufted or black-capped capuchin monkeys, live in colonies of up to 30 individuals (Carosi, Linn, & Visalberghi, 2005). They show a distinct linear hierarchy in both sexes (Janson, 1985). Further, their breeding system can be described as either one-male, multi-female or multi-male, multi-female (Carosi et al., 2005). These monkeys show various social behaviours, such as alarm calls, collaborative hunting, food sharing, social grooming and social play (Fragaszy, Visalberghi, & Fegan, 2009; Izawa, 1980).
Brown capuchin monkeys are probably the best-known example of reciprocity. Reciprocity has been demonstrated repeatedly using various methods under controlled captive conditions. For instance, they donate food to others by handing over or dropping food close to a partner in an adjacent compartment, who will return the favour in the same way (de Waal, 2000; see de Waal, 1997, for a detailed ethogram of the donations). To ensure that food donations are a product of received help and are partner-directed, several control conditions have been conducted with partners of differing relationship quality or with the partner absent (reviewed in de Waal & Brosnan, 2006). In addition to these active and passive food transfers, the monkeys also reciprocate food donations using various less intuitive testing apparatuses, i.e. bar pulling, joysticks, lever boxes and token exchanges (Hattori, Kuroshima, & Fujita, 2005; Mendres & de Waal, 2000; Parrish, Brosnan, & Beran, 2015; Suchak & de Waal, 2012).
All the tests described above investigated direct reciprocity. In addition, capuchin monkeys show generalised reciprocity (Leimgruber et al., 2014). As far as we are aware, there has been no published report of indirect reciprocity in brown capuchin monkeys. However, they pay attention to third-party interactions and are inclined to accept exchange offers from humans that were observed to frequently reject such exchange requests from other monkeys (Anderson, Kuroshima, Takimoto, & Fujita, 2013). This might suggest indirect reciprocity, but further studies are needed that directly test this possibility.
What is the mechanism underlying reciprocity in brown capuchin monkeys? In contrast to Norway rats and common vampire bats, the mechanism of reciprocity among these monkeys is less obvious. The only mechanism that can be almost certainly excluded is hard-wired reciprocity. When the monkeys help each other, they consider the quality of received help (de Waal, 2000) and the quality of their relationship (Sabbatini, De Bortoli Vizioli, Visalberghi, & Schino, 2012). This makes a fixed response unlikely. Calculated reciprocity seems unlikely, though it remains possible: two studies found that the amounts of received and given help were correlated, which suggests some degree of memory and score keeping of previous helpful events (e.g. de Waal, 1997, 2000). In fact, capuchin monkeys have been shown to add items to a pool of non-visible items, suggesting that they can keep track of multiple food donations, which is needed for calculations (Beran, Evans, Leighty, Harris, & Rice, 2008). However, not all studies found such a correlation between received and given help (Brosnan, Freeman, & de Waal, 2006; Sabbatini et al., 2012; Suchak & de Waal, 2012). Finally, studies that set out to test for calculated reciprocity found no evidence for it (Amici et al., 2014; Pelé et al., 2010). It should be noted, however, that the monkeys in these studies did not pass the task-understanding condition. Therefore, additional studies on calculated reciprocity are needed.
There is less ambiguous evidence for the two remaining mechanisms underlying reciprocity. There is good evidence for attitudinal reciprocity (reviewed in Brosnan & de Waal, 2002; de Waal & Brosnan, 2006). In contrast to common vampire bats and in accordance with Norway rats, brown capuchin monkeys show short-term reciprocity (see above). Furthermore, they reciprocate help with familiar and unfamiliar partners (Suchak & de Waal, 2012), which suggests that short-term attitudes rather than long-term bonds are the reason to help. However, there is also good evidence for emotion-based reciprocity. In a direct comparison, the monkeys were more likely to allow access to their food to a socially bonded individual than to an individual that had provided food to them before (Sabbatini et al., 2012). This study suggests that the monkeys probably cooperate over long time frames and that an emotional bond can become more important than attitudinal reciprocity in a partner-choice situation. This is in line with observations from the wild that showed evidence for long-term reciprocity (di Bitetti, 1997; Izawa, 1980; Schino, di Giuseppe, & Visalberghi, 2009a; Schino, Di Giuseppe, & Visalberghi, 2009b; Tiddi, Aureli, Polizzi di Sorrentino, Janson, & Schino, 2011; Tiddi, Aureli, & Schino, 2012).
Hence, reciprocity in brown capuchin monkeys is probably a result of both attitudinal reciprocity and emotion-based reciprocity. While attitudinal reciprocity cannot explain the selective helping of bonded individuals, emotion-based reciprocity cannot explain the repeatedly observed short-term reciprocity. Still, the results need not be contradictory, and both may explain some aspects of cooperation in these monkeys. For instance, Sabbatini et al. (2012) found that the same monkeys show short-term reciprocity in dyads, but long-term reciprocity in trios. This is an interesting finding because it suggests that individuals can possess multiple mechanisms to achieve reciprocity. The same may apply to other species, although this has not been studied extensively other than in humans.
Humans use different mechanisms to achieve different forms of reciprocity. People employ calculated reciprocity with unfamiliar partners (Andreoni & Miller, 1993; Gächter & Falk, 2002) or business partners (Anderson, Hakansson, & Johanson, 1994; Steidlmeier, 1999). If, however, it becomes difficult to memorise several events with a partner, people instead base their attitude towards a partner on the last encounter (Milinski & Wedekind, 1998). In contrast, we rarely use short-term reciprocity when interacting with friends; instead, we rely on emotional bonds based on long-term reciprocity (reviewed in Massen, Sterck, & de Vos, 2010; Silk, 2003). Overall, we help friends more than strangers (Gächter, Kessler, & Königstein, 2011) because we trust them (Buchan, Croson, & Dawes, 2002; Majolo et al., 2013) and expect them to return favours (Deutsch, 1975; Walker, 1995). Interestingly, donations towards strangers can be increased by administering oxytocin, which is associated with emotional bonds (Zak, Stanton, & Ahmadi, 2007). In addition, the more often strangers interact, the more trust is built up and the more they are treated like friends (Gächter et al., 2011). This suggests that we use multiple mechanisms that can transform into each other rather than being static.
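The three human mechanisms contrasted above can be caricatured as a single decision rule that switches on relationship type. This is a hedged sketch only: the relationship categories, the zero-balance threshold, and the fallback ordering are illustrative assumptions, not parameters from the cited studies.

```python
# Caricature of the three mechanisms as decision rules. All names and
# thresholds are illustrative assumptions.

def will_help(relationship: str,
              favour_balance: int = 0,
              last_encounter_positive: bool = False) -> bool:
    if relationship == "friend":
        # Emotion-based reciprocity: trust built on a long-term bond,
        # no short-term bookkeeping of individual favours.
        return True
    if relationship == "business":
        # Calculated reciprocity: explicit score keeping of favours owed.
        return favour_balance >= 0
    # Stranger with too many events to track: attitudinal reciprocity,
    # falling back on the impression left by the last encounter.
    return last_encounter_positive

print(will_help("friend"))                                  # True
print(will_help("business", favour_balance=-2))             # False
print(will_help("stranger", last_encounter_positive=True))  # True
```

The point of the sketch is the final sentence of the paragraph: in real behaviour these branches are not static; repeated positive interactions can move a "stranger" into the "friend" branch.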
Abstract: Reciprocity is probably one of the most debated theories in evolutionary research. After more than 40 years of research, some scientists conclude that reciprocity is an almost uniquely human trait mainly because it is cognitively demanding. Others, however, conclude that reciprocity is widespread and of great importance to many species. Yet, it is unclear how these species reciprocate, given its apparent cognitive complexity. Therefore, our aim was to unravel the psychological processes underlying reciprocity. By bringing together findings from studies investigating different aspects of reciprocity, we show that reciprocity is a rich concept with different behavioural strategies and cognitive mechanisms that require very different psychological processes. We reviewed evidence from three textbook examples, i.e. the Norway rat, common vampire bat and brown capuchin monkey, and show that the species use different strategies and mechanisms to reciprocate. We continue by examining the psychological processes of reciprocity. We show that the cognitive load varies between different forms of reciprocity. Several factors can lower the memory demands of reciprocity such as distinctiveness of encounters, memory of details and network size. Furthermore, there are different information operation systems in place, which also vary in their cognitive load due to assessing the number of encounters and the quality and quantity of help. We conclude that many species possess the psychological processes to show some form of reciprocity. Hence, reciprocity might be a widespread phenomenon that varies in terms of strategies and mechanisms.
Keywords: Cooperation Reciprocity Cognition Emotion Norway rat Vampire bat Capuchin monkey
Introduction
The theory of natural selection predicts that only those behaviours evolve that increase the actor’s own survival and reproductive success (Darwin, 1859). Paradoxically, many species provide benefits to others, for instance, by providing care, food, information and support to con- and heterospecifics (Dugatkin, 1997). Cooperation is such a widespread phenomenon that we find evidence across the animal kingdom, ranging from bacteria (Crespi, 2001) to humans (Fehr & Fischbacher, 2003). The paradox of the evolution of cooperation was resolved for interactions between related individuals, i.e. by helping kin, shared genes are more likely to be transmitted to the next generations, which is in the interest of the helper (Hamilton, 1964). Still, the kin selection theory cannot explain the frequent occurrence of cooperation among unrelated individuals. Trivers (1971) offered a solution: reciprocal cooperation, i.e. helping those that were cooperative before. While several theoretical models have shown that cooperation can evolve via reciprocity (reviewed in Nowak, 2012), the theory has faced considerable resistance (e.g. Clutton-Brock, 2009; Connor, 2010; Hammerstein, 2003; Russell & Wright, 2009; Sánchez-Amaro & Amici, 2015; Stevens, Cushman, & Hauser, 2005; Stevens & Hauser, 2004, 2005; West, Griffin, & Gardner, 2007).
It is a central problem of the theory of reciprocity, and explaining cooperation more generally (especially among non-kin), that the extent to which behavioural exchanges are based on reciprocity is unknown. This problem is tightly linked to the cognitive underpinnings of reciprocity. Researchers, who assume reciprocity to be highly cognitively demanding, came to the conclusion that reciprocity is virtually absent in non-human animals (Amici et al., 2014; Clements & Stephens, 1995; Hauser, McAuliffe, & Blake, 2009; Pelé, Dufour, Thierry, & Call, 2009; Pelé, Thierry, Call, & Dufour, 2010; Sánchez-Amaro & Amici, 2015; Stephens, McLinn, & Stevens, 2002; Stevens et al., 2005; Stevens & Hauser, 2004). In contrast, researchers who assume that reciprocity varies in its cognitive load came to the opposite conclusion that reciprocity is widespread (Brosnan & de Waal, 2002; Brosnan, Salwiczek, & Bshary, 2010; Carter, 2014; Freidin, Carballo, & Bentosela, 2017; Melis & Semmann, 2010; Raihani & Bshary, 2011; Schino & Aureli, 2009, 2010b; Taborsky, Frommen, & Riehl, 2016). Here, we argue that the debate can be enriched and potentially resolved by identifying and discussing the concrete psychological processes of all different forms of reciprocity by using and incorporating findings from different lines of research. Although several authors have discussed the various mechanisms that may underlie reciprocal exchanges between individuals (e.g. Brosnan & de Waal, 2002; Schino & Aureli, 2010b), as far as we know, no attempt has been made to systematically relate those mechanisms to the psychological processes underlying them. Our aim in this article is to bring attention to this issue as a necessary step towards the ultimate goal of elucidating the psychological processes underlying different forms of reciprocity. Accordingly, our article is organised as follows: First, we summarise different behavioural strategies and cognitive mechanisms enabling reciprocity. 
Second, we apply this framework to three textbook examples of reciprocity, i.e. the Norway rat (Rattus norvegicus), the common vampire bat (Desmodus rotundus) and the brown capuchin monkey (Cebus apella). Third, we discuss the crucial behavioural, cognitive and emotional components enabling different forms of reciprocity. These psychological processes will allow us to draw clear predictions under which conditions reciprocity is likely to evolve. Finally, we provide concrete examples for prospective studies to better understand the evolutionary and psychological origins of reciprocity.
[...]
Three textbook examples of reciprocity
[...]
Norway rat
Wild Norway rats live in large multi-female, multi-male colonies, which may be composed of more than 150 individuals (Davis, 1953). They frequently interact with related and unrelated colony members and form dominance hierarchies (Calhoun, 1979). They engage in various social behaviours like alarm calls, food sharing, huddling, social grooming and social play (reviewed in Schweinfurth, under review).
Norway rats have been repeatedly shown to reciprocate help in different paradigms (reviewed in Schweinfurth in press). The most commonly used paradigm is the food-exchange setup. Here one rat, i.e. the cooperating partner, provides food via a movable platform to the focal rat (cf. Rutte & Taborsky, 2007). After a delay of up to six days, the roles are exchanged and the focal rat can provide food to its previous partner (e.g. Stieger, Schweinfurth, & Taborsky, 2017). To ensure that food donations by the focal rats are based on the previous help by a partner, focal rats are always also tested with a defecting partner that did not provide food to them. Several controls have been conducted to ensure that, for instance, differential food intake, activity, copying, or other factors cannot explain their helping levels (for a discussion, see Dolivo, Rutte, & Taborsky, 2016; Schweinfurth & Taborsky, 2018b).
Rats help each other reciprocally according to at least two behavioural strategies. First, they help each other according to direct reciprocity. Female and male rats donate food to others reciprocally by providing more food to cooperating than to defecting partners (Li & Wood, 2017; Rutte & Taborsky, 2008; Schneeberger, Dietz, & Taborsky, 2012; Schweinfurth, Aeschbacher, Santi, & Taborsky, 2019; Schweinfurth & Taborsky, 2016, 2017, 2018c; Simones, 2007; Viana, Gordo, Sucena, & Moita, 2010). In addition, female rats apply direct reciprocity when grooming each other (Schweinfurth, Stieger, & Taborsky, 2017). Such reciprocal allogrooming has significant fitness benefits, as reciprocal groomers live longer and suffer fewer mammary tumours in the lab (Yee, Cavigelli, Delgado, & McClintock, 2008). Finally, female rats also exchange allogrooming for food and vice versa (Schweinfurth & Taborsky, 2018b; Stieger et al., 2017). Besides direct reciprocity, female, but not male, rats engage in generalised reciprocity (Rutte & Taborsky, 2007, 2008; Schweinfurth et al., 2019). In a direct comparison, however, female rats donate over 20% more food to a partner with whom they have interacted, showing that direct reciprocity generates higher levels of cooperation than generalised reciprocity (Rutte & Taborsky, 2008).
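The behavioural contrast between direct and generalised reciprocity can be sketched as two simple decision rules (an illustrative toy model of our own, not code from any of the studies cited here): a direct reciprocator conditions its help on what a specific partner did for it last, while a generalised reciprocator conditions its help only on the most recent help it received from anyone.

```python
class DirectReciprocator:
    """Helps a specific partner iff that partner helped it in their last encounter."""
    def __init__(self):
        self.last_from = {}  # partner id -> did that partner help me last time?

    def observe(self, partner, helped_me):
        self.last_from[partner] = helped_me

    def decide(self, partner):
        # Cooperate by default on the first encounter with a new partner.
        return self.last_from.get(partner, True)


class GeneralisedReciprocator:
    """Helps anyone iff its most recent social experience (with whomever) was helpful."""
    def __init__(self):
        self.last_experience = True

    def observe(self, partner, helped_me):
        self.last_experience = helped_me

    def decide(self, partner):
        return self.last_experience


# A direct reciprocator discriminates between cooperating and defecting partners;
# a generalised reciprocator helps even a complete stranger after a good experience.
direct = DirectReciprocator()
direct.observe("cooperator", True)
direct.observe("defector", False)
general = GeneralisedReciprocator()
general.observe("cooperator", True)

print(direct.decide("cooperator"), direct.decide("defector"))  # True False
print(general.decide("stranger"))                              # True
```

The sketch also makes concrete why direct reciprocity can sustain higher cooperation levels, as in Rutte and Taborsky (2008): the direct rule withholds help from defectors, while the generalised rule cannot.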
What is the mechanism underlying reciprocity in Norway rats? First, hard-wired reciprocity cannot explain reciprocity among Norway rats because they tailor their help to the partner’s helping quality (Dolivo & Taborsky, 2015) and the partner’s need (Márquez, Rennie, Costa, & Moita, 2015; Schneeberger et al., 2012; Schneeberger, Röder, & Taborsky, submitted; Schweinfurth & Taborsky, 2018a). Furthermore, rats reciprocate help by using different actions (Schweinfurth & Taborsky, 2017) and different commodities (Schweinfurth & Taborsky, 2018b; Stieger et al., 2017), making a fixed response unlikely. Second, emotion-based reciprocity is also unlikely to explain reciprocity in Norway rats. They seem not to form social bonds even after being housed together for more than one and a half years (Schweinfurth, Neuenschwander, et al., 2017). They do not accumulate social information about their partners, but rather use the last experience (Schweinfurth & Taborsky, under review; Stieger et al., 2017). Third, there is no evidence for calculated reciprocity either. The amounts of received and immediately given help are not matched in male and female rats (Schweinfurth et al., 2019; Schweinfurth & Taborsky, under review). This is in line with their numerical ability being limited to six items, which is below the number of grooming bouts they are known to reciprocate (cf. Davis & Bradford, 1986; Schweinfurth, Stieger, et al., 2017).
Reciprocity in Norway rats is probably best explained by attitudinal reciprocity (reviewed in Schweinfurth, in press). Rats form attitudes that are based on the last encounter with a partner (Schweinfurth & Taborsky, under review). Importantly, using the last encounter for reciprocity is not a result of memory interference, as rats have been shown to memorise several partners (Kettler, Schweinfurth, & Taborsky, under review), several food preferences (Galef, Lee, & Whiskin, 2005) and several unique events (Panoz-Brown et al., 2016) despite long time delays with potentially disruptive experiences. Importantly, attitudes are linked to received cooperation and are not just the product of ‘feeling good’, as shown by a study in which rats refused to reciprocate food donations that they had received in the presence of, but not from, another rat (Schmid, Schneeberger, & Taborsky, 2017). Attitudes can be generalised to other partners, because female rats show not only direct but also generalised reciprocity (Rutte & Taborsky, 2008). In addition, attitudes are not binary responses, like a cooperator or defector tag, but can be modulated by different values of help. For instance, rats groom a partner that has provided food to them more often than they provide food to a partner that has groomed them, suggesting that one food item is valued more than one allogrooming bout (Schweinfurth & Taborsky, 2018b). In addition, rats are more likely to provide oat flakes to a partner that provided them with banana pieces, i.e. highly preferred food, than with carrot pieces, i.e. less preferred food (Dolivo & Taborsky, 2015).
Common vampire bat
Vampire bats, of which there are three species, feed exclusively on the blood of other mammals (Dalquest, 1955). Common vampire bats live in small groups of eight to twelve individuals (Wilkinson, 1984). Such groups roost together in large colonies, ranging from a few individuals to over 2,000, and females often move between different roosts (Wilkinson, 1988). Groups usually consist of one male and its female harem, or of male bachelor groups (Wilkinson, 1985b, 1985a). Vampire bats are the only bats that regurgitate food to donate it to others (Carter & Wilkinson, 2013a). Besides food donations, vampire bats show high levels of allogrooming compared to other bats (Carter & Leffer, 2015). In addition, they adopt and nurse offspring of other colony members (Carter & Wilkinson, 2013a; Wilkinson, Carter, Bohn, & Adams, 2016).
Common vampire bats regurgitate blood to donate it to others. Early studies based on numerous observations of these bats in their natural habitat found that given and received help are correlated and reciprocated (Denault & McFarlane, 1995; Wilkinson, 1984). Recently, these findings have been replicated (Carter & Wilkinson, 2013c) and extended by several controlled experiments with captive bats (reviewed in Carter & Wilkinson, 2013b). In several experiments, individual focal bats were removed from their colony and fasted for one day. After the hungry focal individual was returned to its group, all food donations to this individual were recorded. The focal individuals received blood mostly from partners to which they had provided food before (Carter & Wilkinson, 2013c).
They mainly donate blood to conspecifics with which they roost together frequently but that do not necessarily belong to their group (Carter & Wilkinson, 2013b). Food donations are more common between females, which form stable social bonds, but males have also been shown to regurgitate blood for others in the laboratory (Carter & Wilkinson, 2013b). Such donations are highly valuable to recipients, because vampire bats die within two to three days without a blood meal (Freitas, Welker, Millan, & Pinheiro, 2003; McNab, 1973). They reciprocate blood with both kin and non-kin, and donations are better explained by reciprocity than by relatedness (Carter & Wilkinson, 2013c). Extending the network to non-kin has been shown to be beneficial for the bats: when their main association partner was temporarily removed, bats that had more associations with unrelated roost mates received more food donations (Carter, Farine, & Wilkinson, 2017; Carter & Wilkinson, 2015b). Besides exchanging blood, the bats also exchange allogrooming for food according to direct reciprocity (Carter & Wilkinson, 2013c; Wilkinson, 1986). In contrast, they seem not to use generalised reciprocity (Carter & Wilkinson, 2013c).
What is the mechanism underlying reciprocity in common vampire bats? First, hard-wired reciprocity cannot explain reciprocity, since the bats exchange different services, like allogrooming with food (Carter & Wilkinson, 2013c). In addition, reciprocity is limited to a few closely bonded individuals (Carter et al., 2017), which makes automatic responses to received help unlikely. Second, attitudes based on recent encounters seem unable to generate reciprocity, because reciprocity can only be detected over long time frames or when socially bonded bats are observed in captivity (reviewed in Carter & Wilkinson, 2013b; Wilkinson et al., 2016). Third, calculated reciprocity is an unlikely explanation for their reciprocity. Like Norway rats, common vampire bats do not match the actual amounts of received and given help (Carter & Wilkinson, 2013c), which makes reciprocity based on calculations unlikely. However, little is known about the cognitive capabilities that could elucidate the underlying cognitive processes that they use (for the only studies examining their cognitive skills, see Ratcliffe, Fenton, & Galef, 2003; Vrtilek, Carter, Patriquin, Page, & Ratcliffe, 2018).
Reciprocity in vampire bats is probably best explained by emotion-based reciprocity. Rather than considering only the last encounters, as Norway rats do, the bats reciprocate help over long time spans (see above). They form enduring social bonds with kin and non-kin, which can last more than ten years (Carter et al., 2017; Carter & Wilkinson, 2013c, 2015b; Wilkinson, 1985b, 1985a, 1986). Importantly, their food-sharing network closely mirrors their social-bonding network (Carter & Wilkinson, 2013c). This suggests that social bonds play a crucial role in their decisions to help conspecifics. In line with this, the amounts of donated blood and grooming are linked to oxytocin levels, which suggests an emotional component in the decision to reciprocate (Carter & Wilkinson, 2015a).
Brown capuchin monkey
Brown capuchin monkeys, also called tufted or black-capped capuchin monkeys, live in colonies of up to 30 individuals (Carosi, Linn, & Visalberghi, 2005). They show a distinct linear hierarchy in both sexes (Janson, 1985). Further, their breeding system can be described as either one-male, multi-female or multi-male, multi-female (Carosi et al., 2005). These monkeys show various social behaviours, such as alarm calls, collaborative hunting, food sharing, social grooming and social play (Fragaszy, Visalberghi, & Fegan, 2009; Izawa, 1980).
Brown capuchin monkeys are probably the best-known example of reciprocity. Reciprocity has been demonstrated repeatedly by using various methods under controlled captive conditions. For instance, they donate food to others by handing over or dropping food close to a partner in an adjacent compartment, who will return the favour in the same way (de Waal, 2000; see de Waal, 1997, for a detailed ethogram of the donations). To ensure that food donations are a product of received help and are partner-directed, several control conditions have been conducted with partners of differing relationship quality or with partners being absent (reviewed in de Waal & Brosnan, 2006). In addition to these active and passive food transfers, the monkeys also reciprocate food donations by using various less intuitive testing apparatuses, i.e. bar pulling, joysticks, lever boxes and token exchanges (Hattori, Kuroshima, & Fujita, 2005; Mendres & de Waal, 2000; Parrish, Brosnan, & Beran, 2015; Suchak & de Waal, 2012).
All the tests described above investigated direct reciprocity. In addition, capuchin monkeys show generalised reciprocity (Leimgruber et al., 2014). As far as we are aware, there has been no published report of indirect reciprocity in brown capuchin monkeys. However, they pay attention to third-party interactions and are inclined to accept exchange offers from humans that were observed to frequently reject such exchange requests from other monkeys (Anderson, Kuroshima, Takimoto, & Fujita, 2013). This might suggest indirect reciprocity, but further studies are needed that directly test this possibility.
What is the mechanism underlying reciprocity in brown capuchin monkeys? In contrast to Norway rats and common vampire bats, the mechanism of reciprocity among these monkeys is less obvious. The only mechanism that can be almost certainly excluded is hard-wired reciprocity. When the monkeys help each other, they consider the quality of received help (de Waal, 2000) and the quality of their relationship (Sabbatini, De Bortoli Vizioli, Visalberghi, & Schino, 2012). This makes a fixed response unlikely. Furthermore, calculated reciprocity seems unlikely but possible: two studies found that the amounts of received and given help were correlated, which suggests memory and score-keeping of previous helpful events to some degree (e.g. de Waal, 1997, 2000). In fact, capuchin monkeys have been shown to add items to a pool of non-visible items, suggesting that they can keep track of multiple food donations, which is needed for calculations (Beran, Evans, Leighty, Harris, & Rice, 2008). However, not all studies found such a correlation between received and given help (Brosnan, Freeman, & de Waal, 2006; Sabbatini et al., 2012; Suchak & de Waal, 2012). Finally, studies that set out to test for calculated reciprocity found no evidence (Amici et al., 2014; Pelé et al., 2010). It should be noted, however, that the monkeys in these studies did not pass the task-understanding condition. Therefore, additional studies on calculated reciprocity are needed.
There is less ambiguous evidence for the two remaining mechanisms underlying reciprocity. There is good evidence for attitudinal reciprocity (reviewed in Brosnan & de Waal, 2002; de Waal & Brosnan, 2006). In contrast to common vampire bats and in accordance with Norway rats, brown capuchin monkeys show short-term reciprocity (see above). Furthermore, they reciprocate help with familiar and unfamiliar partners (Suchak & de Waal, 2012), which suggests that short-term attitudes rather than long-term bonds are the reason to help. However, there is also good evidence for emotion-based reciprocity. In a direct comparison, the monkeys were more likely to allow a socially bonded individual access to their food than an individual that had provided food to them before (Sabbatini et al., 2012). This study suggests that the monkeys probably cooperate over long time frames and that an emotional bond can become more important than attitudinal reciprocity in a partner-choice situation. This is in line with observations from the wild that showed evidence for long-term reciprocity (di Bitetti, 1997; Izawa, 1980; Schino, di Giuseppe, & Visalberghi, 2009a; Schino, Di Giuseppe, & Visalberghi, 2009b; Tiddi, Aureli, Polizzi di Sorrentino, Janson, & Schino, 2011; Tiddi, Aureli, & Schino, 2012).
Hence, reciprocity in brown capuchin monkeys is probably a result of both attitudinal reciprocity and emotion-based reciprocity. While attitudinal reciprocity cannot explain the selective helping of bonded individuals, emotion-based reciprocity cannot explain the repeatedly observed short-term reciprocity. Still, these results need not be contradictory, and both mechanisms may explain some aspects of cooperation in these monkeys. For instance, Sabbatini et al. (2012) found that the same monkeys show short-term reciprocity in dyads, but long-term reciprocity in trios. This is an interesting finding because it suggests that individuals can possess multiple mechanisms to achieve reciprocity. The same may apply to other species, although this has not been studied extensively other than in humans.
Humans use different mechanisms to achieve different forms of reciprocity. People employ a calculated reciprocity approach with unfamiliar (Andreoni & Miller, 1993; Gächter & Falk, 2002) or business partners (Anderson, Hakansson, & Johanson, 1994; Steidlmeier, 1999). If, however, it becomes difficult to memorise several events with a partner, people focus on the attitude towards partners based on the last encounter (Milinski & Wedekind, 1998). In contrast, we rarely use short-term reciprocity when interacting with friends; instead, we rely on emotional bonds based on long-term reciprocity (reviewed in Massen, Sterck, & de Vos, 2010; Silk, 2003). Overall, we help friends more than strangers (Gächter, Kessler, & Königstein, 2011) because we trust them (Buchan, Croson, & Dawes, 2002; Majolo et al., 2013) and expect them to return favours (Deutsch, 1975; Walker, 1995). Interestingly, donations towards strangers can be increased by administering oxytocin, which is associated with emotional bonds (Zak, Stanton, & Ahmadi, 2007). In addition, the more often strangers interact, the more trust is built up and the more they are treated like friends (Gächter et al., 2011). This suggests that we use multiple mechanisms that can transform into one another rather than being static.
Optimistic people live longer; optimism better predicts mortality than income, including health and other controls; predicted trends show declining optimism (1976–95) for those with low education
Longer, more optimistic, lives: Historic optimism and life expectancy in the United States. Kelsey J. O'Connor, Carol Graham. Journal of Economic Behavior & Organization, November 2 2019. https://doi.org/10.1016/j.jebo.2019.10.018
Highlights
• Optimistic people live longer, based on nearly 50 years of longitudinal data (PSID).
• Optimism better predicts mortality than income, including health and other controls.
• Optimism changes over time and is positively associated with socio-economic status.
• Predicted trends show declining optimism (1976–95) for those with low education.
• Our findings illustrate the importance of tracking optimism in a systematic manner.
Abstract: How was optimism related to mortality before the rise in “deaths of despair” that began in the late 1990s? Using the Panel Study of Income Dynamics, we show that as early as 1968 more optimistic people lived longer. The relationship depends on many factors including gender, race, health, and education. We then evaluate these and other variables as correlates of individual optimism over the period 1968–1975. We find women and African Americans were less optimistic at the time than men and whites, although this changed beginning in the late 1970′s. Greater education is associated with greater optimism and so is having wealthy parents. We then predict optimism for the same individuals in subsequent years, thus generating our best guess as to how optimism changed for various demographic groups from 1976–1995. We find people with less than a high school degree had the greatest declines in optimism, a trend with long-run links to premature mortality and deaths of despair. Our findings highlight the importance of better understanding optimism's causes and consequences.
Keywords: Mortality; Optimism; Expectations; Deaths of despair; Demographic trends; Prediction
Beneficial effect of difficult learning tests: The effect was moderated by intelligence; no positive effect of testing for those of lower intelligence; average and especially more intelligent learners benefited
Relatively unintelligent individuals do not benefit from intentionally hindered learning: The role of desirable difficulties. Kristin Wenzel, Marc-André Reinhard. Intelligence, Volume 77, November–December 2019, 101405. https://doi.org/10.1016/j.intell.2019.101405
Highlights
• In two studies intelligence was positively correlated with later learning success.
• Study 2 also showed a beneficial effect of difficult learning tests.
• This effect was moderated by intelligence.
• There was no positive effect of testing for learners with lower intelligence.
• Average and especially higher intelligent learners profited from difficulties.
Abstract: Intelligence is an important predictor of long-term learning and academic achievement. In two studies we focused on the relation among intelligence, desirable difficulties–active generation/production of information and taking tests–, and long-term learning. We hypothesized that intelligence is positively correlated to long-term learning and that difficult learning situations, as opposed to easier reading, increase later long-term learning. We further assumed that the beneficial effects of difficult learning would be moderated by intelligence, thus, we supposed the positive effects to be stronger for learners with higher intelligence and weaker for learners with lower intelligence. We in turn conducted two experiments (N1 = 149, N2 = 176, respectively), measured participants' intelligence, applied desirable difficulties–generation/testing–in contrast to control tasks, and later assessed long-term learning indicated by delayed final test performance. Both studies showed positive correlations between intelligence and later long-term learning. Study 2 further found the expected beneficial effect of difficult learning, which was also moderated by intelligence. There was no difference between difficult tasks and control tasks for participants with relatively low intelligence. Retrieving answers in learning tests was, however, beneficial for participants with average intelligence and even more beneficial for participants with higher intelligence. In general, our two experiments highlight the importance of intelligence for complex and challenging learning tasks that are supposed to stimulate deeper encoding and more cognitive processing. Thus, specifically learners with higher, or at least average, intelligence should be confronted with difficulties to increase long-term learning and test performance.
Keywords: Intelligence; Testing effect; Retrieval; Generation effect; Desirable difficulties; Long-term learning
---
4. General discussion
In two studies, we analyzed the linkage between participants' intelligence and their long-term learning, as well as moderating effects of intelligence on difficult learning situations like generation and testing that are supposed to increase long-term learning. As mentioned in the introduction, intelligence has often been assumed to be one of the best predictors of learning and academic achievement, especially regarding complex and stimulating learning. Higher intelligence was further discussed to increase the effectiveness of intentionally hindered and more difficult learning situations and to be linked to better and more effortful cognitive information processing. The results of our two studies highlight the importance of general intelligence and the inevitability of focusing on intelligence for predicting long-term learning. The positive linkage of intelligence and long-term learning remained robust and strong when controlling for participants' previous knowledge and when manipulating the learning situation. Moreover, although desirable difficulties, at least regarding tests in our second study, were also beneficial, intelligence even moderated the effectiveness of such difficult learning. This moderation effect regarding complex and difficult information is the most important contribution of our second study to the existing intelligence literature. Notably, tests were not more effective than re-reading control tasks for participants with relatively low intelligence but were beneficial for average and highly intelligent participants. Highly intelligent learners profited especially from using learning tests. Hence, intelligence was not only generally linked to long-term learning but also moderated situations, processes, and methods that were specifically constructed to increase long-term learning. This is in line with the above-mentioned theories stating the importance of a general intelligence factor for learning, success, and academic achievement in different contexts (e.g., Kuncel et al., 2004; Roth et al., 2015; Spearman, 1904). Additionally, our results are similar to previous (controversial) research stating educational interventions and learning methods to be especially, or even only, advantageous for individuals with at least average cognitive abilities like intelligence: Thus, methods trying to improve long-term learning and academic achievement for everyone are often suggested to only further increase the disparity between high- and low-ability learners (see also the Matthew or rich-get-richer effects; e.g., Rapport, Brines, Theisen, & Axelrod, 1997; Stern, 2015, 2017; Walberg & Tsai, 1983). Our results further support the literature assuming the importance of higher cognitive abilities for the beneficial effects of desirable difficulties (e.g., Kaiser et al., 2018; McDaniel et al., 2002; Minear et al., 2018). Our findings present a unique contribution to the understanding of the role of intelligence for learning in general, as well as for stimulating learning situations using difficult, challenging, and complex materials. Thus, at least average and higher intelligence facilitates effective deeper semantic encoding, cognitive processing, cognitive effort, and consolidation of information that is triggered by tests. Based on our results, we can advise the implementation of learning tests for university students, at least for averagely and highly intelligent learners. These profit from using difficult learning tests, even when applying a rather short, low-stake test only once. Fortunately, such learning tests are advantageous for a larger population of university students and can be implemented easily into university courses. Still, lecturers must remain vigilant that the applied learning tests are actually difficult and complex enough to trigger the beneficial effects. Concerning relatively unintelligent learners, we cannot unconditionally advise lecturers to use tests because such learners would have to indulge in difficult learning without profiting from it.
Nonetheless, we also cannot advise against using difficult tests because, at the very least, participants with lower intelligence suffered no disadvantages to their long-term learning due to the application of learning tests (see also the often assumed poor-get-poorer effect; e.g., Stanovich, 1986). However, one might also argue that difficult learning is correlated with stress or frustration for less intelligent learners, because difficult tasks were in general found to increase perceived anxiety, and even low-stake quizzes were linked to pressure compared to a re-reading control task (e.g., Hinze & Rapp, 2014; O'Neil, Spielberger, & Hansen, 1969). Regarding generation tasks, implications are not that clear because the manipulation of the learning condition in Study 1 was unsuccessful. In line with this, Study 1 did not result in a significant effect of the learning condition; thus, generation was not more beneficial than a reading control task. At the very least, the generation tasks did not reduce participants' long-term learning; thus, they were not harmful. There were some positive and negative aspects of our studies that we care to mention and that could be applied or adapted in future work. For instance, the intelligence test we used was a rather detailed one with high quality factors; future research should use similar measures. This applies especially to the importance and predictivity of a general intelligence factor. Still, we only used the basis module of the intelligence test, which measures a general intelligence factor similar to g or to fluid intelligence encompassing knowledge components. Future studies may add the existing knowledge tests to additionally assess fluid and crystallised intelligence so that more information regarding intelligence is available.
Both of our studies used different curricular and realistic learning materials that are actually used in school and university courses; that said, the results can be generalized for actual learning materials and for information that is complex and difficult, instead of relatively abstract learning of word pairs, vocabulary, or associations. It is vital that the difficult learning tasks are perceived as more difficult than the easier control tasks and that both conditions are clearly distinguishable. As a limitation, we only observed the influence of a single manipulated learning condition, one generation task or one learning test, on one single final test assessing long-term learning. However, it is important to test if the moderating effects of intelligence remain the same when applying multiple learning tests or multiple re-reads over the course of an entire semester. In line with this, future studies should use multiple follow-up final tests to check if the effects change over time. Although the positive effect of intelligence was found in previous studies over long periods, the beneficial effect of tests could decline. One main limitation of our studies is that, in regard to intelligence, we were only able to observe correlations. Although we did infer causal effects due to the different times of measurement of intelligence and long-term learning, further causal analyses are still advantageous. Future studies should implement longitudinal designs because these are supposed to serve as a basis for causal effects (cf. Strenze, 2007, 2015). All in all, there remain open questions regarding the tested linkage among intelligence, cognitive processes, generation, testing, and long-term learning. This applies, for instance, to the underlying effects of cognitive processing for learning.
Although we argue that intelligence is positively correlated to better retrieval as well as to deeper processing of information, and although we know that higher intelligence is generally important for learning, we do not know exactly why. The same applies to the consideration of why desirable difficulties increase cognitive processes that lead to higher long-term learning. It is possible that higher working memory capacities, the ability to handle more pieces of information simultaneously, the amount of cognitive resources, or higher memory skills are responsible for increased long-term learning. However, higher success could also be due to the abilities to reason, abstract thinking, or elaboration, or to higher processing speed, or simply to the ability to handle more cognitive effort and to overcome challenging tasks. So, in addition to general intelligence, future studies could focus on the linkage between even more aspects of cognitive abilities, like processing speed, working memory capacity, memory, or reasoning, and long-term learning and the effectiveness of generation/testing. Moreover, future work should also focus on increasing the benefit of desirable difficulties for learners of all, and especially lower, ability levels and not only for average or highly intelligent individuals. Thus, future studies may try to design difficulties that are adequately difficult for every individual; the tasks should be difficult enough to elicit the beneficial effects of desirable difficulties but still easy enough that learners with lower intelligence are able to overcome them without being completely overwhelmed (see e.g., Minear et al., 2018). Future studies should therefore monitor and test which level of difficulty is beneficial for which individual. Lecturers could, for instance, also give lower-ability learners more time or apply graded learning aids to support them (see e.g., Hänze, Schmidt-Weigand, & Stäudel, 2010).
Besides, researchers could test if lowerability learners would benefit from longer initial learning phases or from applications of desirable difficulties later in the learning process when these learners have already mastered some of the basic information or formed sufficient previous knowledge (see also the above-mentioned expertise-reversal effect or the aptitude-treatment-interaction; e. g. , Kalyugaetal. , 2003; Snow, 1989). Future work could also test if multiple applications of desirable difficulties or the usage of tests in high-stake learning situations in actual university courses may improve long-term learning forlowerabilityindividuals. In general, future work could also use a more naturalsetting, awithinsubjectdesign, or it could even implement further difficulty nuancesregarding the information as well as the desirable difficulties themselves. Although the forced application of learning tasks is rather common in university courses, it is advantageous to explore the effects of intelligence and desirable difficulties using self-regulated learning. Thus, one could explore if intelligence also moderates the decision to use generation tasks or tests instead of relatively easy re-reading tasks, and also if intelligence moderates learners' persistence while working on such difficulties.
Highlights
• In two studies, intelligence was positively correlated with later learning success.
• Study 2 also showed a beneficial effect of difficult learning tests.
• This effect was moderated by intelligence.
• There was no positive effect of testing for learners with lower intelligence.
• Average and especially more intelligent learners profited from difficulties.
Abstract: Intelligence is an important predictor of long-term learning and academic achievement. In two studies we focused on the relation among intelligence, desirable difficulties–active generation/production of information and taking tests–, and long-term learning. We hypothesized that intelligence is positively correlated to long-term learning and that difficult learning situations, as opposed to easier reading, increase later long-term learning. We further assumed that the beneficial effects of difficult learning would be moderated by intelligence, thus, we supposed the positive effects to be stronger for learners with higher intelligence and weaker for learners with lower intelligence. We in turn conducted two experiments (N1 = 149, N2 = 176, respectively), measured participants' intelligence, applied desirable difficulties–generation/testing–in contrast to control tasks, and later assessed long-term learning indicated by delayed final test performance. Both studies showed positive correlations between intelligence and later long-term learning. Study 2 further found the expected beneficial effect of difficult learning, which was also moderated by intelligence. There was no difference between difficult tasks and control tasks for participants with relatively low intelligence. Retrieving answers in learning tests was, however, beneficial for participants with average intelligence and even more beneficial for participants with higher intelligence. In general, our two experiments highlight the importance of intelligence for complex and challenging learning tasks that are supposed to stimulate deeper encoding and more cognitive processing. Thus, specifically learners with higher, or at least average, intelligence should be confronted with difficulties to increase long-term learning and test performance.
Keywords: Intelligence; Testing effect; Retrieval; Generation effect; Desirable difficulties; Long-term learning
---
4. General discussion
In two studies, we analyzed the linkage between participants' intelligence and their long-term learning, as well as moderating effects of intelligence on difficult learning situations like generation and testing that are supposed to increase long-term learning. As mentioned in the introduction, intelligence has often been assumed to be one of the best predictors of learning and academic achievement, especially regarding complex and stimulating learning. Higher intelligence has further been discussed as increasing the effectiveness of intentionally hindered and more difficult learning situations and as being linked to better and more effortful cognitive information processing. The results of our two studies highlight the importance of general intelligence and the inevitability of focusing on intelligence for predicting long-term learning. The positive linkage of intelligence and long-term learning remained robust and strong when controlling for participants' previous knowledge and when manipulating the learning situation. Moreover, although desirable difficulties, at least regarding tests in our second study, were also beneficial, intelligence even moderated the effectiveness of such difficult learning. This moderation effect regarding complex and difficult information is the most important contribution of our second study to the existing intelligence literature. Notably, tests were not more effective than re-reading control tasks for participants with relatively low intelligence but were beneficial for average and highly intelligent participants. Highly intelligent learners profited especially from using learning tests. Hence, intelligence was not only generally linked to long-term learning but also moderated situations, processes, and methods that were specifically constructed to increase long-term learning. This is in line with the above-mentioned theories stating the importance of a general intelligence factor for learning, success, and academic achievement in different contexts (e.g., Kuncel et al., 2004; Roth et al., 2015; Spearman, 1904). Additionally, our results are similar to previous (controversial) research stating that educational interventions and learning methods are especially, or even only, advantageous for individuals with at least average cognitive abilities like intelligence: Thus, methods trying to improve long-term learning and academic achievement for everyone are often suggested to only further increase the disparity between high- and low-ability learners (see also the Matthew or rich-get-richer effects; e.g., Rapport, Brines, Theisen, & Axelrod, 1997; Stern, 2015, 2017; Walberg & Tsai, 1983). Our results further support the literature assuming the importance of higher cognitive abilities for the beneficial effects of desirable difficulties (e.g., Kaiser et al., 2018; McDaniel et al., 2002; Minear et al., 2018). Our findings present a unique contribution to the understanding of the role of intelligence for learning in general, as well as for stimulating learning situations using difficult, challenging, and complex materials. Thus, at least average and higher intelligence facilitates effective deeper semantic encoding, cognitive processing, cognitive effort, and consolidation of information that is triggered by tests. Based on our results, we can advise the implementation of learning tests for university students, at least for averagely and highly intelligent learners. These profit from using difficult learning tests, even when applying a rather short, low-stakes test only once. Fortunately, such learning tests are advantageous for a larger population of university students and can be implemented easily into university courses. Still, lecturers must remain vigilant that the applied learning tests are actually difficult and complex enough to trigger the beneficial effects. Concerning learners with relatively low intelligence, we cannot unconditionally advise lecturers to use tests, because such learners would have to indulge in difficult learning without profiting from it.
Nonetheless, we also cannot advise against using difficult tests because, at the very least, participants with lower intelligence suffered no disadvantages to their long-term learning due to the application of learning tests (see also the often assumed poor-get-poorer effect; e.g., Stanovich, 1986). However, one might also argue that difficult learning is correlated with stress or frustration for less intelligent learners, because difficult tasks were in general found to increase perceived anxiety, and even low-stakes quizzes were linked to pressure compared to a re-reading control task (e.g., Hinze & Rapp, 2014; O'Neil, Spielberger, & Hansen, 1969). Regarding generation tasks, implications are not that clear because the manipulation of the learning condition in Study 1 was unsuccessful. In line with this, Study 1 did not result in a significant effect of the learning condition; thus, generation was not more beneficial than a reading control task. At the very least, the generation tasks did not reduce participants' long-term learning; thus, they were not harmful. There were some positive and negative aspects of our studies that we care to mention and that could be applied or adapted in future work. For instance, the intelligence test we used was a rather detailed one with high quality factors; future research should use similar measures. This applies especially to the importance and predictivity of a general intelligence factor. Still, we only used the basic module of the intelligence test, which measures a general intelligence factor similar to g or to fluid intelligence encompassing knowledge components. Future studies may add the existing knowledge tests to additionally assess fluid and crystallized intelligence so that more information regarding intelligence is available.
Both of our studies used different curricular and realistic learning materials that are actually used in school and university courses; that said, the results can be generalized to actual learning materials and to information that is complex and difficult, instead of relatively abstract learning of word pairs, vocabulary, or associations. It is vital that the difficult learning tasks are perceived as more difficult than the easier control tasks and that both conditions are clearly distinguishable. As a limitation, we only observed the influence of a single manipulated learning condition, one generation task or one learning test, on one single final test assessing long-term learning. However, it is important to test if the moderating effects of intelligence remain the same when applying multiple learning tests or multiple re-reads over the course of an entire semester. In line with this, future studies should use multiple follow-up final tests to check if the effects change over time. Although the positive effect of intelligence was found in previous studies over long periods, the beneficial effect of tests could decline. One main limitation of our studies is that, in regard to intelligence, we were only able to observe correlations. Although we did infer causal effects due to the different times of measurement of intelligence and long-term learning, further causal analyses are still advantageous. Future studies should implement longitudinal designs because these are supposed to serve as a basis for causal inferences (cf. Strenze, 2007, 2015). All in all, there remain open questions regarding the tested linkage among intelligence, cognitive processes, generation, testing, and long-term learning. This applies, for instance, to the underlying effects of cognitive processing for learning.
Although we argue that intelligence is positively correlated with better retrieval as well as with deeper processing of information, and although we know that higher intelligence is generally important for learning, we do not know exactly why. The same applies to the consideration of why desirable difficulties increase cognitive processes that lead to higher long-term learning. It is possible that higher working memory capacities, the ability to handle more pieces of information simultaneously, the amount of cognitive resources, or higher memory skills are responsible for increased long-term learning. However, higher success could also be due to the abilities to reason, abstract thinking, or elaboration, or to higher processing speed, or simply to the ability to handle more cognitive effort and to overcome challenging tasks. So, in addition to general intelligence, future studies could focus on the linkage of even more aspects of cognitive abilities, like processing speed, working memory capacity, memory, or reasoning, to long-term learning and the effectiveness of generation/testing. Moreover, future work should also focus on increasing the benefit of desirable difficulties for learners of all ability levels, especially lower ones, and not only for average or highly intelligent individuals. Thus, future studies may try to design difficulties that are adequately difficult for every individual; the tasks should be difficult enough to elicit the beneficial effects of desirable difficulties but still easy enough that learners with lower intelligence are able to overcome them without being completely overwhelmed (see e.g., Minear et al., 2018). Future studies should therefore monitor and test which level of difficulty is beneficial for which individual. Lecturers could, for instance, also give lower-ability learners more time or apply graded learning aids to support them (see e.g., Hänze, Schmidt-Weigand, & Stäudel, 2010).
Besides, researchers could test if lower-ability learners would benefit from longer initial learning phases or from applications of desirable difficulties later in the learning process, when these learners have already mastered some of the basic information or formed sufficient previous knowledge (see also the above-mentioned expertise-reversal effect or the aptitude-treatment interaction; e.g., Kalyuga et al., 2003; Snow, 1989). Future work could also test if multiple applications of desirable difficulties or the usage of tests in high-stakes learning situations in actual university courses may improve long-term learning for lower-ability individuals. In general, future work could also use a more natural setting or a within-subject design, or it could even implement further difficulty nuances regarding the information as well as the desirable difficulties themselves. Although the forced application of learning tasks is rather common in university courses, it is advantageous to explore the effects of intelligence and desirable difficulties using self-regulated learning. Thus, one could explore if intelligence also moderates the decision to use generation tasks or tests instead of relatively easy re-reading tasks, and also if intelligence moderates learners' persistence while working on such difficulties.
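The moderation effect at the center of Study 2 (learning tests helped average- and high-intelligence learners but not low-intelligence learners) amounts to a condition x intelligence interaction in a regression on final-test performance. The sketch below simulates such a design and recovers the interaction with ordinary least squares; all variable names, sample sizes, and effect sizes are illustrative assumptions, not the authors' data or exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated design (illustrative only): standardized intelligence scores and
# a test-vs-re-reading learning condition, with a positive condition x
# intelligence interaction (testing helps more, the higher the intelligence).
iq = rng.normal(0.0, 1.0, n)        # standardized intelligence score
test_cond = rng.integers(0, 2, n)   # 1 = learning test, 0 = re-reading control
final = (0.5 * iq + 0.2 * test_cond + 0.3 * test_cond * iq
         + rng.normal(0.0, 1.0, n))  # final-test performance

# OLS with an interaction term: final ~ intercept + iq + condition + iq:condition
X = np.column_stack([np.ones(n), iq, test_cond, iq * test_cond])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)

# Simple slopes: the estimated testing benefit at low/average/high intelligence
for z in (-1.0, 0.0, 1.0):
    print(f"testing effect at iq = {z:+.0f} SD: {beta[2] + beta[3] * z:.2f}")
```

With a positive interaction, the simple slope of the testing condition shrinks toward zero at one standard deviation below the mean and grows at one standard deviation above it, which is the qualitative pattern the discussion reports.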
O Youth and Beauty: Children's Looks and Children's Cognitive Development
O Youth and Beauty: Children's Looks and Children's Cognitive Development. Daniel S. Hamermesh, Rachel A. Gordon, Robert Crosnoe. NBER Working Paper No. 26412, October 2019. https://www.nber.org/papers/w26412
Abstract: We use data from the 11 waves of the U.S. Study of Early Child Care and Youth Development 1991-2005, following children from ages 6 months through 15 years. Observers rated videos of them, obtaining measures of looks at each age. Given their family income, parents’ education, race/ethnicity and gender, being better-looking raised subsequent changes in measurements of objective learning outcomes. The gains imply a long-run impact on cognitive achievement of about 0.04 standard deviations per standard deviation of differences in looks. Similar estimates on changes in reading and arithmetic scores at ages 7, 11 and 16 in the U.K. National Child Development Survey 1958 cohort show larger effects. The extra gains persist when instrumenting children’s looks by their mother’s, and do not work through teachers’ differential treatment of better-looking children, any relation between looks and a child’s behavior, his/her victimization by bullies or self-confidence. Results from both data sets show that a substantial part of the economic returns to beauty result indirectly from its effects on educational attainment. A person whose looks are one standard deviation above average attains 0.4 years more schooling than an otherwise identical average-looking individual.
---
VIII. Conclusions and Implications
We have engaged in various exercises to examine how looks affect children’s cognitive development, measured by the changes in what are mostly objective measures of a child’s or adolescent’s cognitive achievement. One data set, the longitudinal U.S. Study of Early Child Care and Youth Development, followed a sample of over 1300 infants through age 15, collecting information at 11 waves based on a variety of measures of achievement, mostly objective from standardized tests. The other is the 1958 cohort of the U.K. National Child Development Study, which has followed all children born in the U.K. in a particular week up through middle age, with objective assessments of their achievement at ages 7, 11 and 16. In the SECCYD we employed contemporaries of this cohort to rate their looks based on thin slices of videos taken at each age, using averages of the normalized ratings of each child’s looks at each age. In the NCDS we use teachers’ assessments of children’s looks at ages 7 and 11.
Estimating autoregressions describing the change in cognitive achievement between waves as affected by these looks measures, and in some specifications by sets of class/income and racial/ethnic indicators, we demonstrate that looks matter—on average better-looking children show greater improvements in assessments based on objective tests. Because students who perform better in primary and secondary school are more likely to obtain additional education, these results imply that some of the labor market returns to education arise from the indirect effect of looks on educational attainment. This indirect effect is in addition to the direct effect of looks on earnings and other economic outcomes. This inference does not mean that schooling is unproductive. Rather, it implies that the benefits of schooling are tilted toward better-looking students, whose good looks lead them to greater achievements in school and to greater educational attainment than their less good-looking contemporaries.
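The estimation strategy described above is essentially a lagged-achievement (autoregressive) regression in which the looks coefficient captures the between-wave gain attributable to looks. The simulation below is a hedged illustration using the abstract's headline magnitude of roughly 0.04 SD per SD of looks; the variable names, persistence parameter, and noise level are made-up assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Illustrative simulation, not the authors' data: standardized looks ratings
# and prior-wave achievement, with a small looks effect on the next wave.
looks = rng.normal(0.0, 1.0, n)      # standardized looks rating
ach_prev = rng.normal(0.0, 1.0, n)   # achievement at the previous wave
ach_next = (0.8 * ach_prev + 0.04 * looks
            + rng.normal(0.0, 0.5, n))  # achievement at the next wave

# Autoregression: next-wave achievement on lagged achievement plus looks
X = np.column_stack([np.ones(n), ach_prev, looks])
beta, *_ = np.linalg.lstsq(X, ach_next, rcond=None)
print(f"estimated looks coefficient: {beta[2]:.3f}")
```

Conditioning on lagged achievement is what lets the looks coefficient be read as an effect on the change in achievement rather than on its level.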
The unanswered economic question here (and in research on beauty more generally) is: What are the welfare implications of the demonstrated impact of looks on cognitive development? On the side of teachers, do they spend more time teaching better-looking children without subtracting from time spent with less good-looking children? Or is their time merely switched from the bad- to the good-looking? The same questions apply to parents: Do parents tilt their time toward better-looking children without decreasing time spent with their less good-looking offspring; or do they spend more time with them while reducing time allocated toward less good-looking offspring? To the extent that interactions with children’s peers affect their cognitive development, the same questions might be asked about the behavior of a child’s fellow students.
In all cases, if teachers merely add to time spent with good-looking children, one might argue that this apparent discrimination is detrimental only to the extent that teachers’ and parents’ extra time might have been more productively allocated to children who would most benefit from it at the margin. If they switch time from bad- to good-looking children, and assuming teachers and parents would allocate their time efficiently absent looks-based discrimination, resources are shifted inefficiently to a use that is less productive at the margin of their allocations of time. We have explored three plausible mechanisms by which better looks might produce higher achievement—teachers’ closeness to and conflict with the student, the child’s behavior and how s/he is treated by other children, as reported by their mothers, and the child’s self-confidence. Although each was associated in expected ways with looks and gains in achievement, none greatly affected the estimated impacts of looks on cognitive development. Inferring the indirect pathways will require studies designed specifically to consider how lookism might operate from early childhood through adolescence.
Studies are needed that connect what is known from the developmental psychology literature to observational studies tracking the natural unfolding of development and that are specifically focused on looks. Existing measures of relationships, identities and discrimination can be adapted to measure how others respond to children’s looks and how youths internalize those responses, including ratings probing looks-based teasing, avoidance or attraction, and experience-sampling methods capturing how teachers may differentially respond to equally-able students with better- and worse-rated looks. If such measures were embedded into longitudinal studies with the kinds of measurements of attractiveness and standardized achievement used here, the mechanisms generating the robust associations evident here could be better understood.
Brand preferences: Increasing polarization in preference partisanship since 2016
Schoenmueller, Verena and Netzer, Oded and Stahl, Florian, Polarized America: From Political Partisanship to Preference Partisanship (October 17, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3471477
Abstract: In light of the widely discussed political divide post the 2016 election, we investigate in this paper whether this divide extends to the preferences of individuals for commercial brands, media sources and nonprofit organizations and how it evolved post the election. Using publicly available social media data of over 150 million Twitter users’ brand followerships we establish that commercial brands and organizations are affiliated with the consumers’ political ideology. We create a mosaic of brand preferences that are associated with either side of the political spectrum, which we term preference partisanship, and explore the extent to which the political divide manifests itself also in the daily lives of individuals. Moreover, we identify an increasing polarization in preference partisanship since Donald Trump became President of the United States. Consistent with compensatory consumption theory, we find the increase in polarization post-election is stronger for liberals relative to conservatives. From a brand perspective, we show that brands can affect their degree of political polarization by taking a political stand. Finally, after coloring brands as conservative or liberal we investigate the systematic differences and commonalities between them. We provide a publicly available API that allows access to our data and results.
Keywords: Political Marketing, Social Media, Data Mining, Political Polarization, Branding
JEL Classification: M31
We perceive facial expressions of emotions in ethnic outgroup members as less intense than those of ingroup members’ expressions, as if they had a more superficial emotional life
Fischer, Agneta, Kai Jonas, and Pum Kommattam. 2019. “Perceived to Feel Less: Intensity Bias in Interethnic Emotion Perception.” Journal of Experimental Social Psychology, Volume 84, September 2019, 103809. https://doi.org/10.1016/j.jesp.2019.04.007
Abstract: The current research focuses on a bias in intensity perception and tests the hypothesis that individuals perceive facial expressions of emotions in ethnic outgroup members as less intense than those of ingroup members’ expressions. In addition to nine previously conducted and reported studies (focussing only on embarrassment, Kommattam, Jonas, & Fischer, 2017, Studies 1 - 9), we conducted a series of three additional studies including white Dutch, U.S., and U.K. participants (N total = 3201) judging the intensity of nine different emotions displayed by different ethnic group members. A random effects model meta-analysis shows that individuals perceive less intense emotions in ethnic outgroup members than in ethnic ingroup members (d = .33 [0.08 – 0.59], (r = .16)). This intensity bias in interethnic emotion perception points to a systematic downplaying of the intensity of outgroup emotions and suggests an empathy gap towards members from other ethnic groups.
Keywords: emotion perception, facial expressions, intensity bias, intergroup empathy, empathy gap
---
General Discussion
Our data provide support for the idea that European-descended individuals perceive less intense emotions on the faces of ethnic outgroup members. We used a meta-analytical approach and found an overall effect, showing that white European, European-American, and British perceivers judged emotions expressed by models of Turkish or Moroccan descent as less intense than emotions expressed by Europeans or European-Americans. A random effects model allowed us to estimate the effect size of this intensity bias in the population. The effect was small (d = .33, r = .16) and shows variability (as indicated by the confidence interval [0.08 – 0.59]), but provided clear support for an intensity bias. In Studies 5 and 6, reversed effects were found (Kommattam, Jonas, & Fischer, 2017, Studies 5 and 6), which may suggest that the intensity bias only occurs when using a specific stimulus set. However, the fact that we again found the bias in a third stimulus set (Study 12) speaks against this explanation. We think that these reversed effects may be due to the unfortunately low power in those two studies (N = 59 and N = 60, respectively, while N = 729 in Study 12).
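As a quick sanity check, the two reported effect-size metrics are mutually consistent: applying the standard d-to-r conversion for (approximately) equal group sizes, r = d / sqrt(d^2 + 4), to the meta-analytic d reproduces the stated correlation:

```python
import math

d = 0.33                       # reported meta-analytic standardized mean difference
r = d / math.sqrt(d ** 2 + 4)  # standard d-to-r conversion (equal group sizes)
print(round(r, 2))             # → 0.16, matching the reported correlation
```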
In the studies included in the meta-analysis, the intercultural or interethnic context was made salient. We would like to stress that we defined this as a precondition for the intensity bias to occur (see also Hogg & Turner, 1987). Indeed, we did not find an intensity bias in earlier studies in which we did not make the intergroup context salient (i.e., without the ranking task). These studies were therefore not included in the meta-analysis, as we defined the presence of an interethnic context as one of the inclusion criteria. Making the interethnic context salient ensured that the faces of strangers were seen in an interethnic context, and thus categorized as belonging to an ethnic ingroup or outgroup.
We also found evidence for the two boundary conditions. First, information on the emotional context reduces the intensity bias, as reflected in a difference between conditions where emotional (e.g., ‘this woman just won the lottery’) versus neutral or no information was provided. This boundary condition is crucial because clear emotional cues are often absent in everyday interactions. In professional or educational settings, or when interacting with strangers, we may often see subtle signs of emotions, but we do not know what has happened to that person, nor do we want to ask. Second, the intensity bias was more pronounced for so-called secondary emotions, namely contempt, pride, and embarrassment. We cannot draw conclusions on whether these latter findings support one of the two explanations, however, because the secondary emotions are also the least well recognized. In short, the bias may increase when the context is unknown and when perceivers do not know exactly what happened to a person. Both boundary conditions suggest that more room for interpretation increases the intensity bias in an interethnic context.
The current data speak to various lines of research. The intensity bias may be an indicator of an interethnic empathy gap (Forgiarini, Gallucci, & Maravita, 2011; Gutsell & Inzlicht, 2010; Gutsell & Inzlicht, 2012; Trawalter, Hoffman, & Waytz, 2012), as well as a signal of dehumanization (Harris & Fiske, 2006) or infra-humanization (Castano & Giner-Sorolla, 2006; Čehajić, Brown, & Gonzales, 2009; Haslam, Bain, Douge, Lee, & Bastian, 2005; Haslam & Loughnan, 2014; Leyens et al., 2000; Vaes, Leyens, Paladino, & Miranda, 2012; Wohl, Hornsey & Bennett, 2012). We think the intensity bias plays a role especially in small-scale social interactions, where it may lead to misunderstandings, conflicts, or exclusion of ethnic outgroup members. The interpretation that members of ethnic outgroups have a more superficial emotional life may lead to their feelings being ignored or disregarded. Such perceptions may in turn be fertile ground for the rise of xenophobia targeting Muslims, migrants, and asylum seekers across Europe and the United States.
There are also some limitations to the present research. First, it should be emphasized that our data do not justify any conclusions on directionality. The perception of less intense emotions in ethnic outgroup members may lead to less empathy or more infra-humanization, but the reverse relations are also possible. Further research is needed to determine how these two biases are related. Second, we only included participants of European descent perceiving emotions displayed by ethnic outgroup members, especially those of Turkish or Moroccan descent. It is important to test a larger variety of perceiver groups, beyond European-descended perceivers, in order to examine the generalizability of the effect.
Despite these limitations, we believe these findings provide new and valuable insights. We demonstrated the presence of an intensity bias across nine different emotions, different ingroup and outgroup models, and different European-descended white samples, with a total of 3,201 participants. Furthermore, we established the effect using low-intensity still images (Kommattam, Jonas, & Fischer, 2017, Studies 1–4 and 7–9), as well as with standardized dynamic stimuli across three levels of intensity (Study 12). Finally, we performed a meta-analysis, which allowed us to estimate an effect size in the population, unlike null hypothesis significance testing (Field & Gillet, 2010; Rosenthal, 1995). Accordingly, these findings may be seen as a basis for follow-up research testing different moderators of the intensity bias.
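The random-effects pooling behind an estimate like d = .33 can be sketched with a minimal DerSimonian-Laird computation. The per-study effect sizes and sample sizes below are hypothetical placeholders, not the values from the twelve studies; the d-to-r conversion uses the standard equal-group approximation r = d / sqrt(d² + 4).

```python
import math

# Hypothetical per-study (Cohen's d, total N) values -- illustrative only,
# not the actual study-level results from this meta-analysis.
studies = [(0.45, 120), (0.30, 210), (-0.10, 59), (0.50, 300), (0.25, 729)]

def d_variance(d, n):
    # Approximate sampling variance of d, assuming two equal groups of n/2
    n1 = n2 = n / 2
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

ds = [d for d, n in studies]
vs = [d_variance(d, n) for d, n in studies]

# Fixed-effect step (inverse-variance weights), needed to estimate tau^2
w_fixed = [1 / v for v in vs]
d_fixed = sum(w * d for w, d in zip(w_fixed, ds)) / sum(w_fixed)
q = sum(w * (d - d_fixed) ** 2 for w, d in zip(w_fixed, ds))
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - (len(ds) - 1)) / c)  # between-study variance

# Random-effects pooling: weights incorporate tau^2
w_re = [1 / (v + tau2) for v in vs]
d_pooled = sum(w * d for w, d in zip(w_re, ds)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)  # 95% CI

# Convert pooled d to r (equal-group approximation)
r = d_pooled / math.sqrt(d_pooled**2 + 4)
print(f"pooled d = {d_pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], r = {r:.2f}")
```

With the real study-level data, this procedure yields both the pooled effect and the confidence interval reported in the abstract.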
Families with greater cannabis use showed poorer general cognitive ability, yet within families, twins with more use rarely had lower scores; overall, there is little evidence of a causal effect of cannabis on cognition
Investigating the Causal Effect of Cannabis Use on Cognitive Function with a Quasi-Experimental Co-Twin Design. J. Megan Ross et al. Drug and Alcohol Dependence, November 2 2019, 107712. https://doi.org/10.1016/j.drugalcdep.2019.107712
Highlights
• Cannabis use and cognition showed small but significant phenotypic associations.
• Families with greater cannabis use showed poorer general cognitive ability.
• Yet within families, twins with higher use rarely had lower cognitive scores.
• Overall, there was little evidence for causal effect of cannabis on cognition.
Abstract
Background It is unclear whether cannabis use causes cognitive decline; several studies show an association between cannabis use and cognitive decline, but quasi-experimental twin studies have found little support for a causal effect. Here, we evaluate the association of cannabis use with general cognitive ability and executive functions (EFs) while controlling for genetic and shared environmental confounds in a longitudinal twin study.
Methods We first examined the phenotypic associations between cannabis initiation, frequency, and use disorder with cognitive abilities, while also controlling for pre-use general cognitive ability and other substance involvement. We tested the concurrent association between the cannabis use variables and cognitive abilities in late adolescence and young adulthood and the longitudinal association between cannabis use variables during adolescence and young adulthood cognitive abilities. Next, we used multilevel models to test whether these relations reflect between- and/or within-twin pair associations.
Results Phenotypically, cannabis use was related to poorer cognitive functioning, although most associations were negligible after accounting for other substance use. Nevertheless, there were few significant within-family twin-specific associations, except that age 17 cannabis frequency was associated with worse age 23 Common EF and general cognitive ability.
Conclusions We found little support for a potential causal effect of cannabis use on cognition, consistent with previous twin studies. Results suggest that cannabis use may not cause decline in cognitive ability among a normative sample of cannabis users.
Keywords: cannabis; intelligence; executive control; adolescence; young adulthood
---
An alternative strategy is to use monozygotic (MZ) twins discordant for various indices of
cannabis use, which controls for all genetic and shared environmental confounds as well as age, sex, cohort, and parental characteristics. Additionally, same-sex dizygotic (DZ) twins discordant for cannabis use control for all shared environmental confounds, age, sex, cohort, and parental characteristics, but only partially for genetic differences between families. To our knowledge, three studies have used this design to address the association between cannabis use and cognition, although there has been extensive research using the design to examine causal associations with alcohol (Kendler et al., 2000; Prescott et al., 1999; Young-Wolffe et al., 2011). We chose the discordant twin design so that we could compare our results directly with previous studies of the association between cannabis use and cognition.
First, Lyons et al. (2004) examined MZ twins discordant for use 20 years after regular use, and found a significant difference between twins on only one of 50+ measures of cognition. Second, Jackson et al. (2014) found no evidence for a dose-dependent relationship or significant differences in cognition among MZ twins discordant for cannabis use. Similarly, Meier et al. (2017) found no evidence for differences in cognition among a combined sample of MZ and DZ twins discordant for cannabis dependence or use frequency. Thus, quasi-experimental, co-twin control designs have yielded little evidence that cannabis causes poorer cognition.
We expanded on these previous studies in several ways. First, we evaluated the association
between cannabis use and EFs in addition to intelligence; specifically, we examined three EF
components with factor scores derived from a well-validated latent variable model (Friedman et al., 2016; Miyake et al., 2000; Miyake and Friedman, 2012). Most of the discordant twin studies of the association between cannabis use and cognition have focused on general cognitive ability, not on EF. Second, we also considered whether cannabis–cognition associations are explained by other substance use, given the high frequency of polysubstance use (Kedia et al., 2007). Third, we assessed participants during adolescence and young adulthood, while the above-mentioned studies focused on twins up to age 20 and older than age 45.
From 2012... This study contributes to the scholarship of women’s sexuality by utilizing a hermeneutic phenomenological lens, with feminist, Tantric nondual underpinnings
From 2012... Transcending the 'grotesque' illumination of female sexuality. L. Marie Damgaard. Master's Thesis, Faculty of Education, University of Lethbridge, 2012. https://hdl.handle.net/10133/5567
Abstract: This study contributes to the scholarship of women’s sexuality by utilizing a hermeneutic phenomenological lens, with feminist, Tantric nondual underpinnings to explore a group of co-researcher women's understanding of their self-defined ‘grotesque’ sexual experiences. Furthermore, the analysis of their interviews sought to discover the meanings and resulting transformations that these women have undertaken in their experience of sexuality, self, and beyond, which they ascribe to these ‘grotesque’ sexual experiences. Analysis and interpretation of the transcripts resulted in the emergence of several subthemes. The subthemes fell into three main themes, including, Theme A: Patriarchy as a Sculptor of the ‘Grotesque’, Theme B: Denying Authentic Self, and Theme C: Becoming Transformed Through Transgression. Ultimately, the findings of this research study are significant in that they give value to women’s experience, while simultaneously exploring the meaning and understanding that these co-researchers acquired from reflecting on these experiences.
---
Daniluk (1998) sums up, in the passage below, many of the binary traps that the women in her study said they are forced to navigate:
"Being based upon or interpreted through patriarchal lenses, the common elements related to women’s sexuality in the religious teachings mentioned above include: a dualistic separation between mind and body, spirit and sexuality; an emphasis on intercourse and procreation in circumscribing the definition and purpose of sex; valuing and treasuring of virginity; insistence on sexual exclusivity between married partners; sanctions against sex outside of marriage; and admonishment of masturbation, sexual fantasies, and homosexuality." (p. 3)
The accumulated weight of these expectations has devastating consequences for sexual exploration through adolescence and into adulthood. Religion creates a structure in society through the appropriation and recreation of patriarchal ideas in practice and teachings (Daniluk & Browne, 2008). These findings are consistent with Ogden’s research on how all women must work through the impacts of religion on their sexuality (2008, 2013, 2018). Daniluk’s (1998) work also supports the notion that women must consistently grapple with religious pressures on their sexuality, whether or not they are part of a religious community. Religion’s impact on women thus creates a context that contributes to the Egoic structure in individuals (Almaas, 2000), especially women, as their value is often determined by others and they spend time trying to stay on the ‘right’ or ‘proper’ side of the binary trap. It is important to note the emotional toll women experience when attempting to be perfect and maintain all of these external standards: the debilitating, deep shame felt when they realize the impossibility of perfectionism (Brown, 2010), and the personal and/or societal judgment they face if their preferences do not match religious outlooks. These binaries continue to intertwine with gender expectations and therefore shape women’s sense of identity and self.
A preliminary but methodologically improved investigation of the relationships between major personality dimensions and human ejaculate quality
A preliminary but methodologically improved investigation of the relationships between major personality dimensions and human ejaculate quality. Tara DeLecce et al. Personality and Individual Differences, Volume 153, January 15 2020, 109614. https://doi.org/10.1016/j.paid.2019.109614
Abstract: Some research has reported relationships between personality dimensions and ejaculate quality, but this research has methodological limitations. In the current study, we investigated the relationships between six major personality dimensions and ejaculate quality in a design that offered several methodological improvements over previous research. Forty-five fertile men provided two masturbatory ejaculates and completed a measure of personality (HEXACO-60) assessing honesty-humility, emotionality, extraversion, agreeableness, conscientiousness, and openness to experience. Agreeableness was the only personality dimension associated with ejaculate quality, after controlling statistically for participant age, Body Mass Index (BMI), and abstinence duration, and this association was negative. However, once the covariates of BMI, age, and abstinence duration were included in a hierarchical regression (along with the six personality dimensions), agreeableness was no longer a statistically significant predictor of ejaculate quality, although the direction of the relationship remained negative. The current study adds to previous research documenting that psychological attributes—including major dimensions of personality—may be associated with ejaculate quality. We highlight limitations of the current research and identify directions for future study.
Keywords: Personality; Agreeableness; Ejaculate quality; Semen analysis; HEXACO
Saturday, November 2, 2019
Evolutionary Psychology and Suicidology
Evolutionary Psychology and Suicidology. John F. Gunn III, Pablo Malo, and C. A. Soper. SAGE Handbook of Evolutionary Psychology, Vol 3, Part 7, Chapter 69. https://www.researchgate.net/publication/332142286_Evolutionary_Psychology_and_Suicidology_Draft_chapter_for_SAGE_Handbook_of_Evolutionary_Psychology_2019
ABSTRACT: Suicide – deliberate, intentional self-killing – is a major cause of human mortality and a global public health concern. Suicidology emerged as an interdisciplinary field focused on the prediction and prevention of suicide. Progress has been disappointing: suicide rates resist efforts to reduce them, and there is no theoretical consensus on suicide’s causation. At least since the writing of Sigmund Freud, the search for a scientific understanding of suicide has included theories with evolutionary links. Apparently a human universal behavior, suicide presents a longstanding evolutionary puzzle: the fitness cost of suicide, of being dead, is predictably injurious to the individual’s reproductive future. Some adaptationist theories have been advanced from the neo-Darwinian idea of inclusive fitness: selection may produce behaviors that, while self-injurious for the individual, favor the reproductive prospects of the individual’s genetic kin, but there are multiple theoretical and empirical problems with such proposals. Others suggest “by-product” explanations: suicide is not in itself adaptive, but may be a noxious side-effect of evolved adaptations that are fitness enhancing overall. Most of these proposals coalesce around the central idea that pain, particularly social pain – a vital protective signal which demands the organism take action to end or escape it – may incidentally provoke suicide as a means to achieve that escape. A minimal level of cognitive functioning appears to operate as a second, incidentally suicidogenic, adaptation among adult humans. Together these twin “pain-and-brain” conditions appear to be both necessary and sufficient for suicide, a formulation that implicitly reframes self-killing as an adaptive problem in the evolution of the human species. The explanatory focus shifts, on this basis, from attempting to identify causes of suicide to identifying adaptive solutions that operate to prevent suicide.
A general framework recently proposed by Soper envisages multiple lines of “pain-type” and “brain-type” antisuicide defenses, transmitted culturally and genetically across generations. One class of evolved psychological mechanisms, “keepers”, is reviewed in detail. Soper posits that keepers mobilize as emergency interventions among people at imminent risk of taking their own lives. Challengingly, the hypothesized features of keepers appear to match symptoms of several common mental disorders, including depression, addictions, non-suicidal self-harm, and psychoses. The model would help to explain why it has not proved possible to predict suicide at the individual level: any and all usefully predictive clues would probably already have been exploited and exhausted by evolved antisuicide defenses. The framework may also account for, among other things, the close association of diverse, and often comorbid, psychopathologies with suicidal ideation, but only a weak association of psychopathologies with the progression from ideation to suicidal action. Major implications of, and problems with, Soper’s conceptual approach are discussed. In conclusion, it is suggested that evolutionary psychology offers fresh perspectives for suicidology, and potentially a means to achieve a long overdue, and much needed, integration and unification of suicide theory.
“Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each” (Darwin, 1859, p. 201).
Suicide, “the act of deliberately killing oneself” (W.H.O., 2014, p. 12), accounts for some one million deaths around the world each year and ends about 1.4% of human lives: more people die by intentional self-killing than from wars, accidents, homicide, and all other forms of violent death put together (W.H.O., 2013). Many millions of the living are affected, left to deal with the aftermath of others’ suicides (Cerel et al., 2019). A cross-cultural killer, suicide is viewed as a global public health challenge – a major, and preventable, cause of mortality and misery (Satcher, 1999; W.H.O., 2012, 2014). A new multi-disciplinary field of research, suicidology, emerged in the second half of the 20th century, focused on trying to understand and tackle the problem (American Association of Suicidology, 2019; Shneidman, 2001).
But frustratingly, by most accounts, decades of concerted effort have made little impression. While other forms of violent mortality have markedly declined (Pinker, 2011), the global suicide rate is probably much the same now as it was 50 or 100 years ago (Linehan, 2011; Nock et al., 2012; Nock, Ramirez, & Rankin, 2019). Indeed, self-killings in the USA are reportedly on the increase (Hedegaard, Curtin, & Warner, 2018). Progress in building a theoretical base has been equally disappointing, the causation of suicide remaining a scientific mystery (Lester, 2019; Soper, 2019b). As a report published by the World Health Organization admits, “we continue to lack a firm understanding of why, when, and among whom suicidal behavior will occur” (Nock, Borges, & Ono, 2012a, p. 222). Diverse ideas have accumulated over more than a hundred years of theorizing – reviews of prominent proposals can be found in the general suicidology literature (Gunn, 2019; Gunn & Lester, 2014; O’Connor & Portzky, 2018; Paniagua, Black, Gallaway, & Coombs, 2010; Selby, Joiner, & Ribeiro, 2014). But perhaps reflecting widespread acceptance that the proximal causes of suicide are complex and multifactorial, no model has won a consensus of support (Hjelmeland, Jaworski, Knizek, & Marsh, 2019; Lester, 2019). Suicidology is fragmented to the extent that a recent meta-review describes the field as “still in a preparadigmatic phase” (Franklin et al., 2017, p. 188) – that is, still in its infancy.
A subset of theories with evolutionary links shows a long-held, if often implicit, acceptance that a coherent understanding of suicide needs to take its place, alongside other fields of modern life sciences, within an evolutionary paradigm. A hundred years ago, Freud (1920/1991) proposed a potentially suicidogenic “death drive” in the context of his broader theory of libido, a framework which, in keeping with the evolutionist spirit of the age, sought to build on the Darwinian premise that selection is predicated on sexual success (Gilbert, 1989; Litman, 1967; Tolaas, 2005).
Psychoanalysis failed to find a satisfactory explanation for suicide, as its practitioners acknowledged at the time (Zilboorg, 1936a), although comparable ideas continue to circulate in suicidology (Selby et al., 2014). Many alternative approaches have been advanced in the intervening century, as we will discuss, but suicide is still widely viewed as an evolutionary puzzle (Aubin, Berlin, & Kornreich, 2013; Blasco-Fontecilla, Lopez-Castroman, Gomez-Carrillo, & Baca-Garcia, 2009; Confer et al., 2010). The conundrum follows from our opening quotation: how is it that, seemingly contradicting Darwin’s (1859) prediction, selection permits so self-destructive a behavior?
In search of answers, we review and critique prominent ways in which evolutionary ideas have informed suicide research, and outline a new general approach, the modelling of suicide as an evolutionary by-product which, we believe, offers scope for long overdue convergence in suicide theory and, it is hoped, prospects for saving lives (Gunn, 2017; Humphrey, 2018; Soper, 2016, 2017, 2018).
7 Concluding comments
At one level this chapter carries an encouraging message. The evolutionary approach would seem, in principle, capable of bringing much needed and long overdue unity and coherence to suicidology’s current morass of unconnected theory. A “pain-and-brain” framework in particular could offer a rallying point for numerous, superficially disparate, theoretical positions, including IPTS (Van Orden et al., 2010), IMV (O’Connor et al., 2016), SPM (Gunn, 2017) and others, which essentially characterize suicide as a way to escape intolerable emotional states. None would appear incompatible with the view that pain, as a biological imperative, demands action to end or escape it, while regular adult human cognition offers intentional self-killing as an effective, but genetically destructive, means to answer that demand.
But the corollary, that humans are protected from suicide by evolved psychological mechanisms, may be harder to digest. Blind to our own instinctual motivations (Cosmides & Tooby, 1994), we may be oblivious to their functioning. The idea of antisuicide defenses is not new (e. g., Himmelhoch, 1988; Hundert, 1992; Miller, 2008), but it is only now gaining momentum in the research agenda (Humphrey, 2011, 2018; Soper, 2016, 2017, 2018, 2019a, 2019b; Soper & Shackelford, 2018). Their implications may run wide and deep, and call some long-held preconceptions into question: as Lester (2019) finds, accepting the ramifications requires close reading of the arguments.
Progress is not helped by what Soper (2018) believes to be a two-way blockage in communication between suicidology and evolutionary psychology that may go beyond the kind of institutional fragmentation seen elsewhere in psychological sciences (Staats, 2004). Soper posits that suicide and evolution, each for its own reasons, are concepts that many people, researchers included, find intuitively uncomfortable to think or talk about. It may be for this reason that, in one direction, evolutionary psychology has largely ignored suicide, as has psychology generally (J. R. Rogers, 2001): in view of the gravity and ubiquity of suicide as a human phenomenon, and the conspicuous evolutionary puzzle it presents, remarkably little has been written on the subject from an evolutionary psychology perspective, at least until recent years. It is a reticence traceable to Darwin himself: as Tolaas (2005) points out, it is remarkable that a thinker as fearless as Darwin did not confront suicide as a potential problem for his theory. In the other direction, suicidology has largely ignored evolutionary psychology. Illustratively, a review article promisingly titled “Evolutionary processes in suicide” (Chiurliza, Rogers, Schneider, Chu, & Joiner, 2017) attempts to appraise its research group’s ideas (and their ideas alone) without reference to evolutionary psychology’s corpus of texts and tenets – a surprising omission given that evolutionary psychology, “the study of behavior from an evolutionary perspective” (Cornwell, Palmer, Guinther, & Davis, 2005, p. 369), is centrally relevant.
This chapter proposes a pragmatic consilience between the two fields. An evolutionary stance would not seem in itself to entail a radical departure for suicidology: it would, rather, follow the lead set by Freud (1920/1991), Shneidman (1985), Joiner (2005), and other prominent researchers who have drawn on evolutionary ideas across more than a century. Evolutionary psychology could coalesce, not replace, suicidology’s existing theoretical content. There may be little to lose in such an incremental move. The upsides, on the other hand, may be great. Evolutionary psychology offers fresh perspectives and ready tools that could be decisive in a battle currently at stalemate. Evolutionary psychology and suicidology deserve each other’s attention.
ABSTRACT: Suicide – deliberate, intentional self-killing – is a major cause of human mortality and a global public health concern. Suicidology emerged as an interdisciplinary field focused on the prediction and prevention of suicide. Progress has been disappointing: suicide rates resist efforts to reduce them, and there is no theoretical consensus on suicide’s causation. At least since the writing of Sigmund Freud, the search for a scientific understanding of suicide has included theories with evolutionary links. Apparently a human universal behavior, suicide presents a longstanding evolutionary puzzle: the fitness cost of suicide, of being dead, is predictably injurious for the individual’s reproductive future. Some adaptationist theories have been advanced from the neo-Darwinian idea of inclusive fitness: selection may produce behaviors that, while self-injurious for the individual, favor the reproductive prospects of individual’s genetic kin, but there are multiple theoretical and empirical problems with such proposals. Others suggest “by-product” explanations, that suicide is not in itself adaptive, but may be a noxious side-effect of evolved adaptations that are fitness enhancing overall. Most of these proposals coalesce around the central idea that pain, particularly social pain, a vital protective signal which demands the organism take action to end or escape it, may incidentally provoke suicide as a means to achieve that escape. A minimal level of cognitive functioning appears to operate as a second, incidentally suicidogenic, adaptation among adult humans. Together these twin “pain-and- brain” conditions appear to be both necessary and sufficient for suicide, a formulation that implicitly reframes self-killing as an adaptive problem in the evolution of the human species. The explanatory focus shifts, on this basis, from attempting to identify causes of suicide, to identifying adaptive solutions that operate to prevent suicide. 
A general framework recently proposed by Soper envisages multiple lines of “pain-type” and “brain-type” antisuicide defenses, transmitted culturally and genetically across generations. One class of evolved psychological mechanisms, “keepers”, is reviewed in detail. Soper posits keepers to mobilize as emergency interventions among people at imminent risk of taking their own lives. Challengingly, the hypothesized features of keepers appear to match symptoms of several common mental disorders, including depression, addictions, non-suicidal self-harm, and psychoses. The model would help to explain why it has not proved possible to predict suicide at the individual level: any and all usefully predictive clues would probably already have been exploited and exhausted by evolved antisuicide defenses. The framework may also account for, among other things, the close association of diverse, and often comorbid, psychopathologies with suicidal ideation, but the only weak association of psychopathologies with the progression from ideation to suicidal action. Major implications of, and problems with, Soper’s conceptual approach are discussed. In conclusion, it is suggested that evolutionary psychology offers fresh perspectives for suicidology, and potentially a means to achieve a long overdue, and much needed, integration and unification of suicide theory.
“Natural selection will never produce in a being anything injurious to itself, for natural selection acts solely by and for the good of each” (Darwin, 1859, p. 201).
Suicide, “the act of deliberately killing oneself” (W.H.O., 2014, p. 12), accounts for some one million deaths around the world each year and ends about 1.4% of human lives: more people die by intentional self-killing than from wars, accidents, homicide and all other forms of violent death put together (W.H.O., 2013). Many millions of the living are affected, left to deal with the aftermath of others’ suicides (Cerel et al., 2019). A cross-cultural killer, suicide is viewed as a global public health challenge – a major, and preventable, cause of mortality and misery (Satcher, 1999; W.H.O., 2012, 2014). A new multi-disciplinary field of research emerged in the second half of the 20th century, suicidology, focused on trying to understand and tackle the problem (American Association of Suicidology, 2019; Shneidman, 2001).
But frustratingly, by most accounts, decades of concerted effort have made little impression. While other forms of violent mortality have markedly declined (Pinker, 2011), the global suicide rate is probably much the same now as it was 50 or 100 years ago (Linehan, 2011; Nock et al., 2012; Nock, Ramirez, & Rankin, 2019). Indeed, self-killings in the USA are reportedly on the increase (Hedegaard, Curtin, & Warner, 2018). Progress in building a theoretical base has been equally disappointing, the causation of suicide remaining a scientific mystery (Lester, 2019; Soper, 2019b). As a report published by the World Health Organization admits, “we continue to lack a firm understanding of why, when, and among whom suicidal behavior will occur” (Nock, Borges, & Ono, 2012a, p. 222). Diverse ideas have accumulated over more than a hundred years of theorizing – reviews of prominent offerings can be found in the general suicidology literature (Gunn, 2019; Gunn & Lester, 2014; O’Connor & Portzky, 2018; Paniagua, Black, Gallaway, & Coombs, 2010; Selby, Joiner, & Ribeiro, 2014). But perhaps reflecting widespread acceptance that the proximal causes of suicide are complex and multifactorial, no model has won a consensus of support (Hjelmeland, Jaworski, Knizek, & Marsh, 2019; Lester, 2019). Suicidology is fragmented to the extent that a recent meta-review describes the field as “still in a preparadigmatic phase” (Franklin et al., 2017, p. 188) – that is, still in its infancy.
A subset of theories with evolutionary links shows a long-held, if often implicit, acceptance that a coherent understanding of suicide needs to take its place, alongside other fields of modern life sciences, within an evolutionary paradigm. A hundred years ago, Freud (1920/1991) proposed a potentially suicidogenic “death drive” in the context of his broader theory of libido, a framework which, in keeping with the evolutionist spirit of the age, sought to build on the Darwinian premise that selection is predicated on sexual success (Gilbert, 1989; Litman, 1967; Tolaas, 2005).
Psychoanalysis failed to find a satisfactory explanation for suicide, as its practitioners acknowledged at the time (Zilboorg, 1936a), although comparable ideas continue to circulate in suicidology (Selby et al., 2014). Many alternative approaches have been advanced in the intervening century, as we will discuss, but suicide is still widely viewed as an evolutionary puzzle (Aubin, Berlin, & Kornreich, 2013; Blasco-Fontecilla, Lopez-Castroman, Gomez-Carrillo, & Baca-Garcia, 2009; Confer et al., 2010). The conundrum follows from our opening quotation: how is it that, seemingly contradicting Darwin’s (1859) prediction, selection permits so self-destructive a behavior?
In search of answers, we review and critique prominent ways in which evolutionary ideas have informed suicide research, and outline a new general approach, the modelling of suicide as an evolutionary by-product which, we believe, offers scope for long overdue convergence in suicide theory and, it is hoped, prospects for saving lives (Gunn, 2017; Humphrey, 2018; Soper, 2016, 2017, 2018).
7 Concluding comments
At one level this chapter carries an encouraging message. The evolutionary approach would seem, in principle, capable of bringing much needed and long overdue unity and coherence to suicidology’s current morass of unconnected theory. A “pain-and-brain” framework in particular could offer a rallying point for numerous, superficially disparate, theoretical positions, including IPTS (Van Orden et al., 2010), IMV (O’Connor et al., 2016), SPM (Gunn, 2017) and others, which essentially characterize suicide as a way to escape intolerable emotional states. None would appear incompatible with the view that pain, as a biological imperative, demands action to end or escape it, while regular adult human cognition offers intentional self-killing as an effective, but genetically destructive, means to answer that demand.
But the corollary, that humans are protected from suicide by evolved psychological mechanisms, may be harder to digest. Blind to our own instinctual motivations (Cosmides & Tooby, 1994), we may be oblivious to their functioning. The idea of antisuicide defenses is not new (e.g., Himmelhoch, 1988; Hundert, 1992; Miller, 2008), but it is only now gaining momentum in the research agenda (Humphrey, 2011, 2018; Soper, 2016, 2017, 2018, 2019a, 2019b; Soper & Shackelford, 2018). Their implications may run wide and deep, and call some long-held preconceptions into question: as Lester (2019) finds, accepting the ramifications requires close reading of the arguments.
Progress is not helped by what Soper (2018) believes to be a two-way blockage in communication between suicidology and evolutionary psychology that may go beyond the kind of institutional fragmentation seen elsewhere in psychological sciences (Staats, 2004). Soper posits that suicide and evolution, each for its own reasons, are concepts that many people, researchers included, find intuitively uncomfortable to think or talk about. It may be for this reason that, in one direction, evolutionary psychology has largely ignored suicide, as has psychology generally (J. R. Rogers, 2001): in view of the gravity and ubiquity of suicide as a human phenomenon, and the conspicuous evolutionary puzzle it presents, remarkably little has been written on the subject from an evolutionary psychology perspective, at least until recent years. It is a reticence traceable to Darwin himself: as Tolaas (2005) points out, it is remarkable that a thinker as fearless as Darwin did not confront suicide as a potential problem for his theory. In the other direction, suicidology has largely ignored evolutionary psychology. Illustratively, a review article promisingly titled “Evolutionary processes in suicide” (Chiurliza, Rogers, Schneider, Chu, & Joiner, 2017) attempts to appraise its research group’s ideas (and their ideas alone) without reference to evolutionary psychology’s corpus of texts and tenets – a surprising omission given that evolutionary psychology, “the study of behavior from an evolutionary perspective” (Cornwell, Palmer, Guinther, & Davis, 2005, p. 369), is centrally relevant.
This chapter proposes a pragmatic consilience between the two fields. An evolutionary stance would not seem in itself to entail a radical departure for suicidology: it would, rather, follow the lead set by Freud (1920/1991), Shneidman (1985), Joiner (2005), and other prominent researchers who have drawn on evolutionary ideas across more than a century. Evolutionary psychology could coalesce, not replace, suicidology’s existing theoretical content. There may be little to lose in such an incremental move. The upsides, on the other hand, may be great. Evolutionary psychology offers fresh perspectives and ready tools that could be decisive in a battle currently at stalemate. Evolutionary psychology and suicidology deserve each other’s attention.
Association between childhood adoption & poor mental health: not fully attributable to stressful environments; it is partly explained by differences in genetic risk between adoptees & those not adopted
Childhood adoption and mental health in adulthood: The role of gene-environment correlations and interactions in the UK Biobank. Kelli Lehto et al. Biological Psychiatry, October 31 2019. https://doi.org/10.1016/j.biopsych.2019.10.016
Abstract
Background Being adopted early in life, an indicator of exposure to early life adversity, has been consistently associated with poor mental health outcomes in adulthood. Such associations have largely been attributed to stressful environments, e.g. exposure to trauma, abuse or neglect. However, mental health is substantially heritable, and genetic influences may contribute to the exposure to childhood adversity, resulting in potential genetic confounding of such associations.
Methods Here we explored associations between childhood adoption and mental health-related outcomes in mid-life in 243 797 UK Biobank participants (n adopted=3151). We used linkage disequilibrium score regression and polygenic risk scores for depressive symptoms, schizophrenia, neuroticism and subjective wellbeing to address potential genetic confounding (gene-environment correlations) and gene-environment interactions. As outcomes we explored depressive symptoms, bipolar disorder, neuroticism, loneliness, and mental health-related socioeconomic and psychosocial measures in adoptees compared to non-adopted participants.
Results Adoptees were slightly worse off on almost all mental, socioeconomic and psychosocial measures. Each standard deviation increase in polygenic risk for depressive symptoms, schizophrenia, and neuroticism was associated with 6%, 5%, and 6% increase in the odds of being adopted, respectively. Significant genetic correlations between adoption status and depressive symptoms, major depression, and schizophrenia were observed. No evidence for gene-environment interaction between genetic risk and adoption on mental health was found.
Conclusions The association between childhood adoption and mental health cannot fully be attributed to stressful environments, but is partly explained by differences in genetic risk between adoptees and those not-adopted (i.e. gene-environment correlation).
Keywords: gene-environment interplay; depressive symptoms; schizophrenia; neuroticism; childhood adversity; polygenic risk scores
Happiness is not best understood as an affective state, but better understood within its behavioral context, as an emergent property of activity
Imaging Happiness: Meta Analysis and Review. Joshua Ray Tanzer, Lisa Weyandt. Journal of Happiness Studies, October 31 2019. https://link.springer.com/article/10.1007/s10902-019-00195-7
Abstract: A challenge in studying happiness is its conceptual nature. Is the happiness of hedonistic indulgence the same as the happiness of selfless volunteering? To understand some of these questions, a narrative review of 64 neuroimaging studies between 1995 and 2018 was conducted. Studies were grouped based on how they conceptualized happiness based on Seligman’s (Authentic happiness: using the new positive psychology to realize your potential for lasting fulfillment, Simon and Schuster, New York, 2002) authentic happiness theory. A qualitative narrative review was performed as well as an ALE meta analysis of activation regions. Happiness was identified in 33 separate brain regions across the telencephalon, diencephalon, and metencephalon. Stratifying results by definition of happiness, regions of activity were generally relevant to the tasks performed during the experiment and the kinds of tasks enjoyed by the phenomenon of happiness examined. The ALE analysis identified the claustrum, insula, basal ganglia, and thalamus as showing meaningful activation clusters across studies. Happiness as pleasure and engagement demonstrated close relevance of neural activity to literal activities being performed. Tasks for happiness as meaning, on the other hand, were generally more abstract. Likewise, there was less direct relationship between behavior and phenomenon of happiness, the insula most likely to activate for happiness as meaning. It was concluded that happiness is not best understood as an affective state, but better understood within its behavioral context, as an emergent property of activity. For pleasure and engagement, this meant a literal relationship between behavioral and neurological activity. For meaning, this meant the ongoing assessment of the moral implications of events. Limitations included cross sectional design and hemodynamic focus. Future research should consider concordance of happiness and brain activity across the lifespan. 
Additionally, future studies should consider the dynamics of neuropeptides.
Keywords: Authentic happiness; Neuroimaging; Review
Fear‐containing dreams serve an emotion regulation function; the stronger the recruitment of fear‐responsive regions during dreaming, the weaker their response to actual fear‐eliciting stimuli during wakefulness
Fear in dreams and in wakefulness: Evidence for day/night affective homeostasis. Virginie Sterpenich et al. Human Brain Mapping, October 30 2019. https://doi.org/10.1002/hbm.24843
Abstract: Recent neuroscientific theories have proposed that emotions experienced in dreams contribute to the resolution of emotional distress and preparation for future affective reactions. We addressed one emerging prediction, namely that experiencing fear in dreams is associated with more adapted responses to threatening signals during wakefulness. Using a stepwise approach across two studies, we identified brain regions activated when experiencing fear in dreams and showed that frightening dreams modulated the response of these same regions to threatening stimuli during wakefulness. Specifically, in Study 1, we performed serial awakenings in 18 participants recorded throughout the night with high‐density electroencephalography (EEG) and asked them whether they experienced any fear in their dreams. Insula and midcingulate cortex activity increased for dreams containing fear. In Study 2, we tested 89 participants and found that those reporting higher incidence of fear in their dreams showed reduced emotional arousal and fMRI response to fear‐eliciting stimuli in the insula, amygdala and midcingulate cortex, while awake. Consistent with better emotion regulation processes, the same participants displayed increased medial prefrontal cortex activity. These findings support that emotions in dreams and wakefulness engage similar neural substrates, and substantiate a link between emotional processes occurring during sleep and emotional brain functions during wakefulness.
1 INTRODUCTION
Converging evidence from human and animal research suggests functional links between sleep and emotional processes (Boyce, Glasgow, Williams, & Adamantidis, 2016; Perogamvros & Schwartz, 2012; Wagner, Hallschmid, Rasch, & Born, 2006; Walker & van der Helm, 2009). Chronic sleep disruption can lead to increased aggressiveness (Kamphuis, Meerlo, Koolhaas, & Lancel, 2012) and negative mood states (Zohar, Tzischinsky, Epstein, & Lavie, 2005), whereas affective disorders such as depression and post‐traumatic stress disorder (PTSD) are frequently associated with sleep abnormalities (e.g., insomnia and nightmares). Experimental evidence indicates that acute sleep deprivation impairs the prefrontal control over limbic regions during wakefulness, hence, exacerbating emotional responses to negative stimuli (Yoo, Gujar, Hu, Jolesz, & Walker, 2007). Neuroimaging and intracranial data further established that, during human sleep, emotional limbic networks are activated (e.g., Braun et al., 1997; Corsi‐Cabrera et al., 2016; Maquet et al., 1996; Nofzinger, Mintun, Wiseman, Kupfer, & Moore, 1997; Schabus et al., 2007). Together these findings indicate that sleep physiology may offer a permissive condition for affective information to be reprocessed and reorganized. Yet, it remains unsettled whether such emotion regulation processes also happen at the subjective, experiential level during sleep, and may be expressed in dreams. Several influential theoretical models formalized this idea. For example, the threat simulation theory postulated that dreaming may fulfill a neurobiological function by allowing an offline simulation of threatening events and rehearsal of threat‐avoidance skills, through the activation of a fear‐related amygdalocortical network (Revonsuo, 2000; Valli et al., 2005). Such mechanism would promote adapted behavioral responses in real life situations (Valli & Revonsuo, 2009). 
By contrast, other models suggested that dreaming would facilitate the resolution of current emotional conflict (Cartwright, Agargun, Kirkby, & Friedman, 2006; Cartwright, Luten, Young, Mercer, & Bears, 1998), the reduction of next‐day negative mood (Schredl, 2010) and extinction learning (Nielsen & Levin, 2007). Although these two main theoretical lines differ, because one focuses on the optimization of waking affective reactions (Perogamvros & Schwartz, 2012; Revonsuo, 2000) and the other on the resolution of current emotional distress (e.g., fear extinction; Nielsen & Levin, 2007), both converge to suggest that experiencing fear in dreams leads to more adapted responses to threatening signals during wakefulness (Scarpelli, Bartolacci, D'Atri, Gorgoni, & De Gennaro, 2019). The proposed mechanism is that memories from a person's affective history are replayed in the virtual and safe environment of the dream so that they can be reorganized (Nielsen & Levin, 2007; Perogamvros & Schwartz, 2012). From a neuroscience perspective, one key premise of these theoretical models is that experiencing emotions in dreams implicates the same brain circuits as in wakefulness (Hobson & Pace‐Schott, 2002; Schwartz, 2003). Preliminary evidence from two anatomical investigations showed that impaired structural integrity of the left amygdala was associated with reduced emotional intensity in dreams (Blake, Terburg, Balchin, van Honk, & Solms, 2019; De Gennaro et al., 2011).
Like during wakefulness, people experience a large variety of emotions in their dreams, with rapid eye movement (REM) dreaming being usually more emotionally loaded than non‐rapid eye movement (NREM) dreams (Carr & Nielsen, 2015; Smith et al., 2004). While some studies found a relative predominance of negative emotions, such as fear and anxiety, in dreams (Merritt, Stickgold, Pace‐Schott, Williams, & Hobson, 1994; Roussy et al., 2000), others reported a balance of positive and negative emotions (Schredl & Doll, 1998), or found that joy and emotions related to approach behaviors may prevail (Fosse, Stickgold, & Hobson, 2001; Malcolm‐Smith, Koopowitz, Pantelis, & Solms, 2012). When performing a lexicostatistical analysis of large data sets of dream reports, a clear dissociation between dreams containing basic, mostly fear‐related, emotions and those with other more social emotions (e.g., embarrassment, excitement, frustration) was found, highlighting distinct affective modes operating during dreaming, with fear in dreams representing a prevalent and biologically‐relevant emotional category (Revonsuo, 2000; Schwartz, 2004). Thus, if fear‐containing dreams serve an emotion regulation function, as hypothesized by the theoretical models, the stronger the recruitment of fear‐responsive brain regions (e.g., amygdala, cingulate cortex, and insula; see Phan, Wager, Taylor, & Liberzon, 2002) during dreaming, the weaker the response of these same regions to actual fear‐eliciting stimuli during wakefulness should be. This compensatory or homeostatic mechanism may also be accompanied by an enhanced recruitment of emotion regulation brain regions (such as the medial prefrontal cortex, mPFC, which is implicated in fear extinction) during wakefulness (Dunsmoor et al., 2019; Phelps, Delgado, Nearing, & LeDoux, 2004; Quirk, Likhtik, Pelletier, & Pare, 2003; Yoo et al., 2007).
Here, we collected dream reports and functional brain measures using high‐density EEG (hdEEG) and functional MRI (fMRI) across two studies to address the following questions: (a) do emotions in dreams (here fear‐related emotions) engage the same neural circuits as during wakefulness and (b) is there a link between emotions experienced in dreams and brain responses to emotional stimuli during wakefulness. By addressing these fundamental and complementary topics, we aim at clarifying the grounding conditions for the study of dreaming as pertaining to day/night affective homeostasis.
Friday, November 1, 2019
Are atheists unprejudiced? Forms of nonbelief and prejudice toward antiliberal and mainstream religious groups
Uzarevic, F., Saroglou, V., & Muñoz-García, A. (2019). Are atheists unprejudiced? Forms of nonbelief and prejudice toward antiliberal and mainstream religious groups. Psychology of Religion and Spirituality, Oct 2019, http://dx.doi.org/10.1037/rel0000247
Abstract: Building on the ideological-conflict hypothesis, we argue that, beyond the religion–prejudice association, there should exist an irreligion–prejudice association toward groups perceived as actively opposing the values of nonbelievers (antiliberal targets) or even as simply being ideologically different: religionists of mainstream religions. Collecting data from three secularized Western European countries (total N = 1,158), we found that, though both believers and nonbelievers disliked moral and religious antiliberals (antigay activists and fundamentalists), atheists and agnostics showed prejudicial discriminatory attitudes toward antiliberals, but also toward mere Christians, and atheists did so also toward Buddhists. Prejudice toward antiliberal and mainstream religious targets was predicted uniquely by antireligious critique, occasionally in addition to high existential quest for the antiliberal targets, but in addition to low existential quest and low belief in the world’s benevolence for mainstream religionists. Future studies should determine whether the effects are similar, more pronounced, or attenuated in very religious societies.
---
Check also Are atheists undogmatic? Filip Uzarevic, Vassilis Saroglou, Magali Clobert. Personality and Individual Differences 116:164-170, October 2017. DOI: 10.1016/j.paid.2017.04.046
Are nonreligious people open-minded, flexible, and undogmatic? Previous research has investigated the links between religiosity, or specific forms of it, and social cognitive tendencies reflecting various aspects of closed-mindedness. The results regarding religious fundamentalism are clear and consistent (Rowatt, Shen, LaBouff, & Gonzalez, 2013). However, even common religiosity, that is being high vs. low on common religious attitudes, beliefs, and practices, often reflects closed-minded ways of thinking to some extent.
Indeed, religiosity is, to a modest degree, characterized by dogmatism, defined as an inflexibility of ideas, unjustified certainty or denial of evidence contrary to one's own beliefs (Moore & Leach, 2016; Vonk & Pitzen, 2016), the need for closure, i.e. the need for structure, order, and answers (Saroglou, 2002), and, in terms of broader personality traits, low openness to experience, in particular low openness to values (Saroglou, 2010). Experimental work provides some causal evidence that religious beliefs increase when people are confronted with disorder, ambiguity, uncertainty, a lack of control, or a threat to self-esteem (Sedikides & Gebauer, 2014). Not surprisingly, thus, religiosity, though to a lesser extent and less consistently than fundamentalism, is often found to predict prejudice. This is certainly the case against moral (e.g., gay persons) and religious outgroups and atheists, but also against ethnic or racial outgroups, at least in monotheistic religious contexts (see Clobert, Saroglou, & Hwang, 2017, for limitations in the East) and when prejudice against a specific target is not explicitly socially/religiously prohibited (Batson, Schoenrade, & Ventis, 1993; Ng & Gervais, 2017; Rowatt, Carpenter, & Haggard, 2014).
From this line of research, it is often concluded that non-believers tend to be undogmatic, flexible, open-minded, and unprejudiced, or, to phrase it reversely, express closed-minded tendencies to a lesser degree than religious believers (Streib & Klein, 2013; Zuckerman, Galen, & Pasquale, 2016). Beyond the above-mentioned evidence, which has typically been derived from analyses in which religiosity is treated as a continuum, thus assuming linearity from the low to the high end of the religiosity continuum, sociological work based on comparisons between groups who provide self-identification in terms of conviction/affiliation also suggests that atheists are indeed the lowest in the above-mentioned kinds of prejudice (Norris & Inglehart, 2004).
Can psychological research thus clearly and unambiguously affirm that atheists are undogmatic and flexible, at least to a greater degree than their religious peers? We argue that such a conclusion is premature. In the present work, we investigate specific domains of cognition where non-believers may show higher inflexibility in thinking, at least in secularized cultural contexts like those in Western Europe. We also examine whether the above holds for all non-religious persons (for brevity hereafter: non-believers) or only for the subtype who self-identify as atheists. Finally, we will examine the above questions using both self-reported and implicit measures of closed-mindedness. Below, we will first develop our rationale and then detail the study expectations.
Abstract: Previous theory and evidence favor the idea that religious people tend to be dogmatic to some extent whereas non-religious people are undogmatic: the former firmly hold beliefs, some of which are implausible or even contrary to the real world evidence. We conducted a further critical investigation of this idea, distinguishing three aspects of rigidity: (1) self-reported dogmatism, defined as unjustified certainty vs. not standing for any beliefs, (2) intolerance of contradiction, measured through (low) endorsement of contradictory statements, and (3) low readiness to take a different from one's own perspective, measured through the myside bias technique. Non-believers, at least in Western countries where irreligion has become normative, should be lower on the first, but higher on the other two constructs. Data collected from three countries (UK, France, and Spain, total N = 788) and comparisons between Christians, atheists, and agnostics confirmed the expectations, with agnostics being overall similar to atheists.
1. Introduction
Are nonreligious people open-minded, flexible, and undogmatic? Previous research has investigated the links between religiosity, or specific forms of it, and social cognitive tendencies reflecting various aspects of closed-mindedness. The results regarding religious fundamentalism are clear and consistent (Rowatt, Shen, LaBouff, & Gonzalez, 2013). However, even common religiosity, that is, being high vs. low on common religious attitudes, beliefs, and practices, often reflects closed-minded ways of thinking to some extent.
Indeed, religiosity is, to a modest degree, characterized by dogmatism, defined as an inflexibility of ideas, unjustified certainty, or denial of evidence contrary to one's own beliefs (Moore & Leach, 2016; Vonk & Pitzen, 2016), the need for closure, i.e., the need for structure, order, and answers (Saroglou, 2002), and, in terms of broader personality traits, low openness to experience, in particular low openness to values (Saroglou, 2010). Experimental work provides some causal evidence that religious beliefs increase when people are confronted with disorder, ambiguity, uncertainty, a lack of control, or a threat to self-esteem (Sedikides & Gebauer, 2014). Not surprisingly, then, religiosity, though to a lesser extent and less consistently than fundamentalism, is often found to predict prejudice. This is certainly the case against moral (e.g., gay persons) and religious outgroups and atheists, but also against ethnic or racial outgroups, at least in monotheistic religious contexts (see Clobert, Saroglou, & Hwang, 2017, for limitations in the East) and when prejudice against a specific target is not explicitly socially/religiously prohibited (Batson, Schoenrade, & Ventis, 1993; Ng & Gervais, 2017; Rowatt, Carpenter, & Haggard, 2014).
From this line of research, it is often concluded that non-believers tend to be undogmatic, flexible, open-minded, and unprejudiced, or, to phrase it reversely, express closed-minded tendencies to a lesser degree than religious believers (Streib & Klein, 2013; Zuckerman, Galen, & Pasquale, 2016). Beyond the above-mentioned evidence, which has typically been derived from analyses in which religiosity is treated as a continuum, thus assuming linearity from the low to the high end of the religiosity continuum, sociological work based on comparisons between groups who provide self-identification in terms of conviction/affiliation also suggests that atheists are indeed the lowest in the above-mentioned kinds of prejudice (Norris & Inglehart, 2004).
Can psychological research thus clearly and unambiguously affirm that atheists are undogmatic and flexible, at least to a greater degree than their religious peers? We argue that such a conclusion is premature. In the present work, we investigate specific domains of cognition where non-believers may show higher inflexibility in thinking, at least in secularized cultural contexts like those in Western Europe. We also examine whether the above holds for all non-religious persons (for brevity hereafter: non-believers) or only for the subtype who self-identify as atheists. Finally, we will examine the above questions using both self-reported and implicit measures of closed-mindedness. Below, we will first develop our rationale and then detail the study expectations.