Who makes a good citizen? The role of personality. Scott Pruysers, Julie Blais, Phillip G. Chen. Personality and Individual Differences. Volume 146, 1 August 2019, Pages 99-104. https://doi.org/10.1016/j.paid.2019.04.007
Abstract: In this paper we explore the link between personality and attitudes towards good citizenship and civic duty. To do so we recruited 371 eligible Canadian voters from a national panel, asking a variety of questions about their level of political participation as well as attitudinal questions about the importance of a number of behaviors typically associated with good citizenship (e.g., voting, paying taxes, staying informed). Importantly, we included two batteries of personality items: the HEXACO, which covers general personality (Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience), and the Dark Triad (psychopathy, narcissism, and Machiavellianism). The analysis reveals a consistent and important explanatory role for personality, even after controlling for standard explanatory factors such as age, gender, income, education, political interest, knowledge, efficacy, and placement on the left-right scale. Among other findings, we document a positive relationship between the endorsement of good citizenship and narcissism, and a negative relationship for psychopathy.
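For readers who want to see the shape of such an analysis, here is a minimal sketch of an OLS regression of a citizenship-norms index on HEXACO and Dark Triad scores plus the controls listed above. All variable names and the data file are hypothetical; the authors' exact model specification is not reproduced here.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sketch: regress a good-citizenship index on personality scores
# plus standard controls. Column names and the CSV file are placeholders.
df = pd.read_csv("citizenship_survey.csv")  # hypothetical data set (n = 371)

model = smf.ols(
    "citizenship_index ~ honesty_humility + emotionality + extraversion"
    " + agreeableness + conscientiousness + openness"
    " + psychopathy + narcissism + machiavellianism"
    " + age + C(gender) + income + education"
    " + political_interest + knowledge + efficacy + left_right",
    data=df,
).fit()
print(model.summary())  # look for the signs on narcissism (+) and psychopathy (-)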
Thursday, April 11, 2019
Prejudiced and unaware of it: Evidence for the Dunning-Kruger model in the domains of racism and sexism
Prejudiced and unaware of it: Evidence for the Dunning-Kruger model in the domains of racism and sexism. Keon West, Asia A.Eaton. Personality and Individual Differences, Volume 146, 1 August 2019, Pages 111-119. https://doi.org/10.1016/j.paid.2019.03.047
Abstract: Prior research, and high-profile contemporary examples, show that individuals tend to underestimate their own levels of bias. This underestimation is partially explained by motivational factors. However, (meta-)cognitive factors may also be involved. Conceptualising contemporary egalitarianism as a type of skill or competence, this research proposed that egalitarianism should conform to the Dunning-Kruger model. That is, individuals should overestimate their own ability, and this overestimation should be strongest in the least competent individuals. Furthermore, training should improve metacognition and reduce this overestimation. Two studies, on racism (N = 148) and sexism (N = 159), partially supported these hypotheses. In line with the Dunning-Kruger model, participants overestimated their levels of racial and gender-based egalitarianism, and this pattern was strongest among the most prejudiced participants. However, diversity training did not affect participants' overestimation of their egalitarianism. Implications for contemporary prejudice and prejudice-reducing strategies are discussed.
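The core Dunning-Kruger comparison is simple to express. A minimal sketch on simulated data, not the study's: measured egalitarianism is converted to percentiles and compared with self-estimated percentiles, with overestimation broken out by quartile of the measured score. (The study's actual self-estimate measure may have differed.)

import numpy as np
from scipy import stats

# Illustrative sketch of a Dunning-Kruger analysis on simulated data.
rng = np.random.default_rng(0)
n = 150
measured = rng.normal(size=n)                        # egalitarianism test score
measured_pct = stats.rankdata(measured) / n * 100    # actual percentile standing

# Simulated self-estimates: everyone guesses near the 60th percentile, with only
# a weak link to actual standing -- the classic Dunning-Kruger pattern.
self_pct = np.clip(60 + 0.2 * (measured_pct - 50) + rng.normal(0, 15, n), 0, 100)

overestimation = self_pct - measured_pct
quartile = np.digitize(measured_pct, [25, 50, 75])   # 0 = least egalitarian quartile
for q in range(4):
    mean_over = overestimation[quartile == q].mean()
    print(f"quartile {q + 1}: mean overestimation = {mean_over:+.1f} percentile points")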
Joint attention skills in wild Arabian babblers (Turdoides squamiceps): a consequence of cooperative breeding?
Yitzchak Ben Mocha et al. Joint attention skills in wild Arabian babblers (Turdoides squamiceps): a consequence of cooperative breeding?, Proceedings of the Royal Society B: Biological Sciences (2019). Apr 3 2019. DOI: 10.1098/rspb.2019.0147
Abstract
Human cooperation strongly relies on the ability of interlocutors to coordinate each other's attentional state: joint attention. One predominant hypothesis postulates that this hallmark of the unique cognitive system of humans evolved due to the combination of an ape-like cognitive system and the prosocial motives that facilitate cooperative breeding. Here, we tested this hypothesis by investigating communicative interactions of a cooperatively breeding bird species, the Arabian babbler (Turdoides squamiceps). The behaviour of 12 wild social groups was observed focusing on two distinct communicative behaviours: object presentation and babbler walk. The results showed that both behaviours fulfilled the criteria for first-order intentional communication and involved co-orientation of recipients' attention. In turn, recipients responded with cooperative and communicative acts that resulted in coordinated joint travel between interlocutors. These findings provide the first evidence that another animal species shows several key criteria traditionally used to infer joint attention in prelinguistic human infants. Furthermore, they emphasize the influence of cooperative breeding on sophisticated socio-cognitive performances, while questioning the necessity of an ape-like cognitive system underlying joint attentional behaviour.
1. Introduction
The extraordinary degree of cooperation exhibited by humans seems unrivalled in the animal kingdom [1,2]. Theorists have implicated a specific cognitive capacity, joint attention, as one of the essential building blocks for the evolution of the cooperative abilities of humans [1,3]. Traditionally, joint attention has been defined as the ability to attract and coordinate the attention of a recipient towards a locus of mutual interest (e.g. an object/event in the environment [4]). Precursors of this cognitive capacity can already be found in human infants at the age of approximately six months, who respond to joint attention by following the direction of a caretaker's gesture [5]. At the age of 9–12 months, infants are capable of initiating joint attention with a social partner by gesturing towards a locus of mutual interest [4]. Consequently, the development of joint attention skills is seen as a fundamental milestone in the ontogeny of human cooperative communication [5,6]. Some scholars even see joint attention as the ‘small difference that made a big difference’ in the cognitive evolution of our species [3].
One predominant hypothesis about the evolution of human sophisticated social cognition, the ‘cooperative breeding’ hypothesis [2], postulates that cooperative breeding installed prosocial psychological functioning supporting systematic alloparental care. This, in turn, resulted in immediate consequences for socio-cognitive performance [7]: Caretakers of cooperative breeders evolved specific cognitive capacities for understanding the needs of others' offspring. In parallel, the offspring developed elaborate communicative signalling to attract attention and care from caretakers other than their mothers [8]. Concerning humans, it has been argued that the exceptional co-occurrence of an existing ape-like cognitive system together with humans’ systematic reliance on alloparental care [9,10] gave rise to unique socio-cognitive capacities such as joint attention (figure 1) [2,7].
...
However, are joint attentional skills indeed uniquely human and is the combination of an ape-like cognitive system and cooperative breeding a crucial prerequisite?
To date, relatively little is known about joint attentional skills and the acting selection pressures in other animal species. Furthermore, whether specific behaviours of non-human animals towards human caretakers (e.g. gaze alternation in chimpanzees, Pan troglodytes [11]; pointing in bottlenose dolphins, Tursiops truncatus [12]; and vocal learning in grey parrots, Psittacus erithacus [13]) qualify as joint attentional skills in the human sense is subject to a contentious debate [3,14,15].
Here, we revisited the claim that joint attentional skills are a uniquely human ability [3,15] and tested whether the combination of an ape-like cognitive system and cooperative breeding represents a necessary requirement for joint attention to unfold [2]. We predicted that if an ape-like cognitive system and cooperative breeding are both necessary for joint attention [2,7], bird species will not exhibit key hallmarks of this trait. However, if an ape-like cognitive system is not necessary, but cooperative breeding does facilitate the performance of joint attention skills [16], cooperatively breeding species will demonstrate hallmarks of joint attention. To test these predictions, we investigated communicative interactions in a cooperatively breeding bird species, the Arabian babbler (Turdoides squamiceps), in the wild.
Arabian babblers live in stable social groups [17], consisting of 2–20 kin and non-kin from both sexes [17]. All members provide substantial alloparental care and use elaborate communicative signalling [17]. We focused on two distinct signals, ‘object presentation’ [18] and ‘babbler walk’, which are frequently used to solicit following behaviour from conspecifics. Object presentation involves the discreet presentation of an object to attract the recipient's attention without being seen by other group members. The goal is to lead the recipient towards a hidden location for copulation [18] (see electronic supplementary material, video S1). Babbler walk is a multi-modal signal that involves conspicuous wing waving and vocalizations, and is used to solicit a conspecific to follow the signaller (figure 2).
...
We paid special attention to two distinct issues that have hampered comparative research on joint attention. First, disagreement on an applicable definition of joint attention has so far prevented valid quantitative comparisons between human and other animal species [19–21]. For example, some define joint attention as the ‘intentional co-orientation of two or more organisms to the same locus’ [21]. Others (e.g. [3]), by contrast, require complex mind reading and define it as ‘the mutual awareness of having attended to the same entity between two (or more) individuals. Mutual awareness is established through communication by at least one individual during mutual gaze’ [20]. To overcome this lack of consensus, we refrained from testing whether communicative episodes between Arabian babblers fit a specific definition. Instead, we investigated whether communicative interactions that were initiated by object presentation and babbler walk fulfilled distinct hallmarks that have traditionally been used to infer joint attention in prelinguistic human infants: (i) intentional communication (e.g. [21,22]), (ii) co-orientation of attention (e.g. [4,15]) and (iii) mutual awareness (e.g. [20,23]). By applying this approach, we aimed to instigate a constructive discussion on joint attentional skills that provides useful tools to pinpoint the different degrees of joint attention and cognitive capacities involved [19].
Second, research on joint attention in non-human species has been strongly biased towards interactions with objects (e.g. [12,20]). However, joint attention can revolve around any type of locus [15] such as, for instance, the interlocutors themselves, or the activity they are performing at the time [4,24]. Hence, we examined whether signallers acted to attract and co-orient recipients' attention to their joint travel.
We thus investigated whether communicative episodes initiated by object presentation and babbler walk fulfil the following three key hallmarks of joint attention (see table 1 and methods for detailed operational criteria).
Check also Yitzchak Ben Mocha et al. Intentional Presentation of Objects in Cooperatively Breeding Arabian Babblers (Turdoides squamiceps), Frontiers in Ecology and Evolution (2019). DOI: 10.3389/fevo.2019.00087
Popular: Attention skills in a nonhuman cooperative breeding species. Max Planck Society Press Article, Apr 11 2019. https://phys.org/news/2019-04-attention-skills-nonhuman-cooperative-species.html
Speaking like a Man: Women’s Pitch as a Cue for Gender Stereotyping -- Women’s average voice pitch has decreased in recent years, narrowing the gap between men and women on this vocal dimension
Speaking like a Man: Women’s Pitch as a Cue for Gender Stereotyping. Barbara KrahĂ©, Lida Papakonstantinou. Sex Roles, April 11 2019. https://link.springer.com/article/10.1007/s11199-019-01041-z
Abstract: Women’s average voice pitch has decreased in recent years, narrowing the gap between men and women on this vocal dimension. The present study examined whether a woman speaking at a lower pitch would be perceived as less feminine and more masculine than a woman speaking at a higher pitch. Participants (n = 100, 67 female) listened to an audiotape of a woman in which her natural voice was manipulated to represent a pitch of either 220 Hz or 165 Hz. They then rated her on positive and negative facets of masculinity and femininity as well as competence and likeability. In addition, participants’ gendered self-concept was measured to examine potential moderator effects. As predicted, positive masculinity ratings were significantly higher, and positive and negative femininity ratings were significantly lower, in the 165 Hz than in the 220 Hz condition. The woman was also rated as more likeable in the 220 Hz than in the 165 Hz condition. No difference was found for negative masculinity and competence ratings, and no moderation effect of participants’ gendered self-concept emerged. The findings suggest that lower voice pitch is a masculinity cue that elicits stereotyped perceptions of female speakers and may have implications for impression formation in a variety of domains.
Keywords: Gender stereotypes; Voice pitch; Masculinity; Femininity; Likeability
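For the curious, the size of the manipulation is easy to compute: shifting 220 Hz down to 165 Hz is 12 * log2(165/220), about -4.98 semitones, roughly a perfect fourth. Below is a sketch of how such a shift could be produced with librosa; the paper does not state which audio software was actually used, and the file name is a placeholder.

import math
import librosa

# Back-of-envelope check of the pitch manipulation.
n_steps = 12 * math.log2(165 / 220)   # about -4.98 semitones (a perfect fourth down)
print(f"shift = {n_steps:.2f} semitones")

# Hypothetical way to produce the low-pitch stimulus from the original recording.
y, sr = librosa.load("speaker.wav")   # placeholder recording of the speaker
y_low = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)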
Hungry People Prefer Larger Bodies and Objects: The Importance of Testing Boundary Effects
Saxton, Tamsin, Kristofor McCarty, Jasmine Caizley, Dane McCarrick, and Thomas V. Pollet. 2019. “Hungry People Prefer Larger Bodies and Objects: The Importance of Testing Boundary Effects.” PsyArXiv. April 11. doi:10.31234/osf.io/s7ta8
Abstract: Several experimental studies have indicated that when people are hungry, they assess larger women’s bodies as more attractive, compared to when they are satiated. These satiety-dependent judgements are assumed to contribute to the noted cross-cultural differences in attitudes towards women’s adiposity. However, it is premature to make this assumption until satiety-dependent judgements of stimuli other than female bodies have also been tested. Accordingly, we collected attractiveness judgements of female and male bodies manipulated to vary in size by varying level of adiposity, and objects manipulated to vary in size, from 186 participants who also reported their current hunger level. We found that under conditions of hunger, larger stimuli in general, and larger women’s bodies in particular (especially when judged by women), were judged as more attractive. We discuss these patterns in the context of the Insurance Hypothesis, the Environmental Security Hypothesis, and the impact of hunger on acquisition.
From 2013... Do not end the filibuster: Because narrow Senate majorities often represent only a minority of Americans, many filibusters are not at odds with majority rule at all
The Majoritarian Filibuster. Benjamin Eidelson. Yale Law Journal (2013), Volume 122, Issue 4. https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=5549&context=ylj
ABSTRACT: The debate over the Senate filibuster revolves around its apparent conflict with the principle of majority rule. Because narrow Senate majorities often represent only a minority of Americans, however, many filibusters are not at odds with majority rule at all. By paying attention to such "majoritarian filibusters," this Note aims to disrupt the terms of the traditional debate and open up a new space for potential compromise. This Note reports the first empirical study of the majoritarian or countermajoritarian character of recent filibusters. These data reveal that, in half of the Congresses over the past two decades, successful filibustering minorities usually represented more people than the majorities they defeated. The choice whether to preserve the filibuster therefore cannot be reduced to a simple choice between majority rule and minority rights. After exploring the distribution of majoritarian and countermajoritarian filibusters along other dimensions of interest, this Note proposes that the majority-rule principle might be better served by simply reducing the sixty-vote cloture threshold, thereby shifting the balance toward majoritarian as opposed to countermajoritarian filibusters, than by abolishing the filibuster altogether.
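The Note's basic accounting can be sketched in a few lines. Under the common convention of crediting each senator with half of his or her state's population, a numerical minority of senators can represent more people than the majority it blocks. A toy illustration (state populations in millions, circa 2013; the Note's exact apportionment rules may differ):

def population_represented(senator_states, state_pop):
    # Credit each senator with half of his or her state's population.
    return sum(state_pop[s] / 2.0 for s in senator_states)

state_pop = {"CA": 38.3, "TX": 26.4, "NY": 19.7,   # millions, circa 2013
             "WY": 0.58, "VT": 0.63, "ND": 0.72}

# Toy twelve-senator chamber (two per state): 5 senators filibuster, 7 vote yes.
minority = ["CA", "CA", "TX", "TX", "NY"]
majority = ["NY", "WY", "WY", "VT", "VT", "ND", "ND"]

print(population_represented(minority, state_pop))  # ~74.6 million represented
print(population_represented(majority, state_pop))  # ~11.8 million represented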
Even in the rare cases where smartphones might alter cognition, this effect is likely transitory
An examination of the potential lingering effects of smartphone use on cognition. Peter Frost et al. Applied Cognitive Psychology, March 18 2019. https://doi.org/10.1002/acp.3546
Summary
Smartphones might offer an extension of our own cognitive abilities, potentially preventing practice of certain forms of cognition. Our first study established that heavier usage of smartphones was negatively correlated with social problem solving and delayed gratification, as well as positively correlated with some aspects of critical thinking. Studies 2 and 3 involved experiments where participants were assigned to either a lower or higher smartphone usage group. In both experiments, higher usage of smartphones led only to a diminished ability to interpret and analyze the deeper meaning of information. However, Study 3 showed that, after a 4‐week interval, the difference in the ability to interpret and analyze meaning between lower and higher phone usage groups was no longer evident. The findings of this study suggest that, even in the rare cases where smartphones might alter cognition, this effect is likely transitory.
1 INTRODUCTION
In his book, “The Shallows,” Carr (2010) suggests the following when it comes to how technology might be changing our cognition:
Over the past few years I've had the uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going—so far as I can tell—but it's changing. I'm not thinking the way I used to think. (Carr, 2010, p. 5)
Other popular media and books portray smartphones, social media, and other technology as diminishing cognitive capacities, like attention and concentration (Bauerlein, 2011).
Smartphone technology, in particular, is taking on an increasingly larger role in our daily mental activities. Pew Research data show that 77% of all U.S. adults had a smartphone in 2018, up from 35% in 2011 (Pew Research Center, 2018). The Pew Center also indicated that, in 2018, 94% of people from age 18 to 29 owned a smartphone. While access to smartphone technology has grown, so too has the body of literature investigating the concurrent and immediate residual inhibitory effects of these devices on certain forms of cognition, particularly attention (Strayer, Cooper, Turrill, Coleman, & Hopman, 2015; Strayer & Drews, 2007). Our study, however, focuses more on the potential for longer term effects these devices might exert on higher order cognition, including the ability to think critically, problem solve, reason, and engage in executive control. Some researchers acknowledge that consideration of the intersection between cognition and technology is an understudied area in psychology (Pennycook, Fugelsang, & Koehler, 2015; Wilmer, Sherman, & Chein, 2017).
If we consider the neuroplasticity of the brain—its ability to physically alter neural connectivity on the basis of changes in the environment and experience—could smartphone technology, which provides immediate and 24/7 access to the internet, be altering our cognitive abilities? Underlying this debate is the neuropsychological phenomenon of “use‐it‐or‐lose it,” demonstrated by neuroimaging and physiological studies that show new experiences strengthen synaptic connections in areas of the brain frequently used while weakening connections that are rarely used (Carr, 2010). Reliance on mobile phone technology as a form of “extended cognition” (Barr, Pennycook, Stolz, & Fugelsang, 2015; Clayton, Leshner, & Almond, 2015) or “ibrain” (Small & Vorgan, 2008) might be leading to changes occurring in the brain that alter our ability to engage in various cognitive processes. Might easy access to information take away from practice of more deliberative thoughts and critical thinking skills? Moreover, might the constant notifications and cues provided by mobile phones be interrupting our ability to engage in higher cognition and/or delay gratification for longer term goals?
Some studies, such as that by Barr et al. (2015), have examined whether there may be a correlation between analytical thinking and smartphone technology. Barr and colleagues found that people who relied more heavily on smartphones and information sources performed worse on analytic‐thinking tasks. In a separate review article, Pennycook et al. (2015) acknowledged that this preliminary research suggests that smartphones may serve as a “second brain” that allows people to offload thinking.
Some studies show that smartphones might influence other forms of higher order cognitive functions, such as executive function and the ability to delay gratification. The ability to resist temptation in favor of long‐term goals has important implications for individual, societal, and economic success (Casey et al., 2011). Delay of gratification is also linked to cognitive control abilities in children and adults (Casey et al., 2011; Metcalfe & Mischel, 1999).
Wilmer and Chein (2016) showed that higher scores on a self‐report measure of mobile technology engagement were correlated with a weaker tendency to delay gratification, as measured by a delay discounting task. Their self‐report questionnaire measured (a) daily usage of social media apps, (b) frequency of posting public status updates, and (c) phone‐checking behavior. Wilmer and Chein found a negative correlation between mobile device engagement and delay of gratification. They also found that this relationship might be specifically mediated by individual differences in impulse control and not reward sensitivity. The correlational nature of this study limited inferences about causation, but did show an interesting association between mobile device usage and habits associated with delay of gratification.
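Delay discounting tasks of this kind are typically scored by fitting a discount rate to a set of indifference points; a common choice is Mazur's hyperbolic model, V = A / (1 + kD). A minimal sketch with made-up indifference points follows; this is a standard scoring approach, not necessarily the exact one Wilmer and Chein used.

import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay_days, k):
    # Subjective value of a delayed $100 under hyperbolic discounting.
    return 100.0 / (1.0 + k * delay_days)

delays = np.array([1.0, 7.0, 30.0, 90.0, 365.0])
indifference = np.array([95.0, 85.0, 60.0, 40.0, 20.0])   # illustrative data

# Higher fitted k = steeper discounting = weaker tendency to delay gratification.
(k_hat,), _ = curve_fit(hyperbolic, delays, indifference, p0=[0.01])
print(f"fitted discount rate k = {k_hat:.4f} per day")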
There is also evidence that the use of smartphones is correlated with some subjective reports associated with attention. Marty‐Dugas, Ralph, Oakman, and Smilek (2018) found that the results of a self‐report that assessed attention lapses, attention‐related errors, and mind wandering were positively related to the results of a self‐report that assessed the extent of smartphone use. These researchers found the correlation was particularly strong for participants who used smartphones without a specific purpose in mind (for what they referred to as “absent‐minded” usage). Even the mere presence of smartphones can interrupt attention and task performance (Thornton, Faires, Robbins, & Rollins, 2014). These studies show a relationship between concurrent smartphone use and attention. What about lingering effects on attention?
There is research showing that the multitasking aspect of smartphone technology habits, particularly for purposes of using social media, might relate to longer term effects on control of attention and concentration. Ophir, Clifford, and Wagner (2009) had participants engage in a number classification task (even or odd?) or letter classification task (vowel or consonant?). They then occasionally switched tasks. They calculated a “switch cost” as the difference in mean response time between trials preceded by a trial of the other type (switch trials) minus trials preceded by a trial of the same type (nonswitch trials). The switch cost for heavy social media multitaskers was 167 ms greater than that of low social media multitaskers. Heavy social media multitaskers performed worse on a test of task switching, perhaps—they speculated—because of reduced ability to filter out interference from irrelevant representations.
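The switch-cost measure described above reduces to a few lines of bookkeeping over the trial sequence. A minimal sketch with invented trial records:

import statistics

# Each trial: (task, reaction time in ms). The sequence is invented for illustration.
trials = [("number", 612), ("number", 598), ("letter", 804),
          ("letter", 590), ("number", 815), ("number", 601)]

# Switch cost = mean RT on trials preceded by the other task type,
# minus mean RT on trials preceded by the same task type.
rt_switch, rt_nonswitch = [], []
for (prev_task, _), (cur_task, rt) in zip(trials, trials[1:]):
    (rt_switch if cur_task != prev_task else rt_nonswitch).append(rt)

switch_cost = statistics.mean(rt_switch) - statistics.mean(rt_nonswitch)
print(f"switch cost = {switch_cost:.0f} ms")   # ~213 ms in this toy sequence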
Relevant to smartphone usage for social networking purposes, studies show a positive association between heavier usage of social networking sites and diminished cognitive control or lack of judgment (Alloway & Alloway, 2012; Cao, Masood, Luqman, & Ali, 2018; Chen & Kim, 2013). Chen and Kim (2013) found that, if heavier users of social networking sites went to them for entertainment or pleasure, those desires overrode privacy concerns, such as unauthorized secondary use and improper access. Heavier usage was also associated with greater problematic social networking behaviors. Chen and Kim identified problematic social networking use based on Young's (1998) 20‐item version of problematic Internet use, and items were adjusted to be more relevant to the context of social networking. For example, respondents were asked if they found themselves staying on social networking sites longer than they intended, and if they feared that life without social networking sites would be empty and joyless. They suggest that problematic use might be a compulsivity control issue that cannot be overridden by recognition of potential long‐term exposure to harm. Again, though, a causal link was not established in this particular study. It is unclear whether heavy usage of social networking attracts people with less cognitive control or if social networking may be contributing to a lack of cognitive control. Some of these studies also emphasize the most extreme cases of heavy technology usage.
Some studies also find a negative association between the use of mobile phone technology and working memory (Abramson et al., 2009). Abramson et al. (2009) administered a questionnaire about mobile phone usage along with a battery of cognitive tests, and found that the more frequently calls were made per week, the poorer working memory accuracy was and the shorter the reaction time was on a simple learning task.
Some studies have also explored whether the frequent interruptions from smartphones influence social reasoning and learning. Reed, Hirsh‐Pasek, and Golinkoff (2017) conducted an experiment in which phone interruptions that occurred while a mother was attempting to teach her 2‐year‐old novel words led to poorer language acquisition. Though these studies show a concurrent effect of smartphone interruptions on social reasoning and learning, few studies have explored lingering effects of smartphone use on social cognition. Other studies show evidence that reliance on smartphone use for social interactions might correlate with poorer social skills and reasoning (e.g., Jin & Park, 2012). Our series of studies, particularly Studies 2 and 3, included controlled experiments to explore causative aspects pertaining to lingering effects of smartphone use.
It should be noted that whereas some studies have explored potential associations between smartphone usage and cognition, some studies have also explored whether exposure to the electromagnetic radiation emitted by smartphone technology alters cognition and brain physiology. Recent research shows that exposure to electromagnetic fields from smartphones does not alter event‐related potentials, like N100, P200, N200, P300 latencies and N2‐P300 amplitudes (Mohan, Khaliq, Panwar, & Vaney, 2016), nor do these fields alter measures of attention and short‐term memory (Cinel, Boldini, Fox, & Russo, 2008). Therefore, it is not likely that electromagnetic fields will confound an examination of how smartphone activities (for social media, phone calls, internet browsing, etc.) might influence cognition.
Our first study explored whether smartphone usage correlated with various measures of higher order cognition, including the Delayed Gratification Inventory, Cornell Critical Thinking Test, Level Z (for college‐age participants), and Modified Means End Problem Solving. The Modified Means End was used to assess social problem solving because we suspected that higher rates of mobile phone technology and social media might lead to a lack of practice with social problem solving. Although a study by Ekinci (2014) found that problematic internet use was positively correlated with the perception of avoidant and impulsive problem solving styles, no one to our knowledge has examined the relationship between use of smartphones and direct (rather than perceptions of) problem solving.
Once we identified relationships between mobile device usage and various forms of cognition, we then used an experimental manipulation in Study 2 to determine if there were any causal links. Only the cognitive tests and scales found to be related to cognition in Study 1 were investigated in Study 2. Study 3 was intended to replicate the findings in Study 2, as well as include a longer interval of time associated with manipulation of smartphone usage.
Given previous research (e.g., Wilmer & Chein, 2016) and our suspicion that more frequent smartphone usage would lead to less reliance on higher cognition, we hypothesized that in Study 1, the use of smartphones should be negatively correlated with particular executive control abilities, particularly delay gratification and critical thinking. Further, we hypothesized in Studies 2 and 3 that higher rates of smartphone usage might cause diminished abilities associated with certain forms of higher cognition.
Our study adds to the literature on mobile phone technology and cognition by (a) examining some cognitive variables not previously associated with research on mobile technology usage, such as social problem solving and more aspects of critical thinking, (b) using tracking apps rather than self‐reports to more accurately and objectively measure usage rates, and (c) introducing experimental control in Studies 2 and 3.
We chose to use tracking apps rather than rely exclusively on self‐reports since studies have shown that people generally misjudge time durations, especially rapid, yet pervasive, checking behaviors (Grondin, 2010; Rachman, 2002). Andrews, Ellis, Shaw, and Piwek (2015) were the first to use a tracking software they developed to find out how much time was actually spent using mobile phones. Comparing self‐reported usage of mobile phones versus tracking app reports over 2 weeks, they found that their participants did not always self‐report some aspects of their mobile phone usage accurately. For example, they found that young adults far underestimated the average daily number of phone uses.
The evolutionary context of personality development: Ontogenetic and Deferred Adaptations
The evolutionary context of personality development. Marco del Giudice. In Handbook of Personality Development, Chapter 2. New York: Guilford, January 2019. https://www.researchgate.net/publication/316120672_The_evolutionary_context_of_personality_development
Ontogenetic and Deferred Adaptations
Looking at developmental stages through the lens of biological function suggests a useful distinction between two kinds of adaptations that are often observed in early life. Ontogenetic adaptations are designed to serve their fitness-enhancing function at a specific time in development, and often disappear as soon as they are no longer needed. Examples include the placenta (a fetal organ that provides nourishment and other vital functions during the fetal stage and is discarded immediately after birth) and infantile reflexes, such as the suckling reflex. Deferred adaptations are traits that appear in childhood but function—at least in part—to prepare children for adult behavior (Bjorklund & Ellis, 2014). Play is a paramount example of a deferred adaptation; in humans and other mammals, playing trains youngsters to deal with unexpected events and, at the same time, paves the way to the acquisition of specialized adult skills (e.g., foraging, fighting, parenting) (Geary, 2010; Spinka, Newberry, & Bekoff, 2001).

The concept of an ontogenetic adaptation is particularly useful for understanding the limits of early experiences in shaping adult personality. Some behavioral traits expressed in childhood serve important functions in the context of family life but may cease to be useful as the child turns into an independent adult. These traits may either disappear or get repurposed in a different form in the service of new developmental goals. For example, attachment styles in infancy are largely determined by the parents’ caregiving styles and show negligible genetic effects. In middle childhood, attachment styles start to become differentiated by sex, possibly under the influence of adrenal androgens; adults’ attachment styles to romantic partners are only weakly correlated with those of infancy, and reflect a sizable contribution of genetic factors (Barbaro, Boutwell, Barnes, & Shackelford, 2017; Del Giudice, 2009, 2015a). At an even deeper level, the existence of parent–offspring conflict implies that the parents’ behavior is not completely in the best interest of their children.
For this reason, children should not passively accept the influence of parents; instead, they should show a certain amount of developmental “resistance” to parental shaping. While it is difficult to test this hypothesis directly, parent–offspring conflict may well help to explain why family experiences have only small and inconsistent effects on the development of adult personality.
Rolf Degen summarizing: The emergence of the ability to build elevated sleeping nests allowed great apes to adopt a more comfortable & human-like sleep posture than the one monkeys take up
Nesting, sleeping, and nighttime behaviors in wild and captive great apes. James R. Anderson, Mabel Y. L. Ang, Louise C. Lock, Iris Weiche. Primates, Apr 10 2019. https://link.springer.com/article/10.1007/s10329-019-00723-2
Abstract: The past few decades have seen a burgeoning of scientific studies on great apes’ use of nests for sleeping in the wild, as well as their nesting behavior and sleep in captivity. We review recent advances in knowledge of these topics, with the aim of promoting information exchange between people working in the field and with captive great apes. We trace developments in research into nest-building techniques in adults and immatures, factors that influence selection of general sleeping sites and specific locations, social aspects of sleep, postures, and nighttime activities. We argue that exchanges of information deriving from studies of captive and wild apes are valuable for obtaining a better understanding of sleep-related adaptations in our nearest evolutionary neighbors, and conclude by making some recommendations regarding sleeping arrangements in captivity from a welfare perspective.
Keywords: Chimpanzee Bonobo Gorilla Orangutan Nest building Sleep posture Sleep quality Environmental enrichment Welfare