The Rare Sides of Twin Research: Important to Remember/Twin Research Reviews: Representation of Self-Image; Twins With Kleine–Levin Syndrome; Heteropaternal Lemur Twins; Risk of Dental Caries/In the Media: High-Society Models; ‘Winkelevii’ Super Bowl Twins; Multiple Birth × Three; Twin Sister Surrogate; A Presidential Twin? Nancy L. Segal. Twin Research and Human Genetics, October 14 2019. https://doi.org/10.1017/thg.2019.84
Abstract: This article explores some rare sides of twin research. The focus of this article is the sad plight of the Dionne quintuplets, born in Canada in 1934. However, several other studies belong in this category, such as Dr Josef Mengele’s horrifying twin research conducted at the Auschwitz concentration camp, Dr John Money’s misguided attempt to turn an accidentally castrated male twin into a female, Russian scientists’ cruel medical study of conjoined female twins and Dr Peter Neubauer’s secret project that tracked the development of separated twins. Reviews of current twin research span twins’ representation of self-image, twins with Kleine–Levin Syndrome, heteropaternal twinning in lemurs and factors affecting risk of dental caries. Media coverage includes a pair of high-society models, a book about the ‘Winkelevii’ twins, Super Bowl twin teammates, a family with three sets of fraternal twins, a twin sister surrogate and a near presidential twin.
Monday, October 14, 2019
Male Qualities and Likelihood of Orgasm in Women: Consistent findings regarding males who elicit orgasm more frequently pertain to their sexual behavior (being attentive, patient, & receptive to instruction) rather than their traits
Male Qualities and Likelihood of Orgasm. James M. Sherlock and Morgan J. Sidari. In T.K. Shackelford, V.A. Weekes-Shackelford (eds.), Springer Encyclopedia of Evolutionary Psychological Science, 2020, https://doi.org/10.1007/978-3-319-16999-6_278-1
Definition
Male traits and sexual behaviors that predict the likelihood of female orgasm.
Introduction
In contrast to male orgasm, the female orgasm is enormously variable in frequency during penetrative sex. While men almost always ejaculate during penile-vaginal intercourse, women orgasm far less frequently from penetrative sex alone and experience significant variation in orgasm frequency with different partners (Lloyd 2005). While variation in women's orgasm frequency is poorly understood, evolutionary psychology tends to view variation in behavior as adaptive and responsive to environmental conditions. Current evolutionary theories regarding the female orgasm can be broadly divided into two positions: those that focus on selection on male sexual function and those that focus on selection on female sexual function. The by-product hypothesis concerns the former and posits that the capacity of women to experience orgasm is a consequence of strong selection pressure on males' capacity to reach orgasm. This position is based on similarities between male and female orgasm as well as the observation that the male glans penis and female clitoris arise from homologous tissue during development. In contrast, the mate choice hypothesis argues that variation in female orgasm frequency during sexual intercourse reflects the varying quality of their male partners. [...]
Mate-Choice Hypotheses of Female Orgasm
Given the high gestational cost of rearing human offspring, women should be driven to select mates of high quality. Under the sire-choice hypothesis, quality tends to be defined as the ability to pass favorable genetic traits on to offspring (i.e., heritable traits) that will contribute to offspring fitness. In contrast, the pair-bonding hypothesis postulates that male traits that are likely to benefit the woman through increased care and investment in offspring ought to promote orgasm. Traits that have been identified as theoretically important to each theory can be seen in Table 1 below.
Male Qualities and Likelihood of Orgasm, Table 1
Partner traits distinguishing mate-choice hypotheses (For review see Sherlock et al. 2016)
Sire choice: Physical attractiveness, height, athleticism, muscularity, voice depth, physical fitness, humor, creativity, dominance, body odor pleasantness
Pair-bonding: Faithfulness, warmth, earning potential, kindness
Mate-Choice Traits that Predict Orgasm Likelihood
Sire-Choice
The sire-choice hypothesis has been more extensively tested than the pair-bonding hypothesis and some evidence does suggest that women may be more likely to orgasm when their partner possesses traits putatively associated with genetic quality. [...]
Pair-Bonding
Few studies to date have thoroughly investigated pair-bonding traits in the context of female orgasm frequency. However, Costa and Brody (2007) have observed that greater orgasm frequency is associated with overall relationship quality. [...]
Alternative Considerations
While some male traits have been reported to covary with orgasm frequency, this may not represent a causal relationship. Firstly, it is possible that women who orgasm more frequently may be more likely to select particular types of partners than women who orgasm infrequently. For example, women who report a higher orgasm frequency may choose to have sex with more physically attractive men. These men are likely to have more experience in short-term sexual relationships and therefore may be more effective at eliciting orgasm in their partners. [...]
Only one study to date has investigated how women's orgasm frequencies have varied with different sexual partners. Sherlock et al. (2016) had single women with more than two male sexual partners report on a range of characteristics of the man with whom orgasm was the easiest and the man with whom orgasm was the most difficult (or did not occur at all). By comparing high- and low-orgasm men, several traits emerged as important in predicting ease of orgasm. High-orgasm men tended to be higher in humor, attractiveness, creativity, emotional warmth, faithfulness, and body odor pleasantness, consistent with both sire-choice and pair-bonding hypotheses.
Male Sexual Behavior
Importantly, Sherlock et al. (2016) also observed that a number of sexual behaviors differed between high- and low-orgasm males. Specifically, high-orgasm males were more likely to be focused on their partner's pleasure, engage in oral sex, use sex toys, spend more time on foreplay, and stimulate their partner's clitoris during sex. Women were also more likely to communicate their sexual position preferences and stimulate their clitoris when having sex with high-orgasm partners. [...] Across all three studies, orgasm was more likely to occur with manual stimulation of the clitoris during intercourse (Frederick et al. 2017; Richters et al. 2006; Sherlock et al. 2016). Consequently, any contribution of male traits (e.g., attractiveness) to female orgasm needs to be considered in the context of variation in male sexual behavior. Further complicating these results is the likely association between male traits and sexual behaviors. [...]
Conclusion
Despite some consistency in male traits associated with orgasm across several studies, there is reason to be cautious in interpreting these results without first accounting for male sexual behavior. The most prominent trait associated with increases in the likelihood of female orgasm is attractiveness (Andersson 1994; Gallup et al. 2014; Grammer et al. 2003; Shackelford et al. 2000; Sherlock et al. 2016), yet the causal pathway between attractiveness and female orgasm could be the inverse of the current theorizing. That is, women could come to view their partners as more attractive if those partners more frequently provide them with orgasms. In sum, the most consistent findings regarding males who elicit orgasm more frequently pertain to their sexual behavior rather than their traits. Women are more likely to achieve orgasm with a partner who is attentive, patient, and receptive to instruction.
Female Mate Choice; Mate selection; Orgasm; Pair-bonding; The Evolution of Genitalia
The over-the-counter pain reliever acetaminophen can alter consumers’ emotional experiences and their economic behavior well beyond soothing their aches and pains; it also has memory effects
Drug influences on consumer judgments: emerging insights and research opportunities from the intersection of pharmacology and psychology. Geoffrey R. O. Durso, Kelly L. Haws, Baldwin M. Way. Marketing Letters, October 10 2019. https://link.springer.com/article/10.1007/s11002-019-09500-z
Abstract: Recent evidence at the intersection of pharmacology and psychology suggests that pharmaceutical products and other drugs can exert previously unrecognized effects on consumers’ judgments, emotions, and behavior. We highlight the importance of a wider perspective for marketing science by proposing novel questions about how drugs might influence consumers. As a model for this framework, we review recently discovered effects of the over-the-counter pain reliever acetaminophen, which can alter consumers’ emotional experiences and their economic behavior well beyond soothing their aches and pains, and also present novel data on its memory effects. Observing effects of putatively benign over-the-counter medicines that extend beyond their originally approved usages suggests that many other drugs are also likely to influence processes relevant for consumers. The ubiquity of drug consumption—medical or recreational, legal or otherwise—underscores the importance of considering several novel research directions for understanding pharmacological-psychological interactions on consumer judgments, emotions, and behaviors.
Keywords: Decision making Emotion Memory Pharmaceuticals Substances Acetaminophen
Check also Conference Talk—SMPC 2019: Effect of Acetaminophen on Emotional Sounds. Lindsay A. Warrenburg. 2019. https://osf.io/79f4d/
Description: The capacity of listeners to perceive or experience emotions in response to music, speech, and natural sounds depends on many factors including dispositional traits, empathy, and enculturation. Emotional responses are also known to be mediated by pharmacological factors, including both legal and illegal drugs. Existing research has established that acetaminophen, a common over-the-counter pain medication, blunts emotional responses to visual stimuli (e.g., Durso, Luttrell, & Way, 2015). The current study extends this research by examining possible effects of acetaminophen on both perceived and felt responses to emotionally-charged sound stimuli. Additionally, it tests whether the effects of acetaminophen are specific for particular emotions (e.g., sadness, fear) or whether acetaminophen blunts emotional responses in general. Finally, the study tests whether acetaminophen has similar or differential effects on three categories of sound: music, speech, and natural sounds. The experiment employs a randomized, double-blind, parallel-group, placebo-controlled design. Participants are randomly assigned to ingest acetaminophen or a placebo. Then, the listeners are asked to complete two experimental blocks regarding musical and non-musical sounds. The first block asks participants to judge the extent to which a sound conveys a certain affect (on a Likert scale). The second block aims to examine a listener’s emotional responses to sound stimuli (also on a Likert scale). In light of the fact that some 50 million Americans take acetaminophen each week, this study suggests that future studies in music and emotion might consider controlling for the pharmacological state of participants.
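A note on the design just described: a parallel-group, placebo-controlled comparison of Likert emotion ratings is typically analyzed with a simple between-group test. The sketch below is only a hypothetical illustration of that kind of analysis, not the authors' procedure; the group sizes, means, and "felt sadness" ratings are invented.

```python
# Hypothetical sketch of a parallel-group comparison like the one described above.
# Not the authors' code: group sizes, ratings, and the assumed blunting effect are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-7 Likert ratings of felt sadness for emotionally charged sounds
placebo = np.clip(np.round(rng.normal(4.8, 1.2, 60)), 1, 7)
acetaminophen = np.clip(np.round(rng.normal(4.3, 1.2, 60)), 1, 7)  # assumed blunting

t, p = stats.ttest_ind(acetaminophen, placebo)
d = (acetaminophen.mean() - placebo.mean()) / np.sqrt(
    (acetaminophen.var(ddof=1) + placebo.var(ddof=1)) / 2
)
print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```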
Sunday, October 13, 2019
Some people share knowledge online, often without tangible compensation; they are motivated to signal general intelligence g; observers infer g from contributions' quality
The quality of online knowledge sharing signals general intelligence. Christian N.Yoder, Scott A.Reid. Personality and Individual Differences, Volume 148, 1 October 2019, Pages 90-94. https://doi.org/10.1016/j.paid.2019.05.013
Abstract: Some people share knowledge online, often without tangible compensation. Who does this, when, and why? According to costly signaling theory people use behavioral displays to provide observers with useful information about traits or states in exchange for fitness benefits. We tested whether individuals higher in general intelligence, g, provided better quality contributions to an information pool under high than low identifiability, and whether observers could infer signaler g from contribution quality. Using a putative online wiki (N = 98) we found that as individuals' scores on Ravens Progressive Matrices (RPM) increased, participants were judged to have written better quality articles, but only when identifiable and not when anonymous. Further, the effect of RPM scores on inferred intelligence was mediated by article quality, but only when signalers were identifiable. Consistent with costly signaling theory, signalers are extrinsically motivated and observers act as “naive psychometricians.” We discuss the implications for understanding online information pools and altruism.
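The mediation finding above (Ravens scores predicting inferred intelligence via judged article quality, only when signalers were identifiable) is the kind of result usually estimated as an indirect effect with a bootstrap confidence interval. The following is a minimal, hypothetical sketch of that procedure; the simulated variables and path coefficients are invented and are not the study's data or code.

```python
# Minimal sketch of a product-of-coefficients mediation analysis with a
# percentile bootstrap CI. All data below are simulated, not the study's.
import numpy as np

rng = np.random.default_rng(1)
n = 98  # matches the reported N, but these observations are invented

rpm = rng.normal(0, 1, n)                          # predictor: Ravens (RPM) score
quality = 0.5 * rpm + rng.normal(0, 1, n)          # mediator: judged article quality
inferred_g = 0.6 * quality + rng.normal(0, 1, n)   # outcome: inferred intelligence

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # path a: x -> m
    design = np.column_stack([np.ones(len(x)), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # path b: m -> y, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample cases with replacement
    boot.append(indirect_effect(rpm[idx], quality[idx], inferred_g[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(rpm, quality, inferred_g):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```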
Brain Syndrome Can Make Owners Think Pets Are Impostors
Herzog, Harold, "Brain Syndrome Can Make Owners Think Pets Are Impostors" (2018). 'Animals and Us' Blog Posts. 103, Apr 25, 2018. https://animalstudiesrepository.org/aniubpos/103
Mary was 40 years old when she became convinced that Sarah, her 9 year old daughter, was an impostor. The real Sarah, she told relatives, had been taken away and placed into a foster home. She claimed social workers had replaced her actual child with an identical-looking impostor. Mary was so convinced of this substitution that she would sometimes refuse to pick her daughter up at school. Mary would scream to the teachers, “Give me my real daughter back, I know what you have done!”
To no avail, her family and health care providers tried to convince Mary that no substitution had occurred, that Sarah was indeed her real daughter. But even after Mary was treated with risperidone, a powerful anti-psychotic drug, she held on to the delusion. The local Department of Social Services became concerned about her ability to raise a child. And when it became apparent that Mary could no longer provide care for the daughter who she believed was an impostor, they successfully sought legal guardianship for Sarah. At one point during the hearing, Sarah told the court, “I love my mother, except when she doesn’t believe I’m me.”
As described in an article by Drs. Jeremy Matuszak and Matthew Parra in the journal Psychiatric Times, Mary was suffering from Capgras Syndrome. This is a rare variant of a group of neuropsychiatric conditions called delusional misidentification disorders. First identified in 1923 by the French psychiatrists Joseph Capgras and Reboul-Lachaux, individuals with the Capgras delusion come to believe that a person they know has been replaced by an identical-looking impostor. Usually the target of the delusion is a family member or a loved one. In Mary’s case, it was her young daughter.
[...]
What Causes Animal Capgras Delusions?
I admit that nine cases is a small sample, but some interesting patterns did emerge among this group of patients. For example, twice as many women as men thought they were living with impostor animals. And, as a group, the patients tended to be on the old side. Six of eight individuals were over 50, and half were in their late 60s or older. While only two of the patients had suffered identifiable brain damage, nearly all of the patients had been diagnosed with a functional psychosis, usually a form of schizophrenia. Finally, all seven of the patients for whom treatment information was available were given anti-psychotic drugs. In nearly all of these cases, their pet impostor delusions diminished, and in several cases, they seemed to have disappeared.
Speculations about the causes of Capgras syndrome abound. Some researchers argue that impostor delusions are a way of subconsciously dealing with love-hate conflicts. V. S. Ramachandran believes that impostor delusions result from disconnections between emotional and face-recognition centers in the brain. Others argue it is usually the result of degenerative diseases such as Alzheimer’s and Lewy body disease. Indeed, impostor delusions have been associated with a wide array of conditions including psychiatric disorders, strokes, tumors, epilepsy, and even vitamin deficiencies and drug use.
Targeted Memory Reactivation During Sleep Improves Next-Day Problem Solving
Targeted Memory Reactivation During Sleep Improves Next-Day Problem Solving. Kristin E. G. Sanders et al. Psychological Science, October 11, 2019. https://doi.org/10.1177/0956797619873344
Abstract: Many people have claimed that sleep has helped them solve a difficult problem, but empirical support for this assertion remains tentative. The current experiment tested whether manipulating information processing during sleep impacts problem incubation and solving. In memory studies, delivering learning-associated sound cues during sleep can reactivate memories. We therefore predicted that reactivating previously unsolved problems could help people solve them. In the evening, we presented 57 participants with puzzles, each arbitrarily associated with a different sound. While participants slept overnight, half of the sounds associated with the puzzles they had not solved were surreptitiously presented. The next morning, participants solved 31.7% of cued puzzles, compared with 20.5% of uncued puzzles (a 55% improvement). Moreover, cued-puzzle solving correlated with cued-puzzle memory. Overall, these results demonstrate that cuing puzzle information during sleep can facilitate solving, thus supporting sleep’s role in problem incubation and establishing a new technique to advance understanding of problem solving and sleep cognition.
Keywords problem solving, incubation, sleep, targeted memory reactivation, restructuring, creative cognition
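The "55% improvement" in the abstract is the relative increase of the cued solving rate over the uncued rate; the quick check below uses only the two percentages reported above.

```python
# Relative improvement implied by the reported solving rates.
cued, uncued = 0.317, 0.205               # proportions of puzzles solved (from the abstract)
print(f"{(cued - uncued) / uncued:.1%}")  # ~54.6%, i.e., roughly a 55% improvement
```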
When recipients open a gift from a friend, they like it less when the giver has wrapped it neatly as opposed to sloppily; expectation disconfirmation theory explains the effect
Presentation Matters: The Effect of Wrapping Neatness on Gift Attitudes. Jessica M. Rixom Erick M. Mas Brett A. Rixom. Journal of Consumer Psychology, October 11 2019. https://doi.org/10.1002/jcpy.1140
Abstract: While gift‐givers typically wrap gifts prior to presenting them, little is known about the effect of how the gift is wrapped on recipients’ expectations and attitudes toward the gift inside. We propose that when recipients open a gift from a friend, they like it less when the giver has wrapped it neatly as opposed to sloppily and we draw on expectation disconfirmation theory to explain the effect. Specifically, recipients set higher (lower) expectations for neatly (sloppily)‐wrapped gifts, making it harder (easier) for the gifts to meet these expectations, resulting in contrast effects that lead to less (more) positive attitudes toward the gifts once unwrapped. However, when the gift‐giver is an acquaintance, there is ambiguity in the relationship status and wrapping neatness serves as a cue about the relationship rather than the gift itself. This leads to assimilation effects where the recipient likes the gift more when neatly wrapped. We assess these effects across three studies and find that they hold for desirable, neutral, and undesirable gifts, as well as with both hypothetical and real gifts.
Saturday, October 12, 2019
Random effects meta-analyses showed that social media use was significantly & positively related to affective empathy; effects were generally small in size and do not establish causality
Social Media Use and Empathy: A Mini Meta-Analysis. Shu-Sha Angie Guan, Sophia Hain, Jennifer Cabrera, Andrea Rodarte. Computer Science & Communications, Vol.8 No.4, October 2019, pp. 147-157. DOI: 10.4236/sn.2019.84010
ABSTRACT: Concerns about the effects of social media or social networking site (SNS) use on prosocial development are increasing. The aim of the current study is to meta-analytically summarize the research to date (k = 5) about the relationship between general SNS use and two components of empathy (i.e., empathic concern and perspective-taking). Random effects meta-analyses showed that SNS use was significantly and positively related to affective empathy though only marginally related to cognitive empathy. These effects were generally small in size and do not establish causality. Future research should explore how specific behaviors are related to different forms of empathy.
KEYWORDS: Social Media, Empathic Concern, Perspective-Taking
4. Discussion
Despite the decreases in empathy coupled with increases in media use at the societal level [13] , individual social media use in terms of frequency or time spent per day appears to be related to higher levels of empathy, particularly affective empathy. Even though the associations were small, they trended positive. However, there may be some online behaviors that cultivate empathy (e.g., sharing emotions, expressing support [21] ) more than others (e.g., updating profile photos [20] ). In combination with emerging longitudinal evidence that social media use at one time point is predictive of higher levels of cognitive and affective empathy one year later among adolescents [42] and experimental work that shows that interdependent Facebook use can promote relational orientation [37] , this study contributes to the growing literature on how social media can facilitate positive psychosocial development.
Although promising, there are limitations of the current meta-analysis to consider. This study aimed to look only at global measures of social media use in everyday life and, because of this inclusion parameter, includes a small sample of studies and effect sizes. This likely limits the generalizability of the results and our ability to detect differences by moderators (gender, age). Also, the results are correlational and do not establish causality. Previous research suggests that individuals who are prosocial offline are often prosocial online [29]. Despite our attempts to narrow the scope, there remained variability in the measures of media use and study parameters as indicated by the heterogeneity index. Given the wide range of online activities, future studies should explore how specific behaviors are related to different forms of empathy (e.g., helping strangers vs. family or friends [25] ). Additionally, the social media landscape is constantly evolving and this study captures media use as assessed by recent studies in one moment in time. Cultural psychologists suggest that changes in technology use, as part of larger shifting sociodemographic and ecological changes, can shape cultural values and learning environments in ways that directly affect human development across time [43].
It is also important to note that all of the studies included, and much of media research in general, have been conducted in industrialized, individualistic countries like the United States. This limited our ability to detect cultural differences. On the one hand, the most popular SNSs are often developed in Western cultures and can reflect the highly individualistic values of their developers and users [37] [44]. On the other hand, the Internet is a “global village” of individuals from various nationalities and cultural backgrounds, with nearly 60% of the online population residing outside of the U.S. [44]. These diverse offline cultural values can be reflected in the online environment [45] - [52]. Additionally, there may be values and goals specific to the SNS context beyond the values that users bring with them [53]. Previous meta-analyses suggest that the effects of media use may be stronger in non-Western countries [26]. Future research should explore how online and offline cultural values interact in shaping development.
Although limited, this meta-analysis provides useful insights into the media-empathy paradox [13]. Additionally, it may be informative in better understanding growing generations of adolescents and young adults who are the first to have grown up fully immersed in digital media (i.e., “digital natives”), having been born around or after the 1990s, when the Internet was first commercially launched. This may mean that psychosocial development for these “digital natives” differs from that of prior generations of “digital immigrants” [9]. For example, greater face-to-face communication with family members, close friends, and acquaintances was associated with higher levels of psychological well-being (e.g., life meaning, relationship quality) for older adults aged 35 - 54 but not for young adults aged 18 - 34 [54]. As technology transforms society, social relationships, and media landscapes, it will become ever more important to track how these changes affect individuals and their development.
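For readers unfamiliar with the random-effects approach named in the abstract, such a synthesis of k = 5 correlations typically Fisher-z transforms each effect, estimates the between-study variance (e.g., with the DerSimonian-Laird estimator), and pools with inverse-variance weights. The sketch below illustrates the mechanics only; the five correlations and sample sizes are hypothetical, not those of the included studies.

```python
# Illustrative DerSimonian-Laird random-effects meta-analysis of correlations.
# The effect sizes and sample sizes below are hypothetical, not the studies' data.
import numpy as np

r = np.array([0.10, 0.05, 0.15, 0.08, 0.12])  # hypothetical correlations (SNS use ~ empathy)
n = np.array([200, 150, 300, 120, 250])       # hypothetical sample sizes

z = np.arctanh(r)                 # Fisher z transform
v = 1.0 / (n - 3)                 # sampling variance of z
w = 1.0 / v                       # fixed-effect (inverse-variance) weights

z_fixed = np.sum(w * z) / np.sum(w)
Q = np.sum(w * (z - z_fixed) ** 2)            # Cochran's Q heterogeneity statistic
k = len(r)
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)            # DerSimonian-Laird between-study variance

w_re = 1.0 / (v + tau2)                       # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
r_pooled = np.tanh(z_re)                      # back-transform to a correlation

print(f"pooled r = {r_pooled:.3f}, 95% CI "
      f"[{np.tanh(z_re - 1.96 * se_re):.3f}, {np.tanh(z_re + 1.96 * se_re):.3f}], "
      f"Q = {Q:.2f}, tau^2 = {tau2:.4f}")
```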
Is intense pleasure necessary for intense beauty? If so, the inability to experience pleasure (anhedonia) should prevent the experience of intense beauty
Intense beauty requires intense pleasure. Aenne A. Brielmann and Denis Pelli. Front. Psychol., Oct 11 2019 (provisionally accepted, no full text available). doi: 10.3389/fpsyg.2019.02420
Abstract: At the beginning of psychology, Fechner (1876) claimed that beauty is immediate pleasure, and that an object’s pleasure determines its value. In our earlier work, we found that intense pleasure always results in intense beauty. Here, we focus on the complementary question: Is intense pleasure necessary for intense beauty? If so, the inability to experience pleasure (anhedonia) should prevent the experience of intense beauty. We asked 757 participants to rate how intensely they felt beauty from each image. We used 900 OASIS images along with their available valence (pleasure vs. displeasure) and arousal ratings. We then obtained self-reports of anhedonia (TEPS), mood, and depression (PHQ-9). Across images, beauty ratings were closely related to pleasure ratings (r = 0.75), yet unrelated to arousal ratings. Only images with an average pleasure rating above 4 (of a possible 7) often achieved (> 10%) beauty averages exceeding the overall median beauty. For normally beautiful images (average rating > 4.5), the beauty ratings were correlated with anhedonia (r ~ -0.3) and mood (r ~ 0.3), yet unrelated to depression. Comparing each participant’s average beauty rating to the overall median, none of the most anhedonic participants exceeded the median, whereas 50% of the remaining participants did. Thus, both general and anhedonic results support the claim that intense beauty requires intense pleasure. In addition, follow-up repeated measures showed that shared taste contributed 19% to beauty-rating variance, only one third as much as personal taste (58%). Addressing age-old questions, these results indicate that beauty is a kind of pleasure, and that beauty is more personal than universal, i.e., 1.7 times more correlated with individual than with shared taste.
Keywords: beauty, aesthetics, Pleasure, Anhedonia, Depression
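One plausible reading of the abstract's closing figures, not stated there explicitly: if personal taste explains 58% of beauty-rating variance and shared taste 19%, the implied correlations are the square roots of those shares, whose ratio is about 1.7, while the ratio of the shares themselves is about one third. A quick check under that assumption:

```python
# Assumed reading of the closing figures (the abstract does not spell this out):
# correlations are taken as square roots of the variance shares.
import math

personal_share, shared_share = 0.58, 0.19
print(f"{shared_share / personal_share:.2f}")                        # ~0.33, "one third as much"
print(f"{math.sqrt(personal_share) / math.sqrt(shared_share):.2f}")  # ~1.75, "1.7 times more correlated"
```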
Effects of heroin on rat prosocial behavior: They stopped freeing the trapped cagemate, continued to self-administer the drug
From 2018... Effects of heroin on rat prosocial behavior. Seven E. Tomek Gabriela M. Stegmann M. Foster Olive. Addiction Biology, May 4 2018. https://doi.org/10.1111/adb.12633
Abstract: Opioid use disorders are characterized in part by impairments in social functioning. Previous research indicates that laboratory rats, which are frequently used as animal models of addiction‐related behaviors, are capable of prosocial behavior. For example, under normal conditions, when a ‘free’ rat is placed in the vicinity of a rat trapped in a plastic restrainer, the free rat will release or ‘rescue’ the other rat from confinement. The present study was conducted to determine the effects of heroin on prosocial behavior in rats. For 2 weeks, rats were given the opportunity to rescue their cagemate from confinement, and the occurrence of and latency to free the confined rat were recorded. After baseline rescuing behavior was established, rats were randomly selected to self‐administer heroin (0.06 mg/kg/infusion i.v.) or sucrose pellets (orally) for 14 days. Next, rats were retested for rescuing behavior once daily for 3 days, during which they were provided with a choice between freeing the trapped cagemate and continuing to self‐administer their respective reinforcer. Our results indicate that rats self‐administering sucrose continued to rescue their cagemate, whereas heroin rats chose to self‐administer heroin and not rescue their cagemate. These findings suggest that rats with a history of heroin self‐administration show deficits in prosocial behavior, consistent with specific diagnostic criteria for opioid use disorder. Behavioral paradigms providing a choice between engaging in prosocial behavior and continuing drug use may be useful in modeling and investigating the neural basis of social functioning deficits in opioid addiction.
From 2001... Need to hire a private certified arborist at a cost of $500-$2,000 to take pictures, prepare a report & perhaps to recommend protective pruning or other measures before a tree can be cut
From 2001... The (Almost) Untouchables of California. Todd S. Purdum. The New York Times, Aug. 29, 2001, Section A, Page 1. https://www.nytimes.com/2001/08/29/us/the-almost-untouchables-of-california.html
'Anything that's going to happen under this tree has to be addressed,' said Mr. Sartain, a third-generation arborist, surveying the tree's 90-foot canopy with the cheerful, clinical detachment of your favorite pediatrician. 'There's a lot of issues.'
Indeed, Mr. Sartain's visit is only the first step in a process that will require the homeowner, who asked not to be named, to hire a private certified arborist at a cost of $500 to $2,000 to take pictures, prepare a report and perhaps to recommend protective pruning or other measures before a permit is issued and construction can proceed. Penalties for removing a tree like this, worth perhaps $100,000 under city guidelines because of its size and age, could force an offender to plant trees worth an equivalent amount.
Santa Clarita is not alone.
In the past 30 years, as development pressures increased, scores of California cities and counties from Thousand Oaks in the south to Santa Rosa in the north have passed ordinances protecting not only various species and sizes of oaks, but also sycamores, walnuts, eucalyptus and other trees with a zeal that might make the poet Joyce Kilmer blush.
The specifics vary widely, but the ordinances have one goal in common: protecting trees that are almost as storied in California as its redwoods and that have long been threatened by ranching, wine-making, suburban sprawl and, more recently, mysterious diseases.
Children from Namibia and Germany: Being observed increases overimitation in three diverse cultures
Stengelin, R., Hepach, R., & Haun, D. B. M. (2019). Being observed increases overimitation in three diverse cultures. Developmental Psychology, Oct 2019. http://dx.doi.org/10.1037/dev0000832
Abstract: From a young age, children in Western, industrialized societies overimitate others’ actions. However, the underlying motivation and cultural specificity of this behavior have remained unclear. Here, 3- to 8-year-old children (N = 125) from two rural Namibian populations (Hai||om and Ovambo) and one urban German population were tested in two versions of an overimitation paradigm. Across cultures, children selectively imitated more actions when the adult model was present compared to being absent, denoting a social motivation underlying overimitation. At the same time, children’s imitation was not linked to their tendency to reengage the adult in a second, independent measure of social motivation. These results suggest that, across diverse cultures, children’s imitative behavior is actuated by the attentive state of the model.
Friday, October 11, 2019
Age is the most important risk factor for loneliness, which peaks just prior to the peak age of onset for psychotic disorders; contrary to expectations, it declines thereafter over the lifespan
Shovestul, Bridget, Jiayin Han, Laura Germine, and David Dodell-Feder. 2019. “Risk Factors for Loneliness: The High Relative Importance of Age Versus Other Factors.” PsyArXiv. October 11. doi:10.31234/osf.io/t8n2h
Abstract
Background: Loneliness is a potent predictor of negative health outcomes, making it important to identify risk factors for loneliness. Though extant studies have identified characteristics that are associated with loneliness, less is known about the cumulative and relative importance of these factors, and how their interaction may impact loneliness. Thus, here, we investigate risk factors for loneliness.
Methods: 4,885 individuals ages 10-97 years from the US completed the three-item UCLA Loneliness Survey on TestMyBrain.org. Using census data, we calculated the population and community household income of participants’ census area, and the proportion of individuals in the participant’s census area that shared the participant’s demographic characteristics (i.e., sociodemographic density). We evaluated the relative importance of three classes of variables for loneliness risk: those related to the person (e.g., age), place (e.g., community household income), and the interaction of person X place (sociodemographic density).
Results: We find that loneliness is highly prevalent and best explained by person (age) and place (community household income) characteristics. Of the variance in loneliness accounted for, the overwhelming majority was explained by age. On age, loneliness peaks at 19 years, and declines thereafter. The congruence between one’s sociodemographic characteristics and that of one’s neighborhood had no impact on loneliness.
Conclusions: Age appears to be the most important risk factor for loneliness, which peaks just prior to the peak age of onset for psychotic disorders, and, contrary to popular belief, declines thereafter over the lifespan. These data may have important implications for public health interventions.
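A minimal Python sketch of the kind of comparison described in the Methods above is shown here, assuming an invented data set and column names; it computes a crude "sociodemographic density" score (the share of people in one's census area who match one's own demographic profile) and compares the incremental R^2 each class of predictor adds for loneliness. This is an illustration only, not the authors' analysis code, and the incremental-R^2 comparison is a simplification of a relative-importance analysis.

```python
# Illustrative sketch only, not the authors' code. Invented data and column names.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(10, 98, n),
    "sex": rng.choice(["F", "M"], n),
    "ethnicity": rng.choice(["A", "B", "C"], n),
    "census_area": rng.integers(0, 50, n),
    "community_income": rng.normal(60_000, 15_000, n),
    "loneliness": rng.normal(5, 2, n),  # placeholder UCLA-3-style score
})

# Sociodemographic density: share of people in one's census area sharing
# one's demographic profile (here sex x ethnicity, for illustration).
cell_size = df.groupby(["census_area", "sex", "ethnicity"])["age"].transform("size")
area_size = df.groupby("census_area")["age"].transform("size")
df["density"] = cell_size / area_size

def r2(cols):
    """R^2 of an OLS fit of loneliness on the given predictors."""
    X, y = df[cols].to_numpy(), df["loneliness"].to_numpy()
    return LinearRegression().fit(X, y).score(X, y)

full = r2(["age", "community_income", "density"])
print("incremental R^2, age    :", full - r2(["community_income", "density"]))
print("incremental R^2, income :", full - r2(["age", "density"]))
print("incremental R^2, density:", full - r2(["age", "community_income"]))
```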
Youth marijuana use can have adverse health outcomes; however, reports from Colorado, Oregon, & Washington indicate no statewide increase in youth marijuana use following retail legalization for adults
Ta M, Greto L, Bolt K. Trends and Characteristics in Marijuana Use Among Public School Students — King County, Washington, 2004–2016. MMWR Morb Mortal Wkly Rep 2019;68:845–850. https://www.cdc.gov/mmwr/volumes/68/wr/mm6839a3.htm
Summary
What is already known about this topic?
Youth marijuana use can have adverse health outcomes. However, reports from Colorado, Oregon, and Washington indicate no statewide increase in youth marijuana use following retail legalization for adults.
What is added by this report?
Following 2012 legalization of retail marijuana sale to adults in Washington, past 30–day marijuana use decreased or remained stable through 2016 among King County students in grades 6, 8, 10, and 12. Among grade 10 students, the decline in use occurred among males while the rate among females remained steady. Use of alcohol or other substances was four times as frequent among marijuana users as among nonusers.
What are the implications for public health practice?
Understanding reasons for youth marijuana use, particularly among females, might help inform policy, strategies, and educational campaigns.
Use of marijuana at an early age can affect memory, school performance, attention, and learning; conclusions have been mixed regarding its impact on mental health conditions, including psychosis, depression, and anxiety (1–3). Medical marijuana has been legal in Washington since 1998, and in 2012, voters approved the retail sale of marijuana for recreational use to persons aged ≥21 years. The first retail stores opened for business in July 2014. As more states legalize marijuana use by adults aged ≥21 years, the effect of legalization on use by youths will be important to monitor. To guide planning of activities aimed at reducing marijuana use by youths and to inform ongoing policy development, Public Health—Seattle & King County assessed trends and characteristics of past 30–day marijuana use among King County, Washington, public school students in grades 6, 8, 10, and 12. This report used biennial data for 2004–2016 from the Washington State Healthy Youth Survey. Among grade 6 students there was a decreasing trend in self-reported past 30–day marijuana use from 2004 to 2016, while the percentage of grade 8 students who had used marijuana during the past 30 days did not change during that period. Among students in grades 10 and 12, self-reported past 30–day use of marijuana increased from 2004 to 2012, then declined from 2012 to 2016. In 2016, the percentage of students with past 30–day marijuana use in King County was 0.6% among grade 6, 4.1% among grade 8, 13.9% among grade 10, and 25.5% among grade 12 students. Among grade 10 students, 24.0% of past 30–day marijuana users also smoked cigarettes, compared with 1.3% of nonusers. From 2004 to 2016 the prevalence of perception of great risk of harm from regular marijuana use decreased across all grades. Continued surveillance using consistent measures is needed to monitor the impact of marijuana legalization and emerging public health issues, given variable legislation approaches among jurisdictions.
The Healthy Youth Survey is a school-based, anonymous, self-administered, cross-sectional survey conducted in the fall of even-numbered years in Washington public schools.* Schools with grades 6, 8, 10, and 12 are randomly selected using a clustered sampling design. Schools not selected for the state sample also can choose to participate in the survey. The survey measures risk behaviors, attitudes, and factors that contribute to youth health and safety, including alcohol, marijuana, tobacco, and other drug use; behaviors that result in unintentional and intentional injuries (e.g., violence); dietary behaviors; and physical activity.
This analysis used data from all participating schools, both sampled and nonsampled, representing all 19 King County school districts for biennial survey years 2004 through 2016 (the most currently available year of data at the time of analysis). King County is the largest metropolitan county in the state. Local jurisdictions have authority to regulate land uses and can impose additional time, place, and manner-of-use restrictions on state licensed businesses; thus, considerable variation in the availability of and restrictions on retail marijuana exists across the 39 cities in King County, including Seattle.
Survey response rates varied by grade and survey year, with higher rates in more recent surveys.† During 2004–2016, King County response rates ranged from 60%–80% for grades 6 and 8; 50%–70% for grade 10; and 40%–50% for grade 12. For the 2016 survey, response rates for King County were 80% for grades 6 and 8, 70% for grade 10, 40% for grade 12, and 67% for all grades combined.
Data representing substance use, perception of great risk of harm, risky behaviors, and factors associated with marijuana use were categorized dichotomously. Past 30–day marijuana use was considered use on 1 or more days during the past 30 days. Perceived great risk of harm associated with regular marijuana use (more than one or two times per week) was categorized dichotomously as great risk versus all other options combined (moderate, slight, and no perceived risk). Past 30–day use of alcohol, cigarettes, and electronic cigarettes/vape pens was considered use on 1 or more days in the past 30 days; past 30–day risky driving and riding behaviors§ were considered one or more occurrences during the past 30 days; and past binge drinking¶ was assessed over a 2-week period.
Dichotomous factors generally reported to be associated with other substance use (4) were examined for marijuana use; these factors included whether students’ parents had talked about not using marijuana, use by one or more best friends or by a member in the youth’s household, and having been bullied one or more times in the past month.** Stata survey software (version 13; StataCorp) was used to generate percentage estimates and corresponding 95% confidence intervals (CIs). To account for differential participation among school districts across survey years, percentage estimates were weighted to the school district total enrollment by grade and sex, with the final weights adjusted to sum to the county total public school enrollment by grade and sex. Joinpoint trend analysis software (https://surveillance.cancer.gov/joinpoint/) was used to evaluate statistical significance of trends in survey-weighted percentage estimates by grade and sex. Analyses of trends by sex and examination of factors associated with past 30–day use were restricted to grade 10 students as a result of grade-specific sampling and the need for adequate response rates to accommodate a robust analysis.
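As a rough illustration of the dichotomization and weighting steps just described (the report itself used Stata's survey routines and Joinpoint), here is a minimal Python sketch with toy data and invented column names; the simple variance formula ignores the clustered school design, so it is not the report's method.

```python
# Illustrative sketch only; the report used Stata survey software. Toy data,
# invented column names, and a simplified variance that ignores clustering.
import numpy as np
import pandas as pd

survey = pd.DataFrame({
    "days_used_past30": [0, 0, 2, 0, 10, 0, 1, 0],        # toy responses
    "weight": [1.2, 0.8, 1.1, 1.0, 0.9, 1.3, 1.0, 0.7],   # enrollment-adjusted weights
})

# Dichotomize: any use on 1 or more of the past 30 days.
survey["any_use"] = (survey["days_used_past30"] >= 1).astype(int)

w = survey["weight"].to_numpy()
y = survey["any_use"].to_numpy()

p_hat = np.average(y, weights=w)               # weighted prevalence estimate
# Crude variance of the weighted mean (no design effect / clustering).
var = np.sum((w / w.sum()) ** 2 * (y - p_hat) ** 2)
se = np.sqrt(var)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"prevalence = {p_hat:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```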
During 2004–2016, the prevalence of reported past 30–day marijuana use was lowest among students in grade 6 and increased with school grade level (Figure 1). In 2016, past 30–day marijuana use was reported by 0.6% (CI = 0.4–0.7) of grade 6 students, 4.1% (CI = 3.5–4.8) of grade 8 students, 13.9% (CI = 12.6–15.3) of grade 10 students, and 25.5% (CI = 23.7–27.4) of grade 12 students in King County. Among students in grade 6, past 30–day marijuana use declined significantly, from 1.3% in 2004 to 0.6% in 2016. There was no statistically significant trend among students in grade 8; however, among students in grades 10 and 12, past 30–day use increased from 2004 to 2012, and then declined. Across all grades, the percentage of students reporting great risk of harm from regular marijuana use declined over the survey period, with the lowest perceived great risk of harm reported among older students in all years. In 2016, 26.7% (CI = 25.0–28.5) of students in grade 12 perceived great risk of harm from regular marijuana use, whereas 53.3% (CI = 50.5–56.1) reported this perception in 2004.
Among male students in grade 10, past 30–day marijuana use increased from 17.6% in 2004 to 21.4% in 2010 and subsequently declined to 13.5% in 2016 (Figure 2). Among female students in grade 10, there was no change in the prevalence of past 30–day use, which remained approximately 16% during this period. In 2016, there was no significant difference in past 30–day marijuana use between male and female students in grade 10.
Among past 30–day marijuana users in grade 10, 42.8% reported living with someone who uses marijuana, 88.5% reported having at least one best friend who used marijuana, and 26.3% reported having been bullied at least once in the past 30 days; these prevalences were higher than those among grade 10 nonusers (12.8%, 28.3%, and 16.5%, respectively) (Table). Among grade 10 marijuana users, 92.5% reported that it was not very hard to obtain marijuana, compared with 56.7% of nonusers. No parental discussion about marijuana during the past year was reported by similar percentages of past 30–day marijuana users (39.2%) and nonusers (39.8%).
Among grade 10 students, prevalence of past 30–day use of other substances was four times higher among those who had used marijuana in the past 30 days than among those who had not. Among marijuana users, the prevalences of past 30–day use of other substances were as follows: alcohol (67.0%), cigarettes (24.0%), e-cigarettes or vape pens (43.0%), and binge drinking (43.5%), compared with 10.3%, 1.3%, 4.0%, and 3.7%, respectively, among nonusers. Among grade 10 marijuana users, 36% reported driving within 3 hours of using marijuana at least once in the past month.
Discussion
Despite legalization of the retail sale of marijuana to adults in Washington in 2012, evidence from the biennial Washington State Healthy Youth Survey indicates that the prevalence of past 30–day marijuana use among students in grades 10 and 12 began to decline that year. The decline continued through 2016 among grade 10 students, while use did not change significantly among grade 12 students. This decline or absence of change in youth marijuana use after legalization of retail sales to adults is consistent with trends reported in Colorado and Oregon,†† states that legalized adult retail sales of marijuana in 2013 and 2014, respectively. However, causality of the observed decrease in youth use following retail sale legalization cannot be inferred, because effects might be delayed and this report does not include data from the timeframe that would capture the more recent surge in youth e-cigarette use and the use of tetrahydrocannabinol (THC) in e-cigarette devices. Although the relationship between legal adult recreational use and youth use is not well understood, two possible reasons for the observed decline in youth use are a reduction of illicit market supply through competition§§ and a loss of novelty appeal among youths. Furthermore, it would be important to monitor the long-term role legalization might play in fostering a permissive use environment, given the strong observed associations between use and the individual and family factors that influence youth use.
Before initiation of retail marijuana sales in Washington in 2014, the statewide prevalence of use among grade 10 students had not changed significantly since 2002, although reported statewide use prevalence in 2016 was higher among students identifying as non-Hispanic American Indian/Alaska Native and Hispanic than among non-Hispanic white and non-Hispanic Asian students (5). Among grade 10 King County students, past 30–day marijuana use by male students has been decreasing since 2010, while the prevalence among female students has not changed. Continued monitoring is necessary to observe how local trends among males change over time. The narrowing of the sex difference gap reflects national trends (6) and suggests that female users might benefit from tailored prevention messages informed by an understanding of reasons for use.
Although overall youth rates of smoking and alcohol use are declining nationally (7), the prevalence of any substance use, including alcohol, cigarettes, or vape pens, was four times higher among grade 10 past 30–day marijuana users than among nonusers. Statewide data from 2016 also show a similarly higher prevalence of the household, peer and individual factors associated with youth substance use among grade 10 marijuana users than among nonusers (https://www.askhys.net/library/2016/RecentMarijaunaUseGr10.pdf). A 2017 survey of Canadian residents aged 15–24 years found that marijuana users were significantly more likely to be past 30-day e-cigarette users, compared with nonusers (8). Polysubstance use and driving after using marijuana or riding in a car driven by someone who had used marijuana recently are public health issues that are important to monitor. Educational campaigns conveying the health risks of marijuana use should also address impaired driving, in light of experimental data showing deteriorating control with increasing task complexity and increased risk for involvement in a motor vehicle crash (9).
The findings in this report are subject to at least six limitations. First, these data predate the recently reported increase in youth e-cigarette use and the use of THC in the newest generation of e-cigarette devices. The marijuana use question does not explicitly define use by method, and youth marijuana use might be underestimated if respondents did not consider vaping or edible consumption of marijuana products when responding to the question. Second, data are from public school students only and might not be generalizable to all youths in this age group. Students at higher risk might not be in school; it is estimated that 95.3% of King County residents aged 14–18 years are in school.¶¶ Third, survey participation is voluntary, and responses are based on self-report, which can be subject to recall or response bias. Fourth, these estimates might differ from those of other state or nationally representative youth health–surveillance systems, in part because of survey methods, age of participants, survey setting, and period during the year the survey was conducted. Fifth, local historical data for youth marijuana use before 2004 are not available, and the effects of medical marijuana legalization, which occurred in 1998, on use by youths are unknown. Finally, binge drinking is framed as five or more drinks in a row during the preceding 2 weeks for both males and females, which would likely underestimate excessive alcohol consumption among females compared with using a sex-specific four-drink threshold (10).
The national goals for substance use set by Healthy People 2020*** include a target of 6% for youths aged 12–17 years with past 30–day marijuana use, and progress toward this target requires evidence-based interventions and policies for preventing and treating substance use and abuse among youths. Although some cross-cutting interventions addressing adolescent health are presented in the Community Preventive Services Task Force’s Community Guide,††† there currently is no specific category for marijuana use, as there is for alcohol and tobacco. The National Registry of Evidence-based Programs and Practices,§§§ a project of the federal Substance Abuse and Mental Health Services Administration, might be a potential alternative source for strategies that reduce marijuana use and prevent associated harms, but these strategies might not be sufficient for states with newly legalized retail marketplaces. In light of the limited evidence base, there is a need to identify individual, relationship, community, and societal determinants of youth substance use that would allow development of broad-based risk-reduction strategies. Continued surveillance would benefit from having a set of standard measures across jurisdictions to monitor the health impacts of retail marijuana sale legalization among states.
Language has emerged in no other species than humans, suggesting a profound obstacle to its evolution; maybe quite specific social conditions were prerequisite for the evolution of language- and symbol-ready hominins
The Role of Egalitarianism and Gender Ritual in the Evolution of Symbolic Cognition. Camilla Power. August 2019. Chp 19 in Handbook of Cognitive Archaeology, Routledge. https://www.researchgate.net/publication/335062001
Abstract: Are there constraints on the social conditions that could have given rise to language and symbolic cognition? Language has emerged in no other species than humans, suggesting a profound obstacle to its evolution. If language is seen as an aspect of cognition, limitations can be expected in terms of computational capacity. But if it is seen as fundamentally for communication, then the problems will be found in terms of social relationships. Below a certain threshold of cooperation and trust, no language or symbolic communication could evolve (Knight & Lewis, 2017a); this has been termed a “platform of trust” (Wacewicz, 2017).... In this chapter, I argue that quite specific social conditions were prerequisite for the evolution of language- and symbol-ready hominins. One of the requirements differentiating our ancestors from other African apes was a switch to mainly female philopatry – females living with their relatives, rather than dispersing at sexual maturity – coevolving with an increasing tendency to egalitarianism.... How did increasing egalitarianism affect males and potentially “feminize” male behavior for cooperative offspring care? How were male and female relations affected in the evolution of genus Homo and Homo sapiens?
We are capable of making accurate personality judgements in computer-mediated communication by means of even small cues like nicknames
The Name Is the Game: Nicknames as Predictors of Personality and Mating Strategy in Online Dating. Benjamin P. Lange et al. Front. Commun., February 12 2019. https://doi.org/10.3389/fcomm.2019.00003
Abstract
Objective: We investigated the communicative function of online dating nicknames. Our aim was to assess if it is possible to correctly guess personality traits of a user simply by reading his/her nickname.
Method: We had 69 nickname users (average age: 33.59 years, 36 female) complete questionnaires assessing their personality (Big 5 + narcissism) and mating strategy (short- vs. long-term). We then checked, using a total of 638 participants (average age: 26.83 years, 355 female), whether personality and mating strategy of the nickname users could be assessed correctly based only on the nickname. We also captured the motivation to contact the user behind a nickname and looked at linguistic features of the nicknames.
Results: We found that personality and mating strategy could be inferred from a nickname. Furthermore, going by trends, women were better at intersexual personality judgments, whereas men were better at intrasexual judgments. We also found several correlates of the motivation to contact the person behind the nickname. Among other factors, long nicknames seemed to deter people from contacting the nickname user.
Conclusions: Findings display that humans are capable of making accurate personality judgements in computer-mediated communication by means of even small cues like nicknames.
Introduction
Language-based face-to-face (ftf) interaction can be considered the most natural way of communication (Kock, 2004). New social media have transformed communication, though, as sender and receiver are not necessarily copresent in such a mediated context. However, communication in the digital world is still language-based, even when only in the form of written language (Koch et al., 2005).
Research on such computer-mediated communication (cmc) can be divided into different approaches. Two of them are: (1) the reduced-social-cues approach (rsc) (Sproull and Kiesler, 1986), and (2) the hyperpersonal communication approach (hp) (Walther, 1996). The first assumes that cmc filters out social context cues. The second emphasizes that cmc might surpass ftf communication, as the sender has the opportunity to optimize their self-representation while the receiver idealizes the sender on the basis of the available cues. Here lies the question of whether people are able to, and actually do, hide their “true selves,” that is their identity (e.g., personality), or whether they, despite being relatively anonymous, inevitably communicate aspects of their respective identity and personality that are in turn perceived by the receiver (Walther and Parks, 2002).
Sex or gender, respectively, are central features of one's identity and personality (e.g., Mealey, 2000; Ellis et al., 2008). As a matter of fact, sex has been central in cmc research. For instance, Guiller and Durndell (2007) found that in cmc men are more dominant than women, whereas women are more supportive than men—findings reminiscent of sex differences in ftf communication (Eckert and McConnell-Ginet, 2003).
A large body of research (e.g., Savicki et al., 1999; Thomson and Murachver, 2001; Koch et al., 2005) shows that only by reading text, people are able to guess the sex of the writers above chance. The same seems to be true for personality judgments (Park et al., 2015). Entire texts are not necessary, though. Lange et al. (2016b) used pseudonyms chosen by students in written exams, and had participants rate them on assumed sex of the user and other attributes. They found that sex could be guessed correctly above chance with a large effect size. Also, participants ascribed typical female and male attributes to the pseudonyms and even tried to retrieve information on the users' personality. It was also found that women, more than men, used diminutive suffixes in their pseudonyms (like -i in “cuti”). In line with these findings, Heisler and Crabill (2006) demonstrated that the majority of their participants considered themselves capable of correctly guessing the sex and age of the users of e-mail usernames. Moreover, their participants attempted to rate the supposed owners of the e-mail addresses also with respect to, among other aspects, their relationship status.
Not only is sex a matter of interest with respect to the digital world, the phenomenon of online dating is, too (Valkenburg and Peter, 2007). Considering that mate choice is one of the most important areas in social life (Buss, 2003) and that people are increasingly shifting their activities from the offline to the online world, it is not surprising that online dating has become a billion-dollar business (Sautter et al., 2010).
Human mating in general and sex differences in human mating have attracted numerous researchers and have produced a veritable deluge of related literature (e.g., Buss and Barnes, 1986; Buss, 1989; Buss and Schmitt, 1993; for an overview, see Buss, 2003, 2016; Schwarz and Hassebrauck, 2012). This research has, on the one hand, identified several characteristics that both sexes prefer in a mate (e.g., healthy), as well as those that are more preferred by women (e.g., good earning capacity, college graduate) and those more preferred by men (e.g., physically attractive) (Buss et al., 1990). The role of language in human mate choice has also been examined recently (e.g., Lange et al., 2014, 2016a). On the other hand, empirical mate choice research has documented that women are more exacting in mate choice decisions, while men face stronger same-sex competition (for an overview, see Buss, 2003). The first process, called intersexual selection, is the actual mate choice, which in most species occurs as female mate choice. That is, women, because of having higher obligatory costs (Trivers, 1972), are more selective, while men, whose obligatory costs are lower, compete more strongly with other men in order to be chosen. This is called intrasexual selection (for an overview, see Buss, 2003).
Another area of interest in mate choice research is the distinction between short-term mating (the search for an affair, a one-night stand, etc.) and long-term mating (the search for a committed, steady relationship) (Buss and Schmitt, 1993), which can be referred to as a person's mating strategy (Schmitt, 2005). This distinction is somewhat linked to females being choosier than males. As the costs for males are lower than for females, men show a tendency to be relatively indiscriminate in short-term mating. A bad mate choice imposes higher costs on women than on men—and this applies more to short-term than to long-term mating. Generally, women show a preference for a long-term mate (Buss and Schmitt, 1993). As a result, men for whom short-term mating is a particularly useful strategy might want to pretend to be interested in long-term mating, while in fact they are not. Thus, women should be particularly interested in detecting a man's mating strategy (Buss, 2003).
Not only dating in general but online dating as well has excited some research interest—among others, also with respect to rsc and hp (for an overview, see Finkel et al., 2012). It has been assumed, taking the hp perspective, that the cmc limitations in online dating can be compensated by language style and choice of words (Walther et al., 2005). While physical cues are missing in cmc, the importance of verbal cues might be rising. The question then might very well be, this time with respect to online dating: what about single words instead of entire texts?
As emphasized above, communication only by means of single words is even more limited than communicating through written texts. Still, those single words might communicate crucial information (Lange et al., 2016a). In accordance with findings on mate choice in “real life,” Whitty and Buchanan (2010) found that women were more attracted to online user names (hereinafter called nicknames) (e.g., in terms of the motivation to contact the person behind the name) that signaled intelligence, while men were more attracted to nicknames indicative of physical attractiveness. So the choice of a nickname in online dating can be used for impression management—just like hp would predict. Online dating is indeed an area in the digital world in which making a good first impression is essential (Whitty and Buchanan, 2010).
Apart from classical mate choice criteria, the personality of a potential mate is crucial, too (e.g., Buss et al., 1990; Botwin et al., 1997; Escorial and Martín-Buro, 2012). In this context, research by Back et al. (2008) is particularly relevant for the research presented in the article at hand. They retrieved personality scores of 599 participants (Big Five, e.g., extraversion; narcissism) and additionally asked them for their e-mail addresses. Back et al. (2008) then presented the e-mail names to 100 participants who judged the personality dimensions of the e-mail name users on the same personality items used before. Personality dimensions were detected correctly, with results being statistically significant for all dimensions except for extraversion. Back et al. (2008) also showed that personality ratings were linked to certain attributes of the e-mail address. For instance, the perception of conscientiousness was positively correlated with both the number of characters and dots the names consisted of, while number of digits was negatively correlated with it.
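To make that feature-correlation idea concrete, the following hypothetical Python sketch (not code from Back et al. or from the present study) extracts simple surface features from nicknames, namely character, dot and digit counts, and correlates each with an invented observer-rated trait.

```python
# Hypothetical illustration; sample nicknames, ratings, and feature choices are
# invented, in the spirit of the correlations reported by Back et al. (2008).
import pandas as pd
from scipy.stats import pearsonr

data = pd.DataFrame({
    "nickname": ["sunny.girl84", "mr.right", "xx_dark_lord_xx", "jane.doe.1990", "cuti"],
    "rated_conscientiousness": [3.8, 3.5, 2.1, 4.0, 2.9],  # toy observer means
})

features = pd.DataFrame({
    "n_chars": data["nickname"].str.len(),
    "n_dots": data["nickname"].str.count(r"\."),
    "n_digits": data["nickname"].str.count(r"\d"),
})

for col in features:
    r, p = pearsonr(features[col], data["rated_conscientiousness"])
    print(f"{col:>8}: r = {r:+.2f} (p = {p:.2f})")
```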
The current study had the objective of replicating the findings by Back et al. (2008) with respect to online dating as well as to extend them. Back et al. (2008) used e-mail names and had a general cmc context. We, on our part, wanted to focus more on nicknames. This was inspired by research on the psychology of pseudonyms (e.g., Lange et al., 2016b) as well as based on the following assumption: While e-mail addresses are often created based on the rule “first name.last name” (e.g., john.smith@…), nicknames are assumed to be more creative (cf. Whitty and Buchanan, 2010). Also unlike Back et al. (2008), we were interested in the context of online dating and mate choice. Whitty and Buchanan (2010) have already shown that such an approach is worthwhile. Still, the scarcity of such research calls for more studies of this kind.
The question might also be asked as to whether people are able to detect the mating strategy of a potential mate. It was also of interest whether the motivations for contacting a person behind a nickname, based only on the nickname, might differ (Whitty and Buchanan, 2010). Furthermore, we wanted to investigate whether one of the two sexes is better at judging women's and men's personality based on their nicknames. Mating is an area of social life where making a proper choice seems particularly important (Buss, 2003). So, it seemed of practical relevance to elucidate what mate choice-relevant information can be retrieved from an online dating nickname.
Finally, we were interested in the linguistic features of the nicknames, and the subsequent question whether we would find correlations between these features and other variables of interest (Back et al., 2008; Lange et al., 2016b).
We proposed the following hypothesis (cf. Back et al., 2008):
H1: People are able to correctly guess online daters' personality by means only of their nicknames. Under personality, we understood the Big Five dimensions, which are: openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (McCrae and John, 1992). The Big Five have been used quite often in research focusing on personality perceptions by means of certain cues (e.g., Küfner et al., 2010; Qui et al., 2015). As another personality dimension, we added narcissism, following the mentioned study by Back et al. (2008). Other researchers have also included this trait, which is one of the three traits of the so-called Dark Triad, in their research in order to elucidate whether it can be detected (e.g., Buffardi and Campbell, 2008; Vander Molen et al., 2018).
Furthermore, we had four research questions that were derived from mate choice research (see above) and other studies on the psychology of nicknames or usernames (Back et al., 2008; Whitty and Buchanan, 2010; Lange et al., 2016b):
RQ1: Are people able to correctly guess online daters' mating strategy by means only of their nicknames?
RQ2: What are the correlates of the motivation to contact a person behind a nickname?
RQ3: Does one sex show greater accuracy in personality judgments than the other?
RQ4: What are the linguistic correlates of the personality of the nickname users and how are they perceived? In other words, are linguistic features significant mediators of judgments?
A majority of the participants had been deceptive in therapy, and a majority were willing to be deceptive in future therapeutic contexts; participants were more likely to use white lies than other forms of deception in therapy
Deception in psychotherapy: Frequency, typology and relationship. Drew A. Curtis, Christian L. Hart. Counselling and Psychotherapy Research, September 9 2019. https://doi.org/10.1002/capr.12263
Abstract: Deception in therapy has been documented anecdotally through various narratives of therapists. The investigation of its occurrence within therapy has largely been overlooked. We explored the reported frequency of deception within psychotherapy, the types of deception used within therapy, the likelihood of people lying to a therapist compared to other groups of people, and client perceptions of the types of deception that therapists use. Ninety‐one participants were provided with a series of deception examples, asked questions about the use of these types of deception within therapy, and asked generally about their use of deception in therapy. We found that a majority of the participants had been deceptive in therapy, and a majority were willing to be deceptive in future therapeutic contexts. Participants were more likely to use white lies than other forms of deception in therapy. Lastly, participants were less likely to lie to therapists compared to strangers and acquaintances. Implications for research and practice are discussed.
1 INTRODUCTION
When people communicate with each other, there is typically a presumption of honesty; however, people lie (Levine, 2014). In classic diary studies, people report lying, on average, twice a day (DePaulo & Bell, 1996; DePaulo & Kashy, 1998; Kashy & DePaulo, 1996). However, recent research indicates that the distribution of lies is positively skewed, with a small set of people telling many lies and most people telling fewer than two lies per day (Serota & Levine, 2015). Deception takes on a variety of forms such as outright lies, exaggerations, omissions and subtle lies (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Vrij, 2000). While there are numerous forms of human deception, the common thread that ties them together is an intent to mislead others. Vrij (2008) discussed various definitions of deception that had been used in the past, noting their shortcomings. He ultimately submitted that deception is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (p. 15).
1.1 Background
Over the past several decades, there has been a tremendous amount of basic research investigating human deception (see Vrij, 2008). This research has examined deception in a variety of contexts including intimate relationships (Cole, 2001; Peterson, 1996), in the workplace (Hart, Hudson, Fillmore, & Griffith, 2006; Shulman, 2011) and in forensic areas (Granhag & Strömwall, 2004). However, the prevalence of deception within psychotherapeutic settings has been mostly overlooked. In fact, it has been suggested that “surprisingly little has been written in the counseling journals on the topic of lying” (Miller, 1992, p. 25).
While psychotherapy involves an exchange between a therapist and a client, often perceived as honest (Curtis & Hart, 2015; Kottler & Carlson, 2011), deception is occasionally found woven into components of practice. Deceitfulness is one of the criteria for antisocial personality disorder (301.7) found in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM‐5; American Psychiatric Association; APA, 2013). The DSM‐5 also terms lying, motivated by external incentive, as malingering (V65.2). Within psychometrics, deception has been documented as a measure or scale in some assessments (e.g. Greene, 2000; Guenther & Otto, 2010). The Minnesota Multiphasic Personality Inventory‐2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 2001) contains scales that reveal whether a client is attempting to lie or be deceptive in different ways (Greene, 2000). The scant research investigating deception in therapy has focused on psychologists’ ability to detect deception, finding that counsellors and psychologists achieve 62%–85% accuracy rates when attempting to discern lies from truths, where 50% would represent chance levels of accuracy (e.g. Briggs, 1992; Ekman, O'Sullivan, & Frank, 1999). However, meta‐analyses and other literature suggest that accuracy for detecting deception is not much higher than chance for laypeople (54%) and law enforcement professionals (56%; Bond & DePaulo, 2006; Vrij, 2000).
More recently, there has been a re‐emergence of research and literature regarding deception in therapy. One study investigated therapists’ beliefs and attitudes towards client deception (Curtis & Hart, 2015). Curtis and Hart (2015) recruited 112 therapists and asked them to identify their beliefs about indicators of deception and subsequently identify their attitudes towards clients who lie. The results found that therapists possessed a number of inaccurate beliefs about actual indicators of deception (e.g. eye gaze aversion when lying), held a number of negative attitudes towards client deception (e.g. liking the client less) and lied to their clients in therapy.
While investigating psychologists’ ability to detect deception and their beliefs and attitudes towards client deception are worthwhile pursuits, the prevalence of client deception within psychotherapy has remained largely unstudied. Some literature has referenced pathological aspects of lying, termed pseudologia phantastica (e.g. Garlipp, 2017; Muzinic, Kozaric‐Kovacic, & Marinic, 2016). Additionally, in their book, Duped, Kottler and Carlson (2011) documented a number of anecdotal accounts of psychotherapists discovering that their clients had lied in therapy. Some of these reports included fabricating an entire therapy experience (Grzegorek, 2011) and intentionally omitting information about having a terminal illness (Rochlan, 2011). Thus, there is clear evidence that some clients do deceive their therapists.
Even though psychologists’ stories provide anecdotal evidence for the presence of deception within psychotherapy, there remains a dearth of empirical investigation. One recent study explored the occurrence of lying in psychotherapy, finding that 93% of 547 psychotherapy patients reported having lied to a therapist (Blanchard & Farber, 2016). Because the present study was conducted prior to the Blanchard and Farber (2016) study, it was not designed as a replication or intended for direct comparison.
In the current study, we sought to broaden the understanding of deception in therapy. We collected empirical data on the frequency of deception in therapy, the types of deception used and the influence of relational roles on deception. Given the previously noted research showing that many people report lying in their close relationships and in therapy, we predicted that the majority (>50%) of participants who had been in therapy would report that they had been deceptive within therapy at least once. Further, we predicted that the use of white lies and omissions would be more prevalent than other types of deception. Previous studies have found that people tell fewer lies to people with whom they are in emotionally close relationships (Vrij, 2008). Based on those findings, we predicted that participants would report being more likely to lie to a therapist than a significant other and family member, and we predicted that they would be less likely to lie to a therapist than social acquaintances and complete strangers. Based on the findings of Curtis (2013) that therapists believe clients are more likely to lie in earlier compared to later sessions, we predicted that people would report more willingness to lie to a therapist during the first session compared to subsequent sessions, due to the lack of emotional connection early in the relationship. Lastly, we predicted that people would be more likely to lie to a therapist that they did not like compared to a therapist they did like.
Abstract: Deception in therapy has been documented anecdotally through various narratives of therapists. The investigation of its occurrence within therapy has largely been overlooked. We explored the reported frequency of deception within psychotherapy, the types of deception used within therapy, the likelihood of people lying to a therapist compared to other groups of people, and client perceptions of the types of deception that therapists use. Ninety‐one participants were provided with a series of deception examples, asked questions about the use of these types of deception within therapy, and asked generally about their use of deception in therapy. We found that a majority of the participants had been deceptive in therapy, and a majority were willing to be deceptive in future therapeutic contexts. Participants were more likely to use white lies than other forms of deception in therapy. Lastly, participants were less likely to lie to therapists compared to strangers and acquaintances. Implications for research and practice are discussed.
1 INTRODUCTION
When people communicate with each other, there is typically a presumption of honesty; however, people lie (Levine, 2014). In classic diary studies, people report lying, on average, twice a day (DePaulo & Bell, 1996; DePaulo & Kashy, 1998; Kashy & DePaulo, 1996). However, recent research indicates that the distribution of lies is positively skewed, with a small set of people telling many lies and most people telling fewer than two lies per day (Serota & Levine, 2015). Deception takes on a variety of forms such as outright lies, exaggerations, omissions and subtle lies (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996; Vrij, 2000). While there are numerous forms of human deception, the common thread that ties them together is an intent to mislead others. Vrij (2008) discussed various definitions of deception that had been used in the past, noting their shortcomings. He ultimately submitted that deception is “a successful or unsuccessful deliberate attempt, without forewarning, to create in another a belief which the communicator considers to be untrue” (p. 15).
1.1 Background
Over the past several decades, there has been a tremendous amount of basic research investigating human deception (see Vrij, 2008). This research has examined deception in a variety of contexts including intimate relationships (Cole, 2001; Peterson, 1996), in the workplace (Hart, Hudson, Fillmore, & Griffith, 2006; Shulman, 2011) and in forensic areas (Granhag & Strömwall, 2004). However, the prevalence of deception within psychotherapeutic settings has been mostly overlooked. In fact, it has been suggested that “surprisingly little has been written in the counseling journals on the topic of lying” (Miller, 1992, p. 25).
While psychotherapy involves an exchange between a therapist and a client, often perceived as honest (Curtis & Hart, 2015; Kottler & Carlson, 2011), deception is occasionally found woven into components of practice. Deceitfulness is one of the criteria for antisocial personality disorder (301.7) found in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM‐5; American Psychiatric Association; APA, 2013). The DSM‐5 also terms lying, motivated by external incentive, as malingering (V65.2). Within psychometrics, deception has been documented as a measure or scale in some assessments (e.g. Greene, 2000; Guenther & Otto, 2010). The Minnesota Multiphasic Personality Inventory‐II (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 2001) contains scales that reveal if a client is attempting to lie or be deceptive in different manners (Greene, 2000). The scant research investigating deception in therapy has focused on psychologists’ ability to detect deception, finding that counsellors and psychologists achieve 62%–85% accuracy rates when attempting to discern lies from truths, where 50% would represent chance levels of accuracy (e.g. Briggs, 1992; Ekman, O'Sullivan, & Frank, 1999). However, meta‐analyses and other literature suggest that accuracy for detecting deception is not much higher than chance for laypeople (54%) and law enforcement professionals (56%; Bond & DePaulo, 2006; Vrij, 2000).
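As a rough illustration of what "not much higher than chance" means for accuracy figures such as the 54% cited above, here is a minimal sketch that tests an observed accuracy rate against the 50% chance level; the trial count is hypothetical and not taken from the cited studies.

```python
# Illustration only: test whether an observed lie-detection accuracy differs
# from the 50% chance level. The trial count is hypothetical, not taken from
# Bond & DePaulo (2006) or the other studies cited above.
from scipy.stats import binomtest

n_trials = 200                      # hypothetical number of truth/lie judgments
observed_accuracy = 0.54            # lay accuracy figure reported in the text
n_correct = round(observed_accuracy * n_trials)

result = binomtest(n_correct, n_trials, p=0.5, alternative="two-sided")
print(f"{n_correct}/{n_trials} correct ({n_correct / n_trials:.0%}); "
      f"p-value against chance = {result.pvalue:.3f}")
```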
More recently, there has been a re‐emergence of research and literature regarding deception in therapy. One study investigated therapists’ beliefs and attitudes towards client deception (Curtis & Hart, 2015). Curtis and Hart (2015) recruited 112 therapists and asked them to identify their beliefs about indicators of deception and subsequently identify their attitudes towards clients who lie. The results found that therapists possessed a number of inaccurate beliefs about actual indicators of deception (e.g. eye gaze aversion when lying), held a number of negative attitudes towards client deception (e.g. liking the client less) and lied to their clients in therapy.
While investigating psychologists’ ability to detect deception and their beliefs and attitudes towards client deception are worthwhile pursuits, the prevalence of client deception within psychotherapy has remained largely unstudied. Some literature has referenced pathological aspects of lying, termed pseudologia phantastica (e.g. Garlipp, 2017; Muzinic, Kozaric‐Kovacic, & Marinic, 2016). Additionally, in their book, Duped, Kottler and Carlson (2011) documented a number of anecdotal accounts of psychotherapists discovering that their clients had lied in therapy. Some of these reports included fabricating an entire therapy experience (Grzegorek, 2011) and intentionally omitting information about having a terminal illness (Rochlan, 2011). Thus, there is clear evidence that some clients do deceive their therapists.
Even though psychologists’ stories provide anecdotal evidence for the presence of deception within psychotherapy, there remains a dearth of empirical investigation. One recent study explored the occurrence of lying in psychotherapy, finding that 93% of 547 psychotherapy patients reported having lied to a therapist (Blanchard & Farber, 2016). Because the present study was conducted before the Blanchard and Farber (2016) study, it was not designed as a replication or intended for direct comparison.
In the current study, we sought to broaden the understanding of deception in therapy. We collected empirical data on the frequency of deception in therapy, the types of deception used and the influence of relational roles on deception. Given the previously noted research showing that many people report lying in their close relationships and in therapy, we predicted that the majority (>50%) of participants who had been in therapy would report that they had been deceptive within therapy at least once. Further, we predicted that the use of white lies and omissions would be more prevalent than other types of deception. Previous studies have found that people tell fewer lies to people with whom they are in emotionally close relationships (Vrij, 2008). Based on those findings, we predicted that participants would report being more likely to lie to a therapist than a significant other and family member, and we predicted that they would be less likely to lie to a therapist than social acquaintances and complete strangers. Based on the findings of Curtis (2013) that therapists believe clients are more likely to lie in earlier compared to later sessions, we predicted that people would report more willingness to lie to a therapist during the first session compared to subsequent sessions, due to the lack of emotional connection early in the relationship. Lastly, we predicted that people would be more likely to lie to a therapist that they did not like compared to a therapist they did like.
What are the Price Effects of Trade? Trade with China increased U.S. consumer surplus by about $400,000 per displaced job, and product categories catering to low-income consumers experienced larger price declines
What are the Price Effects of Trade? Evidence from the U.S. and Implications for Quantitative Trade Models. Xavier Jaravel, Erick Sager. Centre for Economic Policy Research, DP13902, August 2019. cepr.org/active/publications/discussion_papers/dp.php?dpno=13902
Abstract: This paper finds that U.S. consumer prices fell substantially due to increased trade with China. With comprehensive price micro-data and two complementary identification strategies, we estimate that a 1pp increase in import penetration from China causes a 1.91% decline in consumer prices. This price response is driven by declining markups for domestically-produced goods, and is one order of magnitude larger than in standard trade models that abstract from strategic price-setting. The estimates imply that trade with China increased U.S. consumer surplus by about $400,000 per displaced job, and that product categories catering to low-income consumers experienced larger price declines.
Keyword(s): Markups, prices, Trade
JEL(s): F10, F13, F14
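As a back-of-the-envelope reading of the headline estimate, the sketch below applies the reported 1.91% price decline per percentage point of import penetration to an illustrative penetration change; the penetration increase and the number of displaced jobs are hypothetical, and the estimate is assumed to scale linearly.

```python
# Back-of-the-envelope use of the paper's headline estimates. The penetration
# increase and job count are hypothetical; linear scaling is assumed.
PRICE_RESPONSE_PER_PP = -1.91        # % change in consumer prices per 1pp of import penetration
SURPLUS_PER_DISPLACED_JOB = 400_000  # USD, as reported in the abstract

penetration_increase_pp = 5.0        # hypothetical rise in import penetration
displaced_jobs = 100_000             # hypothetical number of displaced jobs

print(f"Implied price change: {PRICE_RESPONSE_PER_PP * penetration_increase_pp:.1f}%")
print(f"Implied consumer surplus: ${SURPLUS_PER_DISPLACED_JOB * displaced_jobs / 1e9:.0f} billion")
```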
Some Lie a Lot: Most people are fairly honest, but there are prolific liars among us
Development of the Lying in Everyday Situations Scale. Christian L Hart et al. The American Journal of Psychology 132(3):343-352, September 2019. DOI: 10.5406/amerjpsyc.132.3.0343
Abstract: Deception researchers have developed various scales that measure the use of lying in specific contexts, but there are limited tools that measure the use of lies more broadly across the various contexts of day-today life. We developed a questionnaire that assesses the use of various forms of lying, including protecting others, image enhancement, saving face, avoiding punishment, vindictiveness, privacy, entertainment, avoiding confrontation, instrumental gain, and maintaining and facilitating relationships. The results of a factor analysis brought our original 45-item scale down to a two-dimensional, 14-item scale that we have titled the Lying in Everyday Situations (LiES) scale. In three studies, the concurrent validity of the scale was assessed with several domain-specific lying scales, two Machiavellianism scales, a social desirability scale, and reports of actual lie frequency over a 24-hour period. The scale was also assessed for interitem consistency (Cronbach's α) and test-retest reliability. We found that the LiES scale was a reliable and valid measure of lying. The LiES scale may be a useful tool for assessing the general tendency to lie across various contexts.
Popular version... Some Lie a Lot: Most people are fairly honest, but there are prolific liars among us. Christian L Hart. Psychology Today, Oct 10, 2019. https://www.psychologytoday.com/intl/blog/the-nature-deception/201910/some-lie-lot
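For readers unfamiliar with the internal-consistency statistic reported for the LiES scale, here is a minimal sketch of computing Cronbach's α from an item-response matrix; the responses are simulated, and only the item count (14) mirrors the final scale.

```python
# Cronbach's alpha on simulated data. Illustration only: the 14 items mirror
# the final LiES scale length, but the responses are randomly generated.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 300, 14

# Simulate correlated Likert-style responses driven by a shared latent trait.
trait = rng.normal(size=(n_respondents, 1))
noise = rng.normal(scale=1.0, size=(n_respondents, n_items))
items = np.clip(np.round(3 + trait + noise), 1, 5)

k = n_items
item_variances = items.var(axis=0, ddof=1)
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```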
Check also Deception in psychotherapy: Frequency, typology and relationship. Drew A. Curtis, Christian L. Hart. Counselling and Psychotherapy Research, September 9 2019. https://www.bipartisanalliance.com/2019/10/a-majority-of-participants-had-been.html
And, from 2009, The Prevalence of Lying in America: Three Studies of Self‐Reported Lies. Kim B. Serota, Timothy Levine, Franklin J. Boster. Human Communication Research 36(1):2 - 25, December 2009. DOI: 10.1111/j.1468-2958.2009.01366.x
Abstract: This study addresses the frequency and the distribution of reported lying in the adult population. A national survey asked 1,000 U.S. adults to report the number of lies told in a 24-hour period. Sixty percent of subjects report telling no lies at all, and almost half of all lies are told by only 5% of subjects; thus, prevalence varies widely and most reported lies are told by a few prolific liars. The pattern is replicated in a reanalysis of previously published research and with a student sample. Substantial individual differences in lying behavior have implications for the generality of truth-lie base rates in deception detection experiments. Explanations concerning the nature of lying and methods for detecting lies need to account for this variation.
And Sexual Coercion by Women: The Influence of Pornography and Narcissistic and Histrionic Personality Disorder Traits. Abigail Hughes, Gayle Brewer, Roxanne Khan. Archives of Sexual Behavior, October 7 2019. https://www.bipartisanalliance.com/2019/10/female-perpetrators-and-postrefusal.html
And “Sorry, I already have a boyfriend”: Masculine honor beliefs and perceptions of women’s use of deceptive rejection behaviors to avert unwanted romantic advances. Evelyn Stratmoen, Emilio D. Rivera, Donald A. Saucier. Journal of Social and Personal Relationships, August 7, 2019. https://www.bipartisanalliance.com/2019/10/sorry-i-already-have-boyfriend.html
And Parenting by lying in childhood is associated with negative developmental outcomes in adulthood. Peipei Setoh et al. Journal of Experimental Child Psychology, September 26 2019, 104680. https://www.bipartisanalliance.com/2019/09/childhood-experience-of-parents-lying.html
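To visualise the kind of skew described in the Serota, Levine and Boster abstract above, where a few prolific liars account for most lies, the sketch below draws lie counts from a zero-inflated, long-tailed distribution and computes the share told by the top 5% of respondents; the distribution parameters are invented, not fitted to their survey.

```python
# Positively skewed lie distribution of the kind Serota et al. describe.
# Parameters are invented for illustration, not fitted to their survey data.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
tells_any_lies = rng.random(n) < 0.40                          # ~60% report zero lies
lie_counts = np.where(tells_any_lies,
                      rng.negative_binomial(1, 0.25, size=n),  # long right tail
                      0)

sorted_counts = np.sort(lie_counts)[::-1]
top5 = sorted_counts[: int(0.05 * n)]
print(f"Mean lies per day: {lie_counts.mean():.2f}")
print(f"Share of all lies told by the top 5%: {top5.sum() / max(lie_counts.sum(), 1):.0%}")
```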
Thursday, October 10, 2019
US private equity buyouts 1980-2013: Employment at targets shrinks 13% over two years in buyouts of publicly listed firms but expands 13% in buyouts of privately held firms; labor productivity rises 8% at targets over 2 years
Davis, Steven J. and Haltiwanger, John C. and Handley, Kyle and Lerner, Josh and Lipsius, Ben and Miranda, Javier, The Economic Effects of Private Equity Buyouts (October 7, 2019). SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3465723
Abstract: We examine thousands of U.S. private equity (PE) buyouts from 1980 to 2013, a period that saw huge swings in credit market tightness and GDP growth. Our results show striking, systematic differences in the real-side effects of PE buyouts, depending on buyout type and external conditions. Employment at target firms shrinks 13% over two years in buyouts of publicly listed firms but expands 13% in buyouts of privately held firms, both relative to contemporaneous outcomes at control firms. Labor productivity rises 8% at targets over two years post buyout (again, relative to controls), with large gains for both public-to-private and private-to-private buyouts. Target productivity gains are larger yet for deals executed amidst tight credit conditions. A post-buyout widening of credit spreads or slowdown in GDP growth lowers employment growth at targets and sharply curtails productivity gains in public-to-private and divisional buyouts. Average earnings per worker fall by 1.7% at target firms after buyouts, largely erasing a pre-buyout wage premium relative to controls. Wage effects are also heterogeneous. In these and other respects, the economic effects of private equity vary greatly by buyout type and with external conditions.
Keywords: private equity, buyouts
JEL Classification: G24, J31
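The employment effects above are measured relative to control firms. The snippet below is a minimal difference-in-differences illustration of that comparison, using entirely made-up averages; it is not the authors' estimator, which relies on matched controls and a much richer specification.

```python
# Minimal difference-in-differences illustration with made-up averages.
# Not the authors' estimator (their analysis uses matched controls and a
# far richer specification).
import numpy as np

# Average log employment before and after the buyout window.
target_pre, target_post = 6.00, 5.90     # hypothetical buyout targets
control_pre, control_post = 6.00, 6.03   # hypothetical control firms

did = (target_post - target_pre) - (control_post - control_pre)
print(f"DiD estimate: {did:.2f} log points "
      f"(~{100 * (np.exp(did) - 1):.1f}% employment effect)")
```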
From 2018... Some said prehistoric Africa was mankind's cradle & prehistoric Europe human intelligence's cradle; the African Middle Stone Age, going back 300,000 years, is challenging this view
From 2018... Symbolic arts and rituals in the African Middle Stone Age. E. John Collins. UTAFITI, Vol. 13, No. 1, 2018. http://www.journals.udsm.ac.tz/index.php/uj/article/viewFile/2329/2413
Abstract: Since the 1950s the huge amount of archaeological research done in Africa has shown that Homo sapiens originally came from Africa rather than Western Eurasia as was previously thought. Nevertheless, some Western scholars retain a Eurocentric bias by suggesting that humans only became fully intelligent after they migrated out of Africa and settled in Europe where, during the ‘Upper Palaeolithic Transition’ around 45,000 years ago, there was an abrupt advance in human neural wiring. Their evidence is the relatively sudden change from Middle Palaeolithic to more advanced Upper Palaeolithic tools and the appearance of the spectacular figurative cave art of Europe. This mental revolution was initially believed to have occurred in ‘Cro-Magnon Man’ who lived in Europe and Western Eurasia 45,000–40,000 years ago and was considered to be the first human to have the cross-domain cognition and enhanced memory necessary for a sophisticated language and symbolic behaviour. In short, although after the 1950s archaeologists generally have acknowledged that prehistoric Africa was the cradle of mankind, some still insist that prehistoric Europe was the cradle of human intelligence. New research on the African Middle Stone Age (MSA), that itself goes back 300,000 years, is challenging this view. This paper provides some examples of symbolic, ritual and artistic behaviour, and indeed advanced tool making that took place during this period and up to around 60,000 years ago, long before the appearance of Cro-Magnon Man.
History backfires: Reminders of past injustices against women undermine support for workplace policies promoting women
History backfires: Reminders of past injustices against women undermine support for workplace policies promoting women. Ivona Hideg, Anne E. Wilson. Organizational Behavior and Human Decision Processes, October 10 2019. https://doi.org/10.1016/j.obhdp.2019.10.001
Highlights
• Reminders of past injustice toward women undermine men’s support for an EE policy.
• Undermined support is due to men’s denial of current gender discrimination.
• Reminders of past injustice toward women do not influence women’s reactions to EE.
• Information about women’s advancement mitigates men’s negative reactions to EE.
• Men’s undermined support for EE is further mediated by lower collective self-esteem.
Abstract: Public discourse on current inequalities often invokes past injustice endured by minorities. This rhetoric also sometimes underlies contemporary equality policies. Drawing on social identity theory and the employment equity literature, we suggest that reminding people about past injustice against a disadvantaged group (e.g., women) can invoke social identity threat among advantaged group members (e.g., men) and undermine support for employment equity (EE) policies by fostering the belief that inequality no longer exists. We find support for our hypotheses in four studies examining Canadian (three studies) and American (one study) EE policies. Overall, we found that reminders of past injustice toward women undermined men’s support for an EE policy promoting women by heightening their denial of current gender discrimination. Supporting a social identity account, men’s responses were mediated by collective self-esteem, and were attenuated when threat was mitigated. Reminders of past injustice did not influence women’s support for the EE policy.
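Because the abstract reports that the effect on support is mediated by denial of current discrimination, here is a bare-bones product-of-coefficients mediation sketch on simulated data; it illustrates the general technique only, not the authors' models, and all effect sizes are invented.

```python
# Bare-bones product-of-coefficients mediation on simulated data.
# Illustrates the general technique only; all effect sizes are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
reminder = rng.integers(0, 2, n).astype(float)        # 0 = control, 1 = past-injustice reminder
denial = 0.5 * reminder + rng.normal(size=n)          # hypothetical mediator
support = -0.4 * denial + rng.normal(size=n)          # hypothetical outcome (no direct effect)

a = sm.OLS(denial, sm.add_constant(reminder)).fit().params[1]     # reminder -> denial
X = sm.add_constant(np.column_stack([reminder, denial]))
b = sm.OLS(support, X).fit().params[2]                            # denial -> support
print(f"Indirect (mediated) effect a*b = {a * b:.3f}")
```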
Time spent using social media was not related to individual changes in depression or anxiety over 8 years, even in the transition between adolescence and emerging adulthood; no sex differences observed
Does time spent using social media impact mental health?: An eight year longitudinal study. Sarah M. Coyne et al. Computers in Human Behavior, October 10 2019, 106160. https://doi.org/10.1016/j.chb.2019.106160
Highlights
• Time spent using social media was not related to individual changes in depression or anxiety over 8 years.
• This lack of a relationship was found even in the transition between adolescence and emerging adulthood.
• Results were not stronger for girls or boys.
Abstract: Many studies have found a link between time spent using social media and mental health issues, such as depression and anxiety. However, the existing research is plagued by cross-sectional research and lacks analytic techniques examining individual change over time. The current research involves an 8-year longitudinal study examining the association between time spent using social media and depression and anxiety at the intra-individual level. Participants included 500 adolescents who completed once-yearly questionnaires between the ages of 13 and 20. Results revealed that increased time spent on social media was not associated with increased mental health issues across development when examined at the individual level. Hopefully these results can move the field of research beyond its past focus on screen time.
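The methodological point here is the within-person (intra-individual) level of analysis. The sketch below shows the standard person-mean-centering step that separates wave-to-wave change from stable between-person differences; the data are simulated, not the authors' panel.

```python
# Person-mean centering: separating within-person change from stable
# between-person differences. Simulated data, not the authors' panel.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n_people, n_waves = 200, 8
person = np.repeat(np.arange(n_people), n_waves)
social_media = rng.normal(3, 1, n_people)[person] + rng.normal(0, 0.5, n_people * n_waves)
depression = rng.normal(0, 1, n_people)[person] + rng.normal(0, 1, n_people * n_waves)

df = pd.DataFrame({"person": person, "sm": social_media, "dep": depression})
df["sm_between"] = df.groupby("person")["sm"].transform("mean")  # stable person average
df["sm_within"] = df["sm"] - df["sm_between"]                    # wave-to-wave deviation

print("within-person correlation:", round(df["sm_within"].corr(df["dep"]), 3))
```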
Beauty Is in the Eye of the Beholder: The Appraisal of Facial Attractiveness Requires Conscious Awareness, Contrary to Suggestions
Beauty Is in the Eye of the Beholder: The Appraisal of Facial Attractiveness and Its Relation to Conscious Awareness. Myron Tsikandilakis, Persefoni Bali, Peter Chapman. Perception, December 19, 2018. https://doi.org/10.1177/0301006618813035
Abstract: Previous research suggests that facial attractiveness relies on features such as symmetry, averageness and above-average sexual dimorphic characteristics. Due to the evolutionary and sociobiological value of these characteristics, it has been suggested that attractiveness can be processed in the absence of conscious awareness. This raises the possibility that attractiveness can also be appraised without conscious awareness. In this study, we addressed this hypothesis. We presented neutral and emotional faces that were rated high, medium and low for attractiveness during a pilot experimental stage. We presented these faces for 33.33 ms with backwards masking to a black and white pattern for 116.67 ms and measured face-detection and emotion-discrimination performance, and attractiveness ratings. We found that high-attractiveness faces were detected and discriminated more accurately and rated higher for attractiveness compared with other appearance types. A Bayesian analysis of signal detection performance indicated that faces were not processed significantly at-chance. Further assessment revealed that correct detection (hits) of a presented face was a necessary condition for reporting higher ratings for high-attractiveness faces. These findings suggest that the appraisal of attractiveness requires conscious awareness.
Keywords: attractiveness, masking, awareness
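Because the conclusion rests on face-detection performance not being at chance, here is a compact sketch of the standard signal detection index d′ computed from hit and false-alarm rates; the counts are hypothetical, not the study's data.

```python
# Standard d-prime from hit and false-alarm rates.
# The trial counts are hypothetical, not the study's data.
from scipy.stats import norm

hits, misses = 70, 30                        # trials where a masked face was presented
false_alarms, correct_rejections = 25, 75    # trials with no face

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)     # z(hits) - z(false alarms)
print(f"hit rate = {hit_rate:.2f}, false-alarm rate = {fa_rate:.2f}, d' = {d_prime:.2f}")
```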
The magnitude of sex differences in verbal episodic memory increases with social progress: Data from 54 countries across 40 years
Asperholm M, Nagar S, Dekhtyar S, Herlitz A (2019) The magnitude of sex differences in verbal episodic memory increases with social progress: Data from 54 countries across 40 years. PLoS ONE 14(4): e0214945. https://doi.org/10.1371/journal.pone.0214945
Abstract: Sex differences in episodic memory have been reported. We investigate (1) the existence of sex differences in verbal and other episodic memory tasks in 54 countries, and (2) the association between the time- and country-specific social progress indicators (a) female to male ratio in education and labor force participation, (b) population education and employment, and (c) GDP per capita, and magnitude of sex differences in verbal episodic memory tasks. Data were retrieved from 612 studies, published 1973–2013. Results showed that females outperformed (Cohen’s d > 0) males in verbal (42 out of 45 countries) and other (28 out of 45 countries) episodic memory tasks. Although all three social progress indicators were, separately, positively associated with the female advantage in verbal episodic memory performance, only population education and employment remained significant when considering the social indicators together. Results suggest that women’s verbal episodic memory performance benefits more than men’s from education and employment.
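For reference, the Cohen's d used to quantify the female advantage is a standardized mean difference; below is a minimal computation on hypothetical recall scores (not the meta-analytic data).

```python
# Cohen's d as a standardized mean difference.
# The recall scores are hypothetical, not the meta-analytic data.
import numpy as np

females = np.array([12, 14, 13, 15, 16, 14, 13], dtype=float)  # e.g. words recalled
males = np.array([11, 13, 12, 14, 12, 13, 11], dtype=float)

pooled_sd = np.sqrt(((len(females) - 1) * females.var(ddof=1) +
                     (len(males) - 1) * males.var(ddof=1)) /
                    (len(females) + len(males) - 2))
d = (females.mean() - males.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f} (d > 0 indicates a female advantage)")
```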
Wednesday, October 9, 2019
We Are Not Competent at Combining Probability Forecasts: 60% and 60% Is 60%, but Likely and Likely Is Very Likely
Mislavsky, Robert and Gaertig, Celia, Combining Probability Forecasts: 60% and 60% Is 60%, but Likely and Likely Is Very Likely (September 16, 2019). SSRN: http://dx.doi.org/10.2139/ssrn.3454796
Abstract: How do we combine others’ probability forecasts? Prior research has shown that when advisors provide numeric probability forecasts, people typically average them (i.e., they move closer to the average advisor’s forecast). However, what if the advisors say that an event is “likely” or “probable?” In 7 studies (N = 6,732), we find that people “count” verbal probabilities (i.e., they move closer to certainty than any individual advisor’s forecast). For example, when the advisors both say an event is “likely,” participants will say that it is “very likely.” This effect occurs for both probabilities above and below 50%, for hypothetical scenarios and real events, and when presenting the others’ forecasts simultaneously or sequentially. We also show that this combination strategy carries over to subsequent consumer decisions that rely on advisors’ likelihood judgments. We find inconsistent evidence on whether people are using a counting strategy because they believe that a verbal forecast from an additional advisor provides more new information than a numerical forecast from an additional advisor. We also discuss and rule out several other candidate mechanisms for our effect.
Keywords: uncertainty, forecasting, verbal probabilities, combining judgments, combining forecasts, predictions
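The contrast driving the paper, averaging numeric forecasts versus "counting" verbal ones toward certainty, can be made concrete with a toy example; the counting rule below is a simplified stand-in for the pattern the authors describe, not their formal model.

```python
# Two stylized ways of combining advisors' probability forecasts.
# The "counting" rule is a simplified stand-in for the pattern the authors
# describe, not their formal model.
def average_numeric(forecasts):
    """Typical combination of numeric probabilities: move to the mean."""
    return sum(forecasts) / len(forecasts)

def count_toward_certainty(forecasts):
    """Treat agreeing forecasts as independent evidence (odds multiplication),
    which pushes the combined estimate past any single advisor's forecast."""
    odds = 1.0
    for p in forecasts:
        odds *= p / (1 - p)
    return odds / (1 + odds)

advisors = [0.60, 0.60]
print(average_numeric(advisors))                    # 0.60 -- "60% and 60% is 60%"
print(round(count_toward_certainty(advisors), 2))   # 0.69 -- "likely and likely is very likely"
```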
Higher levels of physical activity (outdoor play & sport participation) were associated with greater white matter microstructure in children; no association was observed between screen time and white matter microstructure
Associations of physical activity and screen time with white matter microstructure in children from the general population. María Rodriguez-Ayllon et al. NeuroImage, October 9 2019, 116258. https://doi.org/10.1016/j.neuroimage.2019.116258
Highlights
• Higher levels of physical activity were associated with greater white matter microstructure in children.
• Outdoor play and sport participation were specifically related to white matter microstructure.
• No association was observed between screen time and white matter microstructure.
Abstract: Physical activity and sedentary behaviors have been linked to a variety of general health benefits and problems. However, few studies have examined how physical activity during childhood is related to brain development, with the majority of work to date focusing on cardio-metabolic health. This study examines the association between physical activity and screen time with white matter microstructure in the general pediatric population. In a sample of 2,532 children (10.12 ± 0.58 years; 50.04% boys) from the Generation R Study, a population-based cohort in Rotterdam, the Netherlands, we assessed physical activity and screen time using parent-reported questionnaires. Magnetic resonance imaging of white matter microstructure was conducted using diffusion tensor imaging. Total physical activity was positively associated with global fractional anisotropy (β = 0.057, 95% CI = 0.016, 0.098, p = 0.007) and negatively associated with global mean diffusivity (β = −0.079, 95% CI = −0.120, −0.038, p < 0.001), two commonly derived scalar measures of white matter microstructure. Two components of total physical activity, outdoor play and sport participation, were positively associated with global fractional anisotropy (β = 0.041, 95% CI=(0.000, 0.083), p = 0.047; β = 0.053, 95% CI=(0.010, 0.096), p = 0.015 respectively) and inversely associated with global mean diffusivity (β = −0.074, 95% CI= (−0.114, −0.033), p < 0.001; β = −0.043, 95% CI=(-0.086, 0.000), p = 0.049 respectively). No associations were observed between screen time and white matter microstructure (p > 0.05). This study provides new evidence that physical activity is modestly associated with white matter microstructure in children. In contrast, complementing other recent evidence on cognition, screen time was not associated with white matter microstructure. Causal inferences from these modest associations must be interpreted cautiously in the absence of longitudinal data. However, these data still offer a promising avenue for future work to explore to what extent physical activity may promote healthy white matter development.
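The coefficients quoted above are standardized betas. As a minimal illustration of how such a coefficient and its 95% confidence interval are obtained, here is a sketch on simulated, z-scored data; it is not the Generation R analysis, which also adjusts for covariates.

```python
# Standardized regression coefficient (beta) with a 95% CI on simulated,
# z-scored data. Not the Generation R analysis, which adjusts for covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2532
activity = rng.normal(size=n)                     # hypothetical physical activity
global_fa = 0.06 * activity + rng.normal(size=n)  # hypothetical global FA

z = lambda x: (x - x.mean()) / x.std(ddof=1)
model = sm.OLS(z(global_fa), sm.add_constant(z(activity))).fit()
beta = model.params[1]
lo, hi = model.conf_int()[1]
print(f"beta = {beta:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```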
Following a prolonged handshake (vs. a normal length or no handshake), participants showed less interactional enjoyment, as indicated by less laughing; also showed anxiety and behavioral freezing
Effects of Handshake Duration on Other Nonverbal Behavior. Emese Nagy et al. Perceptual and Motor Skills, October 8, 2019. https://doi.org/10.1177/0031512519876743
Abstract: Although detailed descriptions of proper handshakes partly comprise many etiquette books, how a normal handshake can be described, its proper duration, and the consequences of violating handshake expectations remain empirically unexplored. This study measured the effect of temporal violations of the expected length of a handshake (less than three seconds according to previous studies) administered unobtrusively in a naturalistic experiment. We compared volunteer participants’ (N = 34; 25 females; 9 males; Mage = 23.76 years, SD = 6.85) nonverbal behavior before and after (a) a prolonged handshake (>3 seconds), (b) a normal length handshake (average length <3 seconds), and (c) a control encounter with no handshake. Frame-by-frame behavioral analyses revealed that, following a prolonged handshake (vs. a normal length or no handshake), participants showed less interactional enjoyment, as indicated by less laughing. They also showed evidence of anxiety and behavioral freezing, indicated by increased hands-on-hands movements, and they showed fewer hands-on-body movements. Normal length handshakes resulted in less subsequent smiling than did prolonged handshakes, but normal length handshakes were also followed by fewer hands-on-face movements than prolonged handshakes. No behavior changes were associated with the no-handshake control condition. We found no differences in participants’ level of empathy or state/trait anxiety related to these conditions. In summary, participants reacted behaviorally to temporal manipulations of handshakes, with relevant implications for interactions in interviews, business, educational, and social settings and for assisting patients with social skills difficulties.
Keywords: behavior, handshake, nonverbal communication, behavioral analysis, phenomenology
California income tax 2012 increase of up to 3 pct points for high-income households: Outward migration and behavioral responses by stayers together eroded 45.2% of the windfall tax revenues from the reform
Behavioral Responses to State Income Taxation of High Earners: Evidence from California. Joshua Rauh, Ryan J. Shyu. NBER Working Paper No. 26349, October 2019. https://www.nber.org/papers/w26349
Abstract: Drawing on the universe of California income tax filings and the variation imposed by a 2012 tax increase of up to 3 percentage points for high-income households, we present new findings about the effects of personal income taxation on household location choice and pre-tax income. First, over and above baseline rates of taxpayer departure from California, an additional 0.8% of the California residential tax filing base whose 2012 income would have been in the new top tax bracket moved out from full-year residency of California in 2013, mostly to states with zero income tax. Second, to identify the impact of the California tax policy shift on the pre-tax earnings of high-income California residents, we use as a control group high-earning out-of-state taxpayers who persistently file as California non-residents. Using a differences-in-differences strategy paired with propensity score matching, we estimate an intensive margin elasticity of 2013 income with respect to the marginal net-of-tax rate of 2.5 to 3.3. Among top-bracket California taxpayers, outward migration and behavioral responses by stayers together eroded 45.2% of the windfall tax revenues from the reform.
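To unpack the reported elasticity of 2.5 to 3.3 with respect to the marginal net-of-tax rate, the sketch below applies it to an illustrative 3-percentage-point increase in the marginal rate; the baseline rate is hypothetical, and only the elasticity range comes from the paper.

```python
# Applying an elasticity of income w.r.t. the net-of-tax rate.
# The baseline marginal rate is hypothetical; only the elasticity range
# (2.5-3.3) comes from the paper, and linearity is assumed.
def implied_income_change(elasticity, mtr_before, mtr_after):
    """Percent change in reported income for a marginal-rate change."""
    net_before, net_after = 1 - mtr_before, 1 - mtr_after
    return elasticity * (net_after - net_before) / net_before * 100

mtr_before, mtr_after = 0.45, 0.48   # hypothetical combined marginal rates (+3pp)
for e in (2.5, 3.3):
    print(f"elasticity {e}: implied income change "
          f"{implied_income_change(e, mtr_before, mtr_after):.1f}%")
```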
Biological systems are fundamentally computational in that they process information in an apparently purposeful fashion rather than just transferring bits of it in a purely syntactical manner
Reflexivity, coding and quantum biology. Peter R Wills. Biosystems, Volume 185, November 2019, 104027. https://doi.org/10.1016/j.biosystems.2019.104027
Abstract: Biological systems are fundamentally computational in that they process information in an apparently purposeful fashion rather than just transferring bits of it in a purely syntactical manner. Biological information, such as genetic information stored in DNA sequences, has semantic content. It carries meaning that is defined by the molecular context of its cellular environment. Information processing in biological systems displays an inherent reflexivity, a tendency for the computational information-processing to be “about” the behaviour of the molecules that participate in the computational process. This is most evident in the operation of the genetic code, where the specificity of the reactions catalysed by the aminoacyl-tRNA synthetase (aaRS) enzymes is required to be self-sustaining. A cell’s suite of aaRS enzymes completes a reflexively autocatalytic set of molecular components capable of making themselves through the operation of the code. This set requires the existence of a body of reflexive information to be stored in an organism’s genome. The genetic code is a reflexively self-organised mapping of the chemical properties of amino acid sidechains onto codon “tokens”. It is a highly evolved symbolic system of chemical self-description. Although molecular biological coding is generally portrayed in terms of classical bit-transfer events, various biochemical events explicitly require quantum coherence for their occurrence. Whether the implicit transfer of quantum information, qbits, is indicative of wide-ranging quantum computation in living systems is currently the subject of extensive investigation and speculation in the field of Quantum Biology.
Tuesday, October 8, 2019
Human players/signallers act as coding intermediaries who use lee-way alongside “a small set of arbitrary rules selected from a potentially unlimited number" to "ensure a specific correspondence between two independent worlds"
Wide coding: Tetris, Morse and, perhaps, language. S J Cowley. Biosystems, Volume 185, November 2019, 104025. https://doi.org/10.1016/j.biosystems.2019.104025
Abstract
Code biology uses protein synthesis to pursue how living systems fabricate themselves. Weight falls on intermediary systems or adaptors that enable translated DNA to function within a cellular apparatus. Specifically, code intermediaries bridge between independent worlds (e.g. those of RNAs and proteins) to grant functional lee-way to the resulting products. Using this Organic Code (OC) model, the paper draws parallels with how people use artificial codes. As illustrated by Tetris and Morse, human players/signallers manage code functionality by using bodies as (or like) adaptors. They act as coding intermediaries who use lee-way alongside “a small set of arbitrary rules selected from a potentially unlimited number in order to ensure a specific correspondence between two independent worlds” (Barbieri, 2015). As with deep learning, networked bodily systems mesh inputs from a coded past with current inputs.
Received models reduce ‘use’ of codes to a run-time or program like process. They overlook how molecular memory is extended by living apparatuses that link codes with functioning adaptors. In applying the OC model to humans, the paper connects Turing’s (1937) view of thinking to Wilson’s (2004) appeal to wide cognition. The approach opens up a new view of Kirsh and Maglio’s (1994) seminal studies on Tetris. As players use an interface that actualizes a code or program, their goal-directed (i.e. ‘pragmatic’) actions co-occur with adaptor-like ‘filling in’ (i.e. ‘epistemic’ moves). In terms of the OC model, flexible functions derive from, not actions, but epistemic dynamics that arise in the human-interface-computer system. Second, I pursue how a Morse radio operator uses dibs and dabs that enable the workings of an artificial code. While using knowledge (‘the rules’) to resemiotize by tapping on a transmission key, bodily dynamics are controlled by adaptor-like resources. Finally, turning to language, I sketch how the model applies to writing and reading. Like Morse operators, writers resemiotize a code-like domain of alphabets, spelling-systems etc. by acting as (or like) bodily adaptors. Further, in attending to a text-interface (symbolizations), a reader relies on filling-in that is (or feels) epistemic. Given that humans enact or mimic adaptor functions, it is likely that the OC model also applies to multi-modal language.