Gender Differences in Willingness to Compete: The Role of Culture and Institutions. Alison Booth, Elliott Fan, Xin Meng, Dandan Zhang. The Economic Journal, February 2 2018. https://doi.org/10.1111/ecoj.12583
Abstract: Our Beijing‐based laboratory experiment investigated gender differences in competitive choices across different birth‐cohorts experiencing – during their crucial developmental‐age – different institutions and social norms. To control for general time trends, we use Taipei counterpart subjects with identical original Confucian traditions. Our findings confirm that exposure to different institutions/norms during crucial developmental‐ages significantly changes individuals’ behaviour. In particular, Beijing females growing up during the communist regime are more competitively inclined than their male counterparts; their female counterparts growing up during the market regime; and Taipei females. For Taipei, there are no statistically significant cohort or gender differences in willingness to compete.
4 Mechanisms
Alesina and Fuchs-Schundeln (2007) discussed the endogenous role political regimes play
in forming people’s tastes for public social policies. They provide strong evidence that
there is a feedback effect from the regime on people’s attitudes. In our experiment, the
finding that women in the 1958 Beijing cohort are more willing to compete may well be
related to this feedback effect. In this section we further explore the mechanisms through
which women growing up under different regimes form different preferences.
To examine whether there is a long-lasting communist indoctrination effect, we follow Alesina
and Fuchs-Schundeln (2007) and check what other evidence in our data indicates that
attitudes towards government intervention in social and economic affairs differ
across cohorts and regions. In our exit-survey, we asked subjects whether they believe that
government should play a role in reducing income inequality in general and whether they support
the view that the less intervention from the government in the economy, the better. The
answers are given on a scale of 1 to 5, with 1 being strongly disagree and 5 being strongly
agree. Using these responses as the dependent variables, we estimate by OLS a modified
version of Equation 2 that excludes the game controls and adds log individual income to
the vector Xij. The selected results are reported in the first two columns of Table 7, which
has three panels: the top panel reports the selected estimated coefficients from the
regressions of the modified Equation 2, the middle panel reports the implied differences across
cohorts and regions based on the top-panel coefficients, and the bottom panel reports
differences between Beijing women/men and Taipei women/men based on results from
regressions that look only at aggregated differences and disregard cohort variations.
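As a rough illustration of the kind of specification described above (not the authors' code; the file and column names are hypothetical), an OLS estimation with an implied group difference computed from the coefficients might look like this:

```python
# Rough illustration only, not the authors' code: an OLS regression of a 1-5 attitude
# score on cohort/gender/region indicators plus log income (the modified Equation 2
# described above, without the game controls), and an "implied difference" obtained
# as a linear combination of coefficients. All column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exit_survey.csv")   # hypothetical: one row per subject

model = smf.ols(
    "attitude ~ C(cohort) * female * beijing + log_income",
    data=df,
).fit(cov_type="HC1")                 # heteroskedasticity-robust standard errors
print(model.summary())

# Implied Beijing-Taipei difference for women in the reference cohort,
# analogous to the middle-panel entries of Table 7.
print(model.t_test("beijing + female:beijing = 0"))
```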
The results for the state-intervention preference variables suggest that, in general, the
Mainland Chinese female cohorts, relative to the Taiwanese female cohorts, are much more
likely to support the view that the government should try to reduce income inequality and
much less likely to think that the less government intervention in the economy the better.
On the 1-to-5 scale, the three female cohorts score 1.4 to 1.9 points higher in agreement that
government should reduce inequality and 1.6 to 1.8 points lower in agreement that less
government intervention in the economy is better. These are very large differences,
between 28% and 38% of the full scale. These results are largely consistent across the
three female cohorts.
If we examine the bottom panel, which treats male and female subjects in Beijing
and Taipei as single groups, we see that this conclusion applies to males as well. Thus,
after 40 years of market-oriented economic reform, mainland Chinese society is still largely geared towards equality and government intervention. This finding is in line with that of Alesina
and Fuchs-Schundeln (2007) – that communist propaganda has a long-lasting effect. In
addition, our results indicate that individuals who grew up subject to a heavy dose of
indoctrination, and individuals who largely grew up under the new ideology, all seem to prefer
more state intervention in social and economic affairs. This latter finding differs slightly
from that of Alesina and Fuchs-Schundeln (2007), who found that in East Germany
those born after 1975 have a much weaker preference for government intervention relative
to their older counterparts. This difference between their results and ours may relate to
two broad sets of factors. First, after German reunification Communism was discredited
and there was a large migration from East to West Germany. Second, in mainland China
the Communist Party is still the ruling party, even though the economy largely operates
under market economic rules. One other interesting issue arises from our findings.
According to our results, only the behaviour of the cohort which grew
up during the communist regime is in line with the communist indoctrination, while the
behaviour has changed for the cohorts growing up in the economic reform era. Yet the difference
in attitudes across cohorts seems to be limited. This may be because propaganda on gender
issues, which affects gender differences in willingness to compete as we discuss below,13 was
not continued, whereas the government's role in the economy persisted in the reform era.
Is the difference in gender willingness to compete and in the preference for government
intervention for social and economic issues really a reflection of indoctrination? There is
a small literature that discusses intergenerational transmission of cultural norms (see for
example, Bisin and Verdier, 2000; Tabellini, 2010; and Nunn and Wantchekon, 2011).14 In
the post-experiment survey, we follow their ideas and ask respondents to pick, from a list
of qualities or attributes, those that their parents and school encouraged when they were
young. We report here the results for three attributes that are related
to competitive inclination and preferences regarding inequality in general: 1. encouragement to
believe that men and women are equal; 2. encouragement to be competitive; and 3. encouragement
to be unselfish. We generate two dummy variables for each of these qualities: whether the
mother or the school encouraged individuals to have them (yes = 1, 0 otherwise). The
modified version of Equation 2 is estimated using a linear probability model (LPM) for
ease of interpretation of the coefficients. The results for mothers’ encouragement and
schools’ encouragement are reported in columns [3] to [8] of Table 7. The three panels
separately report the regression coefficients; the implied differences across gender, cohorts
and region; and the aggregated differences between Beijing and Taipei for females and
males.
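A minimal sketch of this step (illustrative only; the survey file and column names are assumptions, not the authors') is the following:

```python
# Illustrative sketch, not the authors' code: build the yes/no encouragement dummies
# and estimate a linear probability model (OLS on a binary outcome) so coefficients
# read directly as differences in probability. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("post_experiment_survey.csv")   # hypothetical input

# 1 if the quality was picked from the list, 0 otherwise
df["mother_equality"] = (df["mother_encouraged_equality"] == "yes").astype(int)
df["school_equality"] = (df["school_encouraged_equality"] == "yes").astype(int)

lpm = smf.ols(
    "mother_equality ~ C(cohort) * female * beijing + log_income",
    data=df,
).fit(cov_type="HC1")   # robust SEs, since LPM errors are heteroskedastic by construction
print(lpm.params)
```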
The results on gender equality (both from mothers’ and schools’ teaching) show that
there is no consistent gender difference within each Beijing cohort. Relative to the 1958
Beijing females, those who were born later are consistently less likely to have either
mother or school encourage them to believe in gender equality. Although the differences
are large (9% to 13%), none of them is precisely estimated. Relative
to Taipei women, however, Beijing women in all three cohorts are more likely to have had their
parents or school encourage them to hold the view of gender equality. The difference
in probability ranges from 70% to 74% for mothers’ encouragement and 59% to 82% for
schools’ encouragement. This is not only the case for females: the bottom panel shows
that Beijing males were likewise more likely than their Taipei counterparts to have their
mothers and schools encourage them to adopt the view of gender equality.
Regarding being competitive, mainland parents do not seem to have prepared their
children to compete any more than their Taiwanese counterparts did. Schools in Beijing did,
however, and more so for the later cohorts than for the 1958 cohort. Mainland
parents and schools are more likely to teach their children to be unselfish than their
Taiwanese counterparts—further evidence of communist indoctrination. This is true
for all female and male cohorts. It is also true that the probability is higher for the
1958 Beijing cohort than for the later Beijing cohorts, but the differences are not
statistically significant.
How might propaganda still affect individuals’ behaviour and preferences 40 years after the
regime changed? Psychologists have long discussed how social norms about
sex roles may shape children’s personality characteristics and behavioural competencies
so as to prepare them to fulfil societal expectations and perform those roles
(see, for example, Horner, 1972; Fitzgerald and Betz, 1983).
Our post-experiment survey implemented the “Big Five Inventory” (BFI), which consists of 44 questions designed to elicit individuals’ personality traits. The psychological
literature has identified an overlap between extroversion and competitiveness (Hogan,
1986; Digman, 1990; Chen et al., 2011). Recent empirical studies on online game players
also find that individuals who score higher on openness, extroversion, and
conscientiousness are more likely to be players (Teng, 2008). We examine gender-cohort-region differences in the ‘Big Five’ personality traits by estimating Equation 2 without
controlling for Wij and Xij.
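As an illustration of how such trait scores are typically constructed and compared (a sketch under an assumed data layout; the item lists shown are placeholders, not the actual BFI-44 scoring key):

```python
# Sketch only: average 1-5 Likert responses to the 44 BFI items into trait scores
# (reverse-scoring where the key requires it), then regress a trait score on the
# gender/cohort/region indicators alone, i.e. Equation 2 without the W and X controls.
# The item lists below are placeholders; substitute the actual BFI-44 scoring key.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bfi_survey.csv")   # hypothetical: columns q1..q44, cohort, female, beijing

key = {
    "extroversion": {"items": [1, 11, 36], "reversed": [6, 21, 31]},   # placeholder
    "openness":     {"items": [5, 15, 40], "reversed": [35, 41]},      # placeholder
}

for trait, k in key.items():
    straight = df[[f"q{i}" for i in k["items"]]]
    flipped = 6 - df[[f"q{i}" for i in k["reversed"]]]   # reverse-code 1-5 items
    df[trait] = pd.concat([straight, flipped], axis=1).mean(axis=1)

ext = smf.ols("extroversion ~ C(cohort) * female * beijing", data=df).fit(cov_type="HC1")
print(ext.summary())
```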
We find that females in all Beijing cohorts are more extroverted than their male counterparts, though only for the 1977 cohort is the difference statistically significant. More importantly, Beijing women in all three cohorts are statistically significantly more extroverted than their Taipei counterparts. The aggregated estimation (the bottom panel of the table) also confirms that Beijing females are more extroverted than Taipei males and
females, and they are more extroverted than Beijing males. However, it does not seem to
be the case that there are significant cohort differences in extroversion. If anything, the
1958 cohort seems to be slightly less extroverted than their younger counterparts, but the
differences are not statistically significant.
With regard to openness, for the 1958 and 1977 birth cohorts we observed statistically
significant gender differences within the Beijing sample. Women are more open than
men. Beijing women are also more open than Taipei women in every cohort, and the
size of the difference is the largest for the 1958 cohort. Further, at the aggregated level,
Beijing women have higher openness scores than their Beijing male, Taipei male, and
Taipei female counterparts. Once again, we fail to detect across-cohort variations among
Beijing females that may help to shed light on why the 1958 Beijing women are more willing to compete.
For agreeableness, neuroticism, and conscientiousness we find similar but less statistically significant patterns. Beijing women seem to be more agreeable, less neurotic, and more conscientious than Beijing males, Taipei females and Taipei males.
These results provide some weak evidence that indoctrination at a young age could affect an individual’s personality and subsequently their behaviour. More research is needed on this point.
An intentionally slower response to an interpersonal request (time-taking) leads to inferences of higher status, but lower judgments of competence and warmth
Ziano, Ignazio, and Vanessa Patrick. 2019. ““Time-takers” Are Important Jerks: Mixed Reputational Consequences of Speed of Response to Requests” PsyArXiv. November 28. doi:10.31234/osf.io/2rjy4
Abstract: How does speed of response influence an individual’s standing and reputation? Five experiments (n = 1,544) demonstrate that “time-takers” are perceived to be high in status but low in competence and warmth, making them important jerks. Despite being a socially dispreferred norm violation, an intentionally slower response to an interpersonal request (time-taking) leads to inferences of higher status, but lower judgments of competence and warmth. We show that observers infer status from response time only when other status information is not accessible. The effects of response speed on status, competence, and warmth are mediated by perceptions of heightened self-orientation: the slow response of time-takers signals higher self-orientation – a feature that observers associate with higher status, but also with lower competence and warmth. We discuss theoretical implications for speed of response, status signalling and inferences of competence and warmth, and practical implications for interpersonal communications in organizations and employee well-being.
General Discussion
Should you respond immediately to a text or email, or should you take your own sweet time? The focus of this paper is to identify the social consequences of time-taking, the intentionally slow response to an interpersonal request. In a world that is becoming increasingly fast paced, people and organizations are increasingly under pressure to respond as fast as possible. Prompt responses have benefits: a fast responder is seen as both warm and competent. However, a key insight uncovered by the current research is that if a person or an organization swims against the current and responds slowly, there is a silver lining: higher status perceptions, though at the cost of competence and warmth perceptions.
Using within-participants and between-participants experiments, we explored the role of response time in a variety of situations, involving parties (people, companies, and institutions) of different kinds. In study 1A, we show that people who are intentionally slower in responding to requests are liked less, yet considered of higher status but of lower competence and warmth. In study 1B, we replicate the results of study 1A using institutions as the signaling parties. In study 2 we show that decision time is only used as a status cue if status information is not accessible. In study 3, we explore the mediating role of heightened self-orientation (lowered other-orientation) in generating impressions of status, competence, and warmth, in both personal and institutional communications.
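The excerpt does not reproduce the analysis details, but the mediation logic described for study 3 (slower response → perceived self-orientation → status/competence/warmth judgments) can be sketched roughly as follows; the variable names and the simple product-of-coefficients approach are assumptions for illustration, not the authors' procedure.

```python
# Illustrative sketch only (not the authors' analysis): response speed (slow = 0/1)
# predicts perceived self-orientation, which in turn predicts perceived status;
# the indirect effect a*b is bootstrapped. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")   # hypothetical per-participant ratings

def indirect_effect(data):
    a = smf.ols("self_orientation ~ slow", data=data).fit().params["slow"]
    b = smf.ols("status ~ slow + self_orientation", data=data).fit().params["self_orientation"]
    return a * b

point = indirect_effect(df)
boot = [indirect_effect(df.sample(len(df), replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect on status: {point:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```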
Contributions, Implications and Directions for Future Research.
We elaborate on the three theoretical contributions previously identified and discuss the implications for theory and practice.
Status signaling. While previous research has looked at the effect of many cues on perception of status (busyness, Bellezza et al. 2017; expensive clothes, Nelissen and Meijers 2011; product size, Dubois et al., 2012), competence (alcohol consumption, Rick and Schweitzer, 2013; non-conforming behavior, Bellezza et al. 2014; 2017) and warmth (luxury consumption, Garcia et al. 2018; moral behavior, Brambilla et al. 2012), response time – despite its prevalence in everyday life and its emergence in the signaling literature (Van de Calseyde et al., 2014) – has been neglected. We fill this gap and connect these two streams of literature by showing that time-taking influences status, competence, and warmth perceptions.
Speed of response. Previous literature demonstrated that observers draw rich inferences from speed of response, for instance in the domains of cooperation (Evans, Dillon, & Rand, 2015; Van de Calseyde et al., 2014) and morality (Critcher et al., 2013). We extend this literature to reputational consequences in three important domains of other perception: status, competence, and warmth. Further, we are the first – to our knowledge - to consider the mixed reputational effects of time taking: the positive effect of slower response times (slower = high status) but the negative perceptions of competence and warmth. This nuance provides increased understanding about how observers interpret speed of response in interpersonal contexts where people may observe response times.
Competence, warmth, and status. By exploring the reactions to slower responses, we show that slow response time is a dominance-based status-gaining strategy (Cheng et al., 2013), best summarized by the quote “It is better to be feared than loved, if you cannot be both” (Machiavelli, 1532). In most of the status literature, status and valued personal features such as competence and warmth (Fiske et al., 2007) go hand-in-hand (Anderson & Kilduff, 2009). Manipulations that cause an increase in perceptions of status often also cause an increase in perceptions of competence and warmth. Strikingly, we find that competence and status can move independently: competence – along with warmth – is lower for slower respondents while status is higher. Our results are in line with the perspective suggested by Cheng and colleagues (2013): dominance does not involve signaling desirable qualities in order to gain status, but simply the willingness and disposition to use coercion.
This paper has substantial practical implications for communications amongst individuals and for companies. When people interact with others, they may draw inferences from response time even where the responding party did not intend to signal anything. Therefore, an unexplained delay may be interpreted as intentional and trigger positive (status) but also negative (warmth, competence) inferences. Organizations and workers may be informed about how to use response time intentionally and strategically, in order to enhance what they wish to signal. A key issue is the intentionality of the delayed response, which signals greater self-orientation and, consequently, a failure to value other people’s time. Future research would need to identify how long the high status can be maintained if competence and warmth perceptions are constantly being eroded. Relatedly, prior research has shown that power can have a negative impact on team performance (Tost, Gino, & Larrick, 2013). One might argue that the dominance-based route to status that we have documented as a result of time-taking may similarly negatively influence team performance in the long run.
Second, we show that people interpret response time and use it as a status-inference tool only when other status cues are not accessible in the environment. This implies that observers are more likely to infer status from response time in situations such as interactions between colleagues from different departments or employees from different companies, when the status of the responding party is not apparent. Respondents (such as managers and employees) may therefore modulate other people’s impressions of them by giving (or neglecting to give) the other party status information about themselves (e.g., managerial function).
Further, employees report that they are under huge pressures to reply faster and faster to an increasing number of emails. This situation is exemplified by an academic worker on Twitter, who reports that they “can’t physically respond to all legitimate email sent to me without spending 3 hours a day on email (which would both cause me to go insane and also get fired)”. Certainly, workers are worried about how their delayed emails are going to make them look to others. We highlight a potential benefit of slow response speed: higher status inferences. Overwhelmed workers may therefore take solace in the fact that their slow speed of response does not have exclusively negative consequences.
However, by the same token, future research might also investigate the silver lining to greater self-orientation and lower other-orientation. Indeed, in discussing the laws of power, the author Robert Greene points to self-orientation as essential to gaining and maintaining a position of power and status. For instance, he urges, “Do not accept the roles that society foists on you… Be the master of your own image rather than letting others define it for you. Incorporate dramatic devices into your public gestures and actions – your power will be enhanced and your character will seem larger than life” (p. 191). In this research, the “dramatic device” we identify that enhances status inferences is slower response times. Future research might identify other similar devices that reflect self-orientation and thereby confer status on individuals and organizations.
In our studies, we have operationalized time on a well-defined, unidirectional line that goes from the past to the future. However, the difference between present and future may be blurrier depending on situational (e.g., whether the language spoken possesses a future tense or not; Pérez and Tavits 2017) and individual factors (e.g., self-continuity through time; Hershfield et al. 2009). Future research may therefore investigate ways to moderate the signaling effect of response time by making the time difference between fast and slow seem less definite.
Inbreeding avoidance in primates
A Cul-de-sac of Sexual Evolution. Takeshi Furuichi. Chapter 2 of Bonobo and Chimpanzee: The Lessons of Social Coexistence; pp 37-62. November 23 2019. https://link.springer.com/chapter/10.1007/978-981-13-8059-4_2
Abstract: Incest. It is a word that has an extremely unpleasant echo. Incest taboo (or inbreeding avoidance) between close kin in relation to the societies of numerous ethnic cultures has occupied a significant place in anthropological study. Many people might think that incest taboo is a uniquely human quality, but inbreeding avoidance, in general, is found in almost all animal species. Interestingly, the word “inbreeding avoidance” resonates healthier to our ears than “incest avoidance.”
---
2.1.5 Male-philopatric Society in which Females Leave
Not only the Japanese monkey and other primate species [53] but also many group-living mammals form matrilineal societies where males transfer between groups. Elephants, lions, and Japanese sika deer have similar systems [54]. Given this, let us consider why females do not transfer.
Animals forage for grasses, fruits and leaves of trees, and other foods in the habitats where they are born. Carnivores must hunt prey animals. Prey animals survive by protecting themselves from predators. To do so, they need to have a thorough knowledge of when and where to go to find which kinds of food, understand where prey animals are located and are easy to hunt, learn where predators roam, and where to escape to defend themselves. There are close relatives in their group who have supported them since they were born, but when they move to another group, they have to be accepted by members of that group and build relationships all over again. While moving, they may have to, temporarily, spend a dangerous period alone. Given all of that, it is better to stay in their natal groups throughout their lives.
However, males and females differ in terms of survival strategies. Females can live a normal life and bear a fixed number of offspring during their lifetimes as long as they respond to males that want to mate with them. Therefore, what is important for females is not the number of offspring they produce, but rather how they obtain a stable food supply and effectively protect themselves from predators so as to rear their offspring with certainty. This is why it is more beneficial for them to stay in their natal group than to transfer out. In contrast, the number of offspring males can sire during their lifetime varies from zero to almost infinity depending on their efforts and degree of success. For them, it is more important to obtain as many mating opportunities as possible and impregnate as many females as they can. To do so, it might be better to find another group that offers possibilities of mating with more females rather than staying, in a compromised low-rank position, in the group in which they were born. Moreover, females are sexually attracted to novel males coming from outside, rather than closely related males or males that are familiar from their childhoods. Consequently, males that actively move between groups, rather than those that attempt to stay in their natal groups, will have a greater chance of obtaining breeding opportunities.
To live in a group and avoid inbreeding, either males or females must transfer between groups. If both sexes transferred, brothers and sisters might unknowingly encounter each other in the groups they transfer into. Therefore, if one sex transfers, individuals of the opposite sex do not normally need to transfer. Given that males stand to benefit far more from transferring, it seems natural for many species to live in female-philopatric societies where males transfer.
---
However, immigrant females are desperate. They somehow have to build relationships with males in the group, and the only thing they can depend on is sex.
One day I witnessed an interesting scene. Until 1996, to carefully observe the behavior of the E1 group, we sometimes brought their favorite sugarcanes, cut them into about 30 cm in length, and scattered them on the forest floor beneath the trees where they spent the night. The scene happened when I was observing the bonobos that were coming down to take the sugarcanes.
In the center of the feeding area, the male bonobo named Ten that had quite recently climbed up to the top rank position was eating a sugarcane. He had some in his hands and was eating slowly. A young female named Nao that had recently immigrated into the E1 group came closer to him. As Nao approached Ten, so closely as if their faces would stick together, she peered at Ten’s mouth and she reached for his mouth with her hand. This is a behavior displayed when a bonobo begs for food. However, Ten pretended not to notice Nao and continued to eat slowly.
Then Nao changed her tactic. She changed the direction of her body to protrude her buttocks to Ten and began inviting him to copulate. Even though males are usually not interested in young females, there was no reason for Ten to refuse copulation if the buttocks are pressed to his nose. Ten stood up with a few sugarcanes in his hand and copulated with Nao as he clasped her from behind.
Then the power relationship between them completely changed. Nao reached out for the sugarcanes that Ten was holding in his hands and took the longest piece from him as if to say that reciprocating with the sugarcanes was an appropriate thing to do. She finally got the piece and started eating it alongside him.
Readers may think that this is an episode in which food was exchanged for sex, like prostitution. However, if we look at this situation more closely, there was no need for Nao to get sugarcane from Ten, because there were a lot of sugarcane stalks still lying on the forest floor. If she really wanted a sugarcane, she could have just picked any one up.
Nao took the trouble to approach Ten and beg for a sugarcane, but when he ignored her, she made a series of negotiations to get a sugarcane to eat together simply because she wanted to begin a relationship with him. It is analogous to being in the university library and asking to borrow stationery that you do not really need from a person of the opposite sex whom you are interested in getting a chance to talk to.
Beliefs need to be pragmatic, grounded in reality, but there are other incentives influencing our beliefs and behavior, like being a good group member, the need to appear as beneficent & effective
On the Function of Beliefs in Strategic Social Interactions. Arnaud Wolff. Bureau d’Économie
Théorique et Appliquée, Document de Travail n° 2019 – 41, Oct 2019. http://www.beta-umr7522.fr/productions/publications/2019/2019-41.pdf
Abstract: We review the way beliefs have traditionally been formalized in game-theoretic settings, and argue that this formalization has its limits, especially in the realm of strategic social interactions. Normative game theory, with its emphasis on equilibrium concepts and its concern about how rational and intelligent players should play, has left little room for a formal characterization of the role of players’ beliefs. Given that beliefs determine play, we argue that a case can be made for a deeper understanding of their nature. We draw on the literature in evolutionary psychology and biology to decipher underlying, not readily apparent, incentives that might influence belief adoption. In fact, we take the view that beliefs are themselves subject to incentives, and that agents’ beliefs may therefore take on a predictable form if we are able to decipher the underlying incentives that they face. This predictable form might then be used to justify specific modelling assumptions, and accordingly improve the models’ predictive power.
Keywords: Beliefs, Game Theory, Social Incentives, Evolution, Coordination
JEL: B40, C70
Discussion
The key point we want to underline is that in strategic interactions, we might be able to predict the direction of people’s beliefs based on the underlying incentives that they face. Nonetheless, it is important to remember that all these incentives probably act in concert. For instance, one might want to be seen as acting consistently with respect to a particular (core) belief adopted by one’s own group. That is, the core belief, around which other related beliefs are tailored, is endogenously determined by one’s group belonging. Also, one might want to stay consistent with respect to a cherished belief, in order to appear as effective to others. A politician, for instance, might want to stay true to her principles in order to appear as confident, or in charge, thereby persuading others of her convictions.
The primary task of the theorist should be to decipher which incentives are the most relevant in the particular context studied (and why?). In fact, very little is known about the interplay of the different incentives individuals face, and about which ones are more binding (and when?). Pecuniary incentives might well be as important as social incentives (think about oil company managers that are convinced that their activities are not harmful for the environment). When, for instance, do pecuniary incentives take the ascendant over social incentives? Or, conversely, in which cases do social incentives (for instance, the incentive to appear as a good coalition member), trump financial incentives? It will likely be very important to determine how individuals deal with the numerous (hidden, underlying) trade-offs that they face, if we want to reap more interesting insights about human behavior. But to be successful, this endeavor requires that we take more seriously the evolutionary approach to human motivations.
Conclusion
Aumann (1976) has shown that if individuals start with the same prior beliefs, and their posteriors about the occurrence of some event are common knowledge, then these posteriors must be equal (i.e., their beliefs must be the same). However, one often observes persistent disagreement. Is it because the respective posteriors only rarely become fully common knowledge, or is it because people don’t start off with the same priors? Geanakoplos and Polemarchakis (1982) have shown that in a finite period of time, honest, truth-seeking individuals will reach an agreement by communicating their posteriors back and forth, even if these posteriors were not common knowledge to start with. It must therefore be the priors. Or is it because people are not honest truth-seekers? We believe that the answer is a mix of both, and that the key to understanding why people are not honest truth-seekers, and therefore do not converge towards a common posterior, is to decipher the underlying incentives they face to adopt their respective beliefs.
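The back-and-forth convergence that Geanakoplos and Polemarchakis describe can be made concrete with a toy simulation: two agents share a uniform prior over a small state space but hold different information partitions; each in turn announces a posterior for an event, every announcement becomes common knowledge, and both refine the set of states they consider possible until the posteriors coincide. The sketch below (with made-up partitions and event) only illustrates that logic; it is not from the paper.

```python
# Toy illustration of Aumann (1976) / Geanakoplos & Polemarchakis (1982): honest agents
# with a common uniform prior but different partitions announce posteriors for an event
# in turn; each announcement lets the listener rule out states, and the posteriors
# converge to a common value. The state space, partitions, and event are made up.
from fractions import Fraction

states = range(1, 10)                               # uniform prior over 9 states
event = {3, 4}                                      # the event both agents reason about
partition1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]      # agent 1's information partition
partition2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]      # agent 2's information partition
true_state = 1

def cell(partition, w):
    return next(c for c in partition if w in c)

def posterior(info):                                # P(event | info) under a uniform prior
    return Fraction(len(event & info), len(info))

# info[i][w] = the states agent i would consider possible if the true state were w
info = [{w: cell(p, w) for w in states} for p in (partition1, partition2)]

speaker = 0
for round_ in range(10):
    p1, p2 = posterior(info[0][true_state]), posterior(info[1][true_state])
    print(f"round {round_}: agent 1 posterior {p1}, agent 2 posterior {p2}")
    if p1 == p2:
        print("posteriors agree")
        break
    # The announcement reveals, for every state, what the speaker would have said there;
    # the listener keeps only the states consistent with what was actually announced.
    announced = {w: posterior(info[speaker][w]) for w in states}
    listener = 1 - speaker
    info[listener] = {
        w: info[listener][w] & {v for v in states if announced[v] == announced[w]}
        for w in states
    }
    speaker = listener
```

With the made-up numbers above, agent 1 starts at a posterior of 1/3 and agent 2 at 1/2; after a few announcements both settle at 1/3, as the agreement result requires.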
It has traditionally been assumed in economics and game theory that the agents in our models want to be right. They strive to get closer to truth, and they will sometimes undertake costly actions to reach their objective. This assumption is evidently uncontroversial when we consider agents taking decisions whose outcomes depend on the true state of nature. A trader investing in the stock market will want the best available information, while environmentally-friendly consumers will want to know everything about the products they buy. The beliefs need to be pragmatic, grounded in reality. But social life is complex, and there are other (maybe even more important) incentives influencing our beliefs and behavior. We have argued, based on the literature in evolutionary psychology and biology, that some additional, not readily apparent motivations presumably play a large role. These motivations comprise the need to appear as a good coalition (group) member, the need to appear as consistent, and the need to appear as beneficent and effective to others. These motivations are distinct (and without any doubt not exhaustive), but they probably act in concert. In all these cases, pragmatic beliefs are not very useful, because agents have no incentives to be right. Their decisions in these areas do not directly depend on the true state of nature. Instead, their incentives are social, and their beliefs will bear this sign.
Social beliefs have not been much studied, and they are not well understood. They do not respond to evidence as pragmatic beliefs do, and the reason is that they shouldn’t. We have conjectured that much of the apparent disagreement on largely factual matters is due to the above-mentioned motivations (together with pecuniary incentives), and that without a deeper appreciation of the underlying (hidden) incentives that agents face, we will not be able to improve our understanding of how to tackle pressing issues such as climate denial, conspiracy theories, or anti-science movements. We believe that the tools of game theory can be successfully applied in those areas, helping us decipher which incentives are the most stringent and binding in different contexts. Nonetheless, this new endeavor requires a deeper comprehension of human motivations and of the particular (often hidden) incentives that we face; otherwise, we risk being stuck in the study of proximate mechanisms without grasping what the ultimate motives are.
Positive “good” information is more frequent than negative “bad” information despite our strong focus on bad behaviors/events; bad events/traits/etc. have a greater diversity of descriptions
The evaluative information ecology: On the frequency and diversity of “good” and “bad”. Christian Unkelbach, Alex Koch & Hans Alves. European Review of Social Psychology, Volume 30, 2019 - Issue 1, Pages 216-270. Nov 24 2019. https://doi.org/10.1080/10463283.2019.1688474
Abstract: We propose the Evaluative Information Ecology (EvIE) model as a model of the social environment. It makes two assumptions: Positive “good” information is more frequent compared to negative “bad” information and positive information is more similar and less diverse compared to negative information. We review support for these two properties based on psycho-lexical studies (e.g., negative trait words are used less frequently but they are more diverse), studies on affective reactions (e.g., people experience positive emotions more frequently but negative emotions are more diverse), and studies using direct similarity assessments (i.e., people rate positive information as more similar/less diverse compared to negative information). Next, we suggest explanations for the two properties building on potential adaptive advantages, reinforcement learning, hedonistic sampling processes, similarity from co-occurrence, and similarity from restricted ranges. Finally, we provide examples of how the EvIE model refines well-established effects (e.g., intergroup biases; preferences for groups without motivation or intent) and how it leads to the discovery of novel phenomena (e.g., the common good phenomenon; people share positive traits but negative traits make them distinct). We close by discussing the benefits relative to the drawbacks of ecological approaches in social psychology and how an ecological and cognitive level of analysis may complement each other.
Keywords: Evaluation, ecology, halo effects, person perception, intergroup biases
Excerpts. Check the full paper for figures, tables, references, etc.
Implications
So far, we provided evidence and explanations for the higher frequency of
positive information relative to negative information (i.e., traits, experiences,
behaviours), and the higher similarity of positive information to other positive
information. In the remainder, we aim to back up our claim that the interaction of these properties with well-established social-cognitive principles within
the organism may lead to the discovery of novel phenomena and alternative
explanations for classic social psychological findings. We will address halo
effects, the relation of similarity and liking, the relation of frequency and liking,
as well as the field of intergroup biases. In the following review of our empirical
findings, anything that is reported as a difference is significant (i.e., probability
of the test statistic under the H0 is p < .05), unless indicated otherwise; all
reported experiments had proper power considerations and reported all conditions, all data exclusions, and all variables. In addition, we predicted the
empirical findings from the assumed properties and did not derive the EvIE’s
properties from these studies; thus, the following experimental work supports
the EvIE as a general model for people’s social reality.
Halo effects: being honest makes you industrious, but lying does not make you lazy
Halo effects are among the best-established findings in psychology. Thorndike
(1920) coined the term when he observed a “constant error in psychological
ratings”: When army officers were evaluated by their superiors, theoretically
independent dimensions constantly correlated more highly than they should.
Thus, raters either used information on one dimension to rate another dimension or made inferences from a global impression about the to-be-rated target
(Cooper, 1981). Probably the most famous halo effect is from ratings of
physical attractiveness to ratings of intelligence or morality, famous under
the “What is beautiful is good” label (Dion, Berscheid, & Walster, 1972).
Based on our assumptions about the EvIE, an intriguing prediction
follows from the similarity property, namely that halo effects should be
most apparent given positive traits and rating dimensions, but less pronounced given negative traits. This is a strong prediction insofar as there is
consensus in the literature that negative information has more impact than
positive information on social evaluations (e.g., Kanouse & Hanson, 1972;
Peeters & Czapinski, 1990; Skowronski & Carlston, 1989).
To test this idea, Gräf and Unkelbach (2016) presented participants with
targets’ positive or negative traits as well as behaviours from two dimensions
of social perception (Bakan, 1966; see also Abele & Wojciszke, 2018), namely
communion (e.g., being honest) and agency (e.g., being industrious), and
asked participants to rate the targets on other traits either from the same or
the other dimension. Across three experiments, Gräf and Unkelbach investigated halo effects on 30 traits and 48 different behaviours. Participants
observed a target showing either a trait label or a behavioural description and
were asked how likely it was that the target would possess another trait
(Experiments 1 and 2) or would show another behaviour (Experiment 3).
Importantly, they varied the valence and the social perception dimension.
For example, participants saw a lying target (i.e., a negative communion trait)
and answered how likely this person was to also be lazy (i.e., a negative
agency trait), or in another trial, how likely this person was also to be egoistic
(i.e., negative communion trait). Similarly, they would see an honest target
and answer how likely this person was to also be industrious (or, in another
trial: helpful). Thus, the trials tested whether halo effects, an inference from
one behaviour/trait to another behaviour/trait, vary as a function of trait/
behaviour valence and as a function of within/between dimension inferences
on the two fundamental dimensions of social perception.
Figure 5 shows the data from these three experiments. As predicted from
the EvIE’s similarity property, positive traits and behaviours lead to substantially stronger halo effects, both within and across the dimensions of communion and agency (Gräf & Unkelbach, 2016, Exp. 1 to 3; see also Gräf & Unkelbach, 2018, for
a conceptual replication). These findings are difficult to reconcile with classic
assumptions about the unconditional higher impact of negative information
on social evaluations, but they follow from the EvIE’s similarity property. The
results may also explain apparent features in the literature, namely why there
are few published studies showing “negative” halo effects (i.e., “horn” effects),
simply because they usually do not exist (i.e., lying does not make you lazy).
The EvIE’s frequency property also suggests an intriguing point; namely,
that the observed halo effects might not be an error in ratings (Thorndike,
1920), but a true property of the ecology. Similar to our argument concerning
how higher similarity follows from a higher frequency, the higher frequency
of occurrence of positive traits and behaviours also implies that any positive
trait or behaviour is more likely to co-occur. People should, therefore, learn
that positive traits and positive behaviours appear together on a person-level.
If our assumption about the EvIE’s frequency property is correct, then the
personality profile of being both honest and industrious is factually more
likely than the profile of being dishonest and lazy. From an ecological view,
the constant error in ratings observed by Thorndike might not be entirely an
error after all, but a generalisation of observed ecological co-occurrence to
a task involving trait ratings in a psychology experiment. Investigating this
alternative source for halo effects provides a fascinating venue for future
research.
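A rough numerical sketch of this co-occurrence argument follows; the base rates are illustrative assumptions chosen here, not estimates from Gräf and Unkelbach (2016) or the EvIE studies.

# Minimal sketch of the frequency-based co-occurrence argument.
# The base rates below are illustrative assumptions, not figures from the paper.
p_pos = 0.7  # assumed probability that a person shows a given positive trait
p_neg = 0.3  # assumed probability that a person shows a given negative trait

# Under independence, the chance that two traits co-occur in the same person
# is the product of their base rates.
p_two_positive = p_pos * p_pos   # e.g., honest AND industrious
p_two_negative = p_neg * p_neg   # e.g., dishonest AND lazy

print(f"P(two positive traits co-occur) ~ {p_two_positive:.2f}")  # 0.49
print(f"P(two negative traits co-occur) ~ {p_two_negative:.2f}")  # 0.09
# With these assumed rates, positive trait pairs co-occur about five times as
# often, so generalising from "honest" to "industrious" tracks the ecology,
# while generalising from "lying" to "lazy" does not.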
Similarity and liking: your friends are all alike
The EvIE model states that positive information is more similar and less
diverse compared to negative information; as Figure 4 illustrates, there is
only one way (or fewer ways) to be good compared to the many ways
someone might be bad. One implication of this ecological property is that
liked people (i.e., someone’s friends) should be more similar to one another
compared to disliked people.
This is an interesting prediction, because, based on the hedonic sampling
principle discussed above, people should spend more time with other people
they like compared to people they do not like (Denrell, 2005). This increase
in spent time should lead to more knowledge about liked people, and thereby
to a more differentiated representation of these liked others. Smallman and
Roese (2008) explicitly stated this as follows: “to cherish a loved one is to
relish the fine nuances of his or her personality” while “the rejected and
forsaken are construed on a relatively surface level” (p. 1228). However, if we
assume that people like each other because they possess positive traits,
attributes, or qualities which makes them likeable, the EvIE’s assumed
similarity property predicts that these people should be very similar, particularly in comparison to disliked people. Their mental representation might
be highly differentiated as proposed by Smallman and Roese, but this differentiation does not make them dissimilar, just because the properties (i.e.,
traits and behaviours) that lead to liking are factually highly alike.
Alves, Koch, and Unkelbach (2016) conducted seven experiments to test
whether people see other people they like as more similar to one another
compared to people they dislike. We discuss five of these experiments in the
following. The basic paradigm was straightforward. Participants generated
names of target persons they liked and of targets they disliked. Then, they
used the spatial arrangement method described above (see Figure 1’s right
panel; Hout et al., 2013) or pairwise similarity ratings (see Figure 1’s left
panel) to arrange these targets on the screen according to the similarities of
their personalities. They also provided ratings of the time spent together with
these people and of how much they knew about them. As expected, participants reported having spent more time with liked compared to disliked
targets, and they reported knowing more about the liked compared to the
disliked targets. Yet, in line with the prediction from the EvIE, participants
consistently reported higher similarity for liked compared to disliked targets.
Figure 6 provides a summary of the similarity judgements from
Experiments 1, 3 and 5. Experiment 1 used target persons participants knew
personally with spatial arrangement to assess similarity. Experiment 3 used
target persons participants knew personally with pairwise comparisons to
assess similarity. Experiment 5 used celebrity targets with pairwise comparisons. As Figure 6 shows, participants consistently reported liked targets to be
more similar than disliked targets, despite spending more time with them. We
omit Experiments 2 and 6 here; Experiment 2 replicated Experiment 1 with
target valence manipulated between participants and Experiment 6 replicated
Experiment 5 with a larger set of celebrity targets.
Experiment 4 tested the underlying EvIE structure directly. Participants
generated as many traits as they could for each of the four liked and four disliked
targets they named. First, in line with the assumed greater knowledge for liked
targets, participants generated on average 6.9 traits for liked, but only 3.9 traits
for disliked targets. Second, we computed the probability that a trait was shared
among the targets. Figure 7’s left panel shows the relevant data. The probability
that participants generated shared traits among liked targets was substantially
higher compared to disliked targets. This was true within participants’ eight
targets, but also across participants; that is, even across participants, liked targets
were more likely to share traits and therefore be more similar, providing support
for the assumption that there are ecologically fewer ways to be liked than to be
disliked. This difference in shared traits also held when controlling for the
number of generated traits in a regression analysis.
Experiment 7 then flipped the paradigm and asked participants to generate the names of two people they personally knew without specifying
whether they had to be liked or disliked. Instead, we asked them to generate
either positive traits or negative traits that described each of the two targets.
After providing as many traits as they could, we asked participants to rate the
similarity of the two targets. First, as expected, participants showed the
reversed effect as well – generating positive traits made the two targets appear
more similar compared to generating negative traits. As the targets were
selected in both conditions before we asked for positive or negative traits, any
alternative explanation in terms of differential target generation is taken care
of. In addition, participants generated more traits in the positive traits
condition, 6.4 on average, compared to the negative traits condition, where
they generated only 3.8 traits on average. Replicating Experiment 4, as shown
in Figure 7’s right panel, the probability that participants generated shared
traits among positive traits was substantially higher compared to negative
traits. This was again true within and also across participants, and also when
controlling for the absolute number of traits generated.
Across seven experiments, of which we summarised five here, we found that
positive traits are more frequently generated and these generated traits also are
more likely to be found across targets, leading to the conclusion that liked
people tend to be seen as alike. In particular, the within-participant comparisons might partially follow from intra-psychic mechanisms (e.g., motivated
reasoning to see your friends as similar and good); however, the effects across participants are difficult to explain without the presented EvIE model (see
Figure 7).
Frequency and valence: the common good in person perception
In another series of experiments (Alves, Koch, & Unkelbach, 2017b), we
tested a prediction from the frequency property discussed above: If positive
information is more frequent, then it should more likely co-occur with other
positive information compared to negative information. Across people, this
implies that people have positive traits in common, but their negative traits
make them distinct: “Those attributes that connect different people and that
define their similarities are usually good attributes. Those attributes that
distinguish different people and make them unique are often bad attributes.”
(p. 512). This prediction follows solely from the frequency property and does
not depend on the similarity of the information.
For illustration, let us again consider the formal relation of shared and
unshared positive and negative attributes, as we did above for personality
traits. For example, positive attributes may have the probability of being
present in any person of p(pos) = 0.6, and negative attributes may have
a probability of being present of p(neg) = 0.2. The probability that a given positive attribute is shared (i.e., simultaneously present in two persons) is then p(shared|positive) = p(pos)*p(pos) = 0.36, while for a given negative attribute it is p(shared|negative) = p(neg)*p(neg) = 0.04. In other
words, if a positive trait is three times more likely in the ecology than
a negative trait, it is nine times more likely to be shared than a negative
trait. This leads to two hypotheses: positive traits should be more likely to be
shared amongst targets compared to negative traits, p(shared|positive) > p
(shared|negative), and shared traits should be more likely to be positive
compared to negative traits, p(positive|shared) > p(negative|shared).
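As a quick check on this arithmetic, the short Python sketch below computes both conditional probabilities; the base rates repeat the illustration above, and the equal pool of ten candidate traits per valence is an added assumption, not a figure from Alves et al. (2017b).

p_pos, p_neg = 0.6, 0.2      # base rates from the illustration above
n_pos, n_neg = 10, 10        # assumed number of candidate traits of each valence

# Hypothesis 1: a positive trait is more likely to be shared than a negative one.
p_shared_given_pos = p_pos * p_pos   # 0.36
p_shared_given_neg = p_neg * p_neg   # 0.04

# Hypothesis 2: a shared trait is more likely to be positive than negative.
exp_shared_pos = n_pos * p_shared_given_pos   # expected shared positive traits
exp_shared_neg = n_neg * p_shared_given_neg   # expected shared negative traits
p_pos_given_shared = exp_shared_pos / (exp_shared_pos + exp_shared_neg)

print(p_shared_given_pos, p_shared_given_neg)   # 0.36 vs 0.04, a 9:1 ratio
print(round(p_pos_given_shared, 2))             # 0.9 with equal trait pools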
To test these hypotheses, Alves et al. (2017b) asked participants to
sample traits of target persons. Experiments 1a and 1b tested the first
prediction, p(shared|positive) > p(shared|negative). In Experiment 1a
(n = 41), participants generated two people they knew personally and
then generated four positive traits and four negative traits for one of the
two. Then, we asked them which of the eight traits also described the other
person. In line with our first prediction, participants assigned on average
3.4 positive traits (i.e., almost all) to both targets. Out of the four negative
traits, they assigned only 1.1 to both targets. Figure 8’s left panel reports the
respective conditional probabilities for positive and negative traits.
To generalise this result, Experiment 1b (n = 82) asked participants to
generate 10 target persons. Then, we randomly sampled a given set of four
positive and four negative traits from Experiment 1a and participants had to
indicate to which of the 10 targets each of the traits applied. Replicating 1a,
participants assigned on average 3.1 of the positive traits to a target from
their own sample, but only 1.2 of the negative traits. Figure 8 shows the
resulting conditional probabilities. As the left panel shows, positive traits
were much more likely to be shared across participants compared to negative
traits. And as the trait and target generation were separated in Experiment
1b, this replication provides support for our ecological argument.
Experiment 2 in this series of “common good” experiments (Alves et al.,
2017b) tested the second prediction: if a trait is shared as opposed to
unshared, it should be more likely positive, and thus, p(positive|shared) > p
(negative|shared). Participants again generated two target names; then, we
asked them for either shared or unshared traits. We asked for four shared
traits in the former, and two traits that belonged uniquely to the first target,
and two traits that belonged uniquely to the second target, in the latter
condition. Then, participants rated the valence of the generated traits.
Figure 8’s right panel shows the probabilities: Overall, participants generated
more positive traits than negative traits in both conditions, reflecting the
general positivity prevalence. Yet, in the shared condition, 3.5 traits were
positive on average, and only 0.2 traits were negative. In the unshared
condition, 2.3 traits were positive and 1.3 traits were negative. Thus, the traits
people have in common are usually positive.
Experiment 4a (n = 176) in Alves et al. (2017b) aimed to show that
searching for similarities (i.e., shared traits) amplifies the ecological default,
and searching for differences (i.e., unique traits) attenuates it. Thus, the
experiment replicated Experiment 2 but included a “natural” condition, in
addition to the “shared” and “unshared” conditions. The “natural” condition
asked participants to generate traits for two target persons without specifying
whether these should be shared or unshared traits. Again, across conditions,
participants generated substantially more positive traits: about 4.8 traits out
of six were positive. However, the probability of generating a positive trait
varied as a function of the traits being generated as “shared”, “unshared”, or
“natural” (i.e., without specific instructions). Figure 9 shows these probabilities of a trait being positive. The probability of a trait being positive was
smaller in the natural condition compared to the “shared” condition, and
smaller in the “unshared” condition compared to the “natural” condition.
Thus, as predicted, looking for similarities amplifies the prevalence of positive traits, while looking for differences attenuates it.
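This amplification and attenuation pattern also falls out of the same independence assumptions. The Monte Carlo sketch below reuses the illustrative base rates from the shared-trait example (again assumptions, not the study’s data) and recovers the ordering shared > natural > unshared for the probability that a generated trait is positive.

import random

random.seed(1)
p_pos, p_neg, n = 0.6, 0.2, 10   # assumed base rates and pool size per valence
pos_shared = neg_shared = pos_unique = neg_unique = pos_any = neg_any = 0

for _ in range(100_000):
    # Each of two target persons independently possesses each candidate trait.
    a_pos = [random.random() < p_pos for _ in range(n)]
    b_pos = [random.random() < p_pos for _ in range(n)]
    a_neg = [random.random() < p_neg for _ in range(n)]
    b_neg = [random.random() < p_neg for _ in range(n)]
    pos_shared += sum(x and y for x, y in zip(a_pos, b_pos))
    neg_shared += sum(x and y for x, y in zip(a_neg, b_neg))
    pos_unique += sum(x != y for x, y in zip(a_pos, b_pos))
    neg_unique += sum(x != y for x, y in zip(a_neg, b_neg))
    pos_any += sum(a_pos) + sum(b_pos)
    neg_any += sum(a_neg) + sum(b_neg)

print(pos_shared / (pos_shared + neg_shared))   # ~0.90, "shared" condition
print(pos_any / (pos_any + neg_any))            # ~0.75, "natural" baseline
print(pos_unique / (pos_unique + neg_unique))   # ~0.60, "unshared" condition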
A basic drawback in the reported “common good” studies so far is that
participants self-generated targets, which makes the observed “common good”
effect less surprising, as most people might generate people they know and also
like, and the phenomenon might follow from the “my friends are all alike”
effect described above. However, the present approach is different as it is solely
based on the proposed EvIE’s frequency property. The similarity property
implies that positive information should always be more similar to other
positive information (again; there is only one way to be good), and thus, as
long as people have friends they like, these should be alike.
The present “common good” effect, however, follows only if the available
information is predominantly positive. This leads to the reverse prediction if
the available information is predominantly negative. Thus, in Experiments 5
and 6 of Alves et al.’s (2017b) “common good” series, participants did not
generate targets, but we provided liked and disliked targets for which the
available trait information should be either predominantly positive or negative,
respectively. To do so, Experiment 5 took advantage of the US’s two-party
political structure of Democrats and Republicans and recruited 310 US participants online. Half of the participants generated either shared or unshared
traits for Mitt Romney and George W. Bush, two well-known Republicans, and
the other half did the same for Bill Clinton and Barack Obama, two well-known Democrats. To divide the sample, we asked participants how much they
liked these political figures; 160 participants reported liking the politicians in
their respective conditions, and 143 participants reported disliking them.
Seven participants reported neither liking nor disliking them and were
excluded from the analysis.
In Experiment 6 (n = 307), we sampled the target persons from a list of the
10 most popular and most unpopular people other participants generated.
The 10 most popular people for US citizens were Abraham Lincoln,
John F. Kennedy, Elvis Presley, Martin Luther King, Oprah Winfrey, Taylor
Swift, George Washington, Michael Jordan, Beyoncé Knowles, and Jesus
Christ. The 10 most unpopular people were Adolf Hitler, Donald Trump,
George W. Bush, Osama Bin Laden, Saddam Hussein, Joseph Stalin, Kim
Jong Un, Justin Bieber, Fidel Castro, and Kanye West. For example, participants generated four traits that Abraham Lincoln and Elvis Presley shared or
two traits that were unique to Lincoln and Presley, respectively. Each pairing
was randomly created for each participant. In the negative targets condition,
for example, participants generated traits that Adolf Hitler and Justin Bieber
shared, or two traits that were unique to each of these targets.
Figure 10 shows the results for these two studies, plotting the frequency of
traits being positive and negative as a function of being shared or unshared
among the target persons. For liked targets, the trait frequencies replicate the
previous studies. Both for liked political figures of that time as well as
consensually liked persons, looking for similarities yielded many positive
traits, and few negative traits. Looking for differences yielded fewer positive
traits and more negative traits. However, when participants disliked the
targets, that is, when operating in an ecology of predominantly negative
information, they generated more negative traits in the shared compared to
the unshared condition. Conversely, they provided fewer positive traits in the
shared compared to the unshared condition. This pattern of results provided
distinct evidence for the “common good” implication of the EvIE’s assumed
frequency property. Looking for similarities between targets amplifies, and
looking for differences between targets attenuates, the underlying base-rate;
and this base-rate is, in most cases, marked by a high frequency of positive
information, leading to a “Common Good” phenomenon.
Thus, based on the assumption that positive information is more frequent,
we predicted and found a novel phenomenon in person perception – the
common good effect. The attributes people have in common are usually good
attributes, and negative attributes are rather unique. In addition, searching
for similarities leads to the discovery of the common good, while searching
for differences subjectively attenuates the prevalence of positive information.
Intergroup biases: a cognitive-ecological explanation
Having shown implications of positive information’s higher similarity
(strong halo effects from positive traits; friends are more alike than enemies)
and positive information’s higher frequency (the common good phenomenon), our final example provides a genuinely new explanation for intergroup biases (Alves, Koch, & Unkelbach, 2018), by combining basic
cognitive processes with our assumptions about the EvIE.
One of the most prominent effects in social psychology is that people tend to
devalue minorities (e.g., refugees, immigrants) and out-groups (e.g., rival sport
teams, other states). There is a wealth of models and theories to explain these
biases (e.g., Tajfel and Turner’s Social Identity Theory, 1979; or Brewer’s
theory of optimal distinctiveness, 1991). However, taking the assumed EvIE
properties offers a novel explanation.
For this explanation, we only need the assumption that out-groups and
minorities are “novel” groups in comparison to ingroups and majorities. This
is highly plausible, as people usually come into contact first with their ingroups
(e.g., family, fellow citizens) and majorities (e.g., Whites, Christians); they
learn about outgroups and minorities later and these groups are then novel
in comparison to the former.
On the cognitive side, novel groups are defined in relation to existing groups
(i.e., ingroups, majorities) by the attributes that make them unique, rather than
by the attributes they have in common with existing groups (Hodges, 2005;
Sherman et al., 2009; Tversky & Gati, 1978). On the ecological side, as the
presented evidence suggests, positive attributes are less diverse or more similar
than negative information, and positive information is more frequent than
negative information. Consequently, unique attributes that differentiate
a novel group from already-known groups are likely to be negative.
Thus, the argument is as follows: Minorities and outgroups are most likely
novel groups to social perceivers, compared to majorities and ingroups.
Novel groups are defined by their unique attributes (i.e., the cognitive part)
and unique attributes are most likely negative (i.e., the ecological part),
leading to an association between outgroups and minorities with negative
attributes, which in turn may cause negative stereotypes and prejudice.
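A minimal simulation of this argument is sketched below. The pool sizes, trait counts, and the helper sample_group are hypothetical choices that merely echo the diversity and frequency properties; they are not the actual stimuli of Alves et al. (2018).

import random

random.seed(2)
POSITIVE_POOL = [f"pos_{i}" for i in range(4)]    # few ways to be good (assumed)
NEGATIVE_POOL = [f"neg_{i}" for i in range(16)]   # many ways to be bad (assumed)

def sample_group():
    # Hypothetical group profile: mostly positive traits (higher frequency),
    # drawn from the small positive pool, plus a couple of negative traits.
    return set(random.sample(POSITIVE_POOL, 3)) | set(random.sample(NEGATIVE_POOL, 2))

unique_negative = unique_total = 0
for _ in range(10_000):
    known_group, novel_group = sample_group(), sample_group()
    unique_traits = novel_group - known_group     # what differentiates the novel group
    unique_total += len(unique_traits)
    unique_negative += sum(t.startswith("neg") for t in unique_traits)

# Well above .5 with these assumptions: the attributes that set the novel
# group apart are predominantly negative, even though both groups are
# mostly described by positive traits.
print(round(unique_negative / unique_total, 2))   # ~0.70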
To test this explanation, we invited participants to take the role of space
explorers. On a novel planet, they would encounter members of two alien
tribes. We used the neutral aliens provided by Gupta et al. (2004) as stimuli.
Participants would encounter one member of the first tribe and receive
information about one of the alien’s traits; that is, they saw a picture of the
alien and the alien’s respective trait (e.g., helpful, intelligent, anxious, or
aggressive). After participants had encountered six members of the alien
tribe, we instructed participants to imagine that they would now continue
their travels and encounter another alien tribe. Then, they would learn about
the traits of six members of the second tribe. In the real world, people should
probabilistically learn first about members of their ingroup before learning
about members of outgroups. Similarly, they are more likely to meet majority
group members before meeting minority group members. Thus, the first
tribe is functionally similar to a majority or ingroup, and the second tribe is
functionally similar to minorities or out-groups. After these learning phases,
participants chose which group they preferred.
The central manipulation across three experiments was the trait pool from
which we assigned the two tribes’ traits. After learning, we asked participants
which tribe they preferred; that is, we elicited a binary preference choice
between the first and the second tribe as the central dependent variable.
Experiment 1 manipulated whether the positive or the negative
attributes were shared or unshared among the two groups. That is, in one
condition, the groups’ positive attributes were identical, while their negative
attributes differed, and this was reversed in the other condition. Table 4’s left
section presents the resulting preference frequencies. As predicted from our
cognitive-ecological explanation, participants preferred the first group when
the positive attributes were shared and negative attributes were unique, but
preferred the second group when positive attributes were unique and negative attributes were shared. In other words, although the distribution of
positive and negative traits was identical, there was a bias against the novel
group in a standard ecology (i.e., where negative information is unique),
which reversed as a function of the trait ecology.
Experiment 2 then manipulated the similarity of evaluative information in
the ecology. We created two attribute ecologies. In the standard ecology,
positive attributes were less diverse compared to negative attributes. In the
reversed ecology, negative attributes were less diverse. We manipulated
diversity by the number of unique traits in a given ecology. In the standard
ecology condition, we randomly sampled each alien tribe’s three positive
traits from a set of four traits, while we sampled the three negative traits from
a set of 16 traits (i.e., there were more ways to be negative). In the reversed
ecology condition, we sampled the alien tribes’ three negative traits from a set
of four traits, and their positive traits from a set of 16 traits (i.e., there were
more ways to be positive). Consequently, in the standard ecology, the
positive traits were likely to be shared and the first tribe should be preferred.
In the reversed ecology, the negative traits were likely to be shared and
the second tribe should be preferred. As Table 4’s middle panel shows, the
preference frequencies replicated Experiment 1. Participants preferred the
first group in the standard ecology (i.e., when negative attributes were likely
unique), but in the reversed ecology they preferred the second group (i.e.,
when positive attributes were likely unique).
Experiment 3 then manipulated the EvIE’s second property, the frequency
of evaluative information. In the standard ecology, both groups possessed
more positive than negative attributes, while in the reversed ecology, negative attributes were more frequent. Specifically, in the standard ecology, both
tribes displayed four positive traits and one negative trait. Both positive and
negative traits were randomly sampled from a set of six positive and six
negative traits. In the reversed ecology, both tribes displayed four negative
traits and one positive trait. Consequently, in the standard ecology (positive
frequent), unique attributes were likely to be negative, while in the reversed
ecology, unique attributes were likely to be positive.
Table 4’s right section shows the respective preference frequencies.
Replicating Experiments 1 and 2, participants preferred the first group in
the standard ecology, but they preferred the second group in the reversed
ecology. One apparent feature of Table 4 is that the standard ecologies (i.e.,
when negative information is unique) yield stronger differences between the
tribes, while the preference differential is less strong when positive information is unique. This is actually in line with our overall assumptions about the
EvIE. We did not control for the connotative similarity of the positive and
negative traits, but research on the similarity of personality traits
(Bruckmüller & Abele, 2013; Gräf & Unkelbach, 2016; Leising et al., 2012)
shows that positive traits are more similar to each other compared to
negative traits. By implication, the positive unique traits were less “unique”
compared to the negative unique traits. This differential valence asymmetry
explains at least part of the differential impact of the ordering.
Thus, across three experiments, participants associated a novel group
with its unique attributes, which differentiate the group from previously
encountered groups. Depending on the ecology’s properties, unique attributes were more likely to be positive or negative, and participants’ preferences followed accordingly. As the general structural properties of the EvIE
make unique attributes more likely negative, p(negative|unique) > p(positive|unique), an evaluative disadvantage for novel groups, and thereby for
minorities and outgroups, follows. In other words, people do not need
a real conflict (Sherif, Harvey, White, Hood, & Sherif, 1961), motivated
reasoning (Kunda, 1990), or a hostile personality structure to show differential preferences for minorities and outgroups (Altemeyer, 1998). Rather,
all they need is a cognitive system that tries to differentiate different groups
in an ecology that is marked by high similarity and a high frequency of
positive information.
Summary of the implications
We have provided two examples of how our EvIE model refines our knowledge
about classic and important social psychological phenomena. First, halo effects;
we have delineated and shown that halo effects appear predominantly for
positive traits, but are largely absent for negative traits, despite the typically
assumed stronger impact of negative information (Baumeister et al., 2001; Ito,
Larsen, Smith, & Cacioppo, 1998). Second, intergroup biases; we have provided a cognitive-ecological explanation for intergroup biases that does not rely
on motivated reasoning (Kunda, 1990; Tajfel & Turner, 1979), but builds solely
on cognitive processes that interact with the EvIE’s properties.
We have also provided two examples that illustrate the discovery of genuinely new phenomena. First, people’s friends are all alike. Based on the proposed
similarity property, we have shown that people perceive others they know and
like as more similar to one another, just because there is not much room for
variety on the positive side. Second, the common good phenomenon; based on
the proposed frequency property, we have shown that what people have in
common are usually positive attributes, just because negative attributes are
infrequent, and their joint occurrence is therefore unlikely.
Abstract: We propose the Evaluative Information Ecology (EvIE) model as a model of the social environment. It makes two assumptions: Positive “good” information is more frequent compared to negative “bad” information and positive information is more similar and less diverse compared to negative information. We review support for these two properties based on psycho-lexical studies (e.g., negative trait words are used less frequently but they are more diverse), studies on affective reactions (e.g., people experience positive emotions more frequently but negative emotions are more diverse), and studies using direct similarity assessments (i.e., people rate positive information as more similar/less diverse compared to negative information). Next, we suggest explanations for the two properties building on potential adaptive advantages, reinforcement learning, hedonistic sampling processes, similarity from co-occurrence, and similarity from restricted ranges. Finally, we provide examples of how the EvIE model refines well-established effects (e.g., intergroup biases; preferences for groups without motivation or intent) and how it leads to the discovery of novel phenomena (e.g., the common good phenomenon; people share positive traits but negative traits make them distinct). We close by discussing the benefits relative to the drawbacks of ecological approaches in social psychology and how an ecological and cognitive level of analysis may complement each other.
Keywords: Evaluation, ecology, halo effects, person perception, intergroup biases
Excerpts. Check the full paper for figures, tables, references, etc.
Implications
So far, we provided evidence and explanations for the higher frequency of
positive information relative to negative information (i.e., traits, experiences,
behaviours), and the higher similarity of positive information to other positive
information. In the remainder, we aim to back up our claim that the interaction of these properties with well-established social-cognitive principles within
the organism may lead to the discovery of novel phenomena and alternative
explanations for classic social psychological findings. We will address halo
effects, the relation of similarity and liking, the relation of frequency and liking,
as well as the field of intergroup biases. In the following review of our empirical
findings, anything that is reported as a difference is significant (i.e., probability
of the test statistic under the H0 is p < . 05), unless indicated otherwise; all
reported experiments had proper power considerations and reported all conditions, all data exclusions, and all variables. In addition, we predicted the
empirical findings from the assumed properties and did not derive the EvIE’s
properties from these studies; thus, the following experimental work supports
the EvIE as a general model for people’s social reality.
Halo effects: being honest makes you industrious, but lying does not make you lazy
Halo effects are among the best-established findings in psychology. Thorndike
(1920) coined the term when he observed a “constant error in psychological
ratings”: When army officers were evaluated by their superiors, theoretically
independent dimensions constantly correlated more highly than they should.
Thus, raters either used information on one dimension to rate another dimension or made inferences from a global impression about the to-be-rated target
(Cooper, 1981). Probably the most famous halo effect is from ratings of
physical attractiveness to ratings of intelligence or morality, famous under
the “What is beautiful is good” label (Dion, Berscheid, & Walster, 1972).
Based on our assumptions about the EvIE, an intriguing prediction
follows from the similarity property, namely that halo effects should be
most apparent given positive traits and rating dimensions, but less pronounced given negative traits. This is a strong prediction insofar as there is
consensus in the literature that negative information has more impact than
positive information on social evaluations (e.g., Kanouse & Hanson, 1972;
Peeters & Czapinski, 1990; Skowronski & Carlston, 1989).
To test this idea, Gräf and Unkelbach (2016) presented participants with
targets’ positive or negative traits as well as behaviours from two dimensions
of social perception (Bakan, 1966; see also Abele & Wojciszke, 2018), namely
communion (e.g., being honest) and agency (e.g., being industrious), and
asked participants to rate the targets on other traits either from the same or
the other dimension. Across three experiments, Gräf and Unkelbach investigated halo effects on 30 traits and 48 different behaviours. Participants
observed a target showing either a trait label or a behavioural description and
were asked how likely it was that the target would possess another trait
(Experiments 1 and 2) or would show another behaviour (Experiment 3).
Importantly, they varied the valence and the social perception dimension.
For example, participants saw a lying target (i.e., a negative communion trait)
and answered how likely this person was to also be lazy (i.e., a negative
agency trait), or in another trial, how likely this person was also to be egoistic
(i.e., negative communion trait). Similarly, they would see an honest target
and answer how likely this person was to also be industrious (or, in another
trial: helpful). Thus, the trials tested whether halo effects, an inference from
one behaviour/trait to another behaviour/trait, vary as a function of trait/
behaviour valence and as a function of within/between dimension inferences
on the two fundamental dimensions of social perception.
Figure 5 shows the data from these three experiments. As predicted from
the EvIE’s similarity property, positive traits and behaviours lead to substantially stronger halo effects, both within and across the dimensions of communion and agency (Gräf & Unkelbach, 2016; Exp. 1 to 3; see also, 2018, for
a conceptual replication). These findings are difficult to reconcile with classic
assumptions about the unconditional higher impact of negative information
on social evaluations, but they follow from the EvIE’s similarity property. The
results may also explain apparent features in the literature, namely why there
are few published studies showing “negative” halo effects (i.e., “horn” effects),
simply because they usually do not exist (i.e., lying does not make you lazy).
The EvIE’s frequency property also suggests an intriguing point; namely,
that the observed halo effects might not be an error in ratings (Thorndike,
1920), but a true property of the ecology. Similar to our argument concerning
how higher similarity follows from a higher frequency, the higher frequency
of occurrence of positive traits and behaviours also implies that any positive
trait or behaviour is more likely to co-occur. People should, therefore, learn
that positive traits and positive behaviours appear together on a person-level.
If our assumption about the EvIE’s frequency property is correct, then the
personality profile of being both honest and industrious is factually more
likely than the profile of being dishonest and lazy. From an ecological view,
the constant error in ratings observed by Thorndike might not be entirely an
error after all, but a generalisation of observed ecological co-occurrence to
a task involving trait ratings in a psychology experiment. Investigating this
alternative source for halo effects provides a fascinating venue for future
research.
Similarity and liking: your friends are all alike
The EvIE model states that positive information is more similar and less
diverse compared to negative information; as Figure 4 illustrates, there is
only one way (or fewer ways) to be good compared to the many ways
someone might be bad. One implication of this ecological property is that
liked people (i.e., someone’s friends) should be more similar to one another
compared to disliked people.
This is an interesting prediction, because, based on the hedonic sampling
principle discussed above, people should spend more time with other people
they like compared to people they do not like (Denrell, 2005). This increase
in spent time should lead to more knowledge about liked people, and thereby
to a more differentiated representation of these liked others. Smallman and
Roese (2008) explicitly stated this as follows: “to cherish a loved one is to
relish the fine nuances of his or her personality” while “the rejected and
forsaken are construed on a relatively surface level” (p. 1228). However, if we
assume that people like each other because they possess positive traits,
attributes, or qualities which makes them likeable, the EvIE’s assumed
similarity property predicts that these people should be very similar, particularly in comparison to disliked people. Their mental representation might
be highly differentiated as proposed by Smallman and Roese, but this differentiation does not make them dissimilar, just because the properties (i.e.,
traits and behaviours) that lead to liking are factually highly alike.
Alves, Koch, and Unkelbach (2016) conducted seven experiments to test
whether people see other people they like as more similar to one another
compared to people they dislike. We discuss five of these experiments in the
following. The basic paradigm was straightforward. Participants generated
names of target persons they liked and of targets they disliked. Then, they
used the spatial arrangement method described above (see Figure 1‘s right
panel; Hout et al., 2013) or pairwise similarity ratings (see Figure 1‘s left
panel) to arrange these targets on the screen according to the similarities of
their personalities. They also provided ratings of the time spent together with
these people and of how much they knew about them. As expected, participants reported having spent more time with liked compared to disliked
targets, and they reported knowing more about the liked compared to the
disliked targets. Yet, in line with the prediction from the EvIE, participants
consistently reported higher similarity for liked and disliked targets.
Figure 6 provides a summary of the similarity judgements from
Experiments 1, 3 and 5. Experiment 1 used target persons participants knew
personally with spatial arrangement to assess similarity. Experiment 3 used
target persons participants knew personally with pairwise comparisons to
assess similarity. Experiment 5 used celebrity targets with pairwise comparisons. As Figure 6 shows, participants consistently reported liked targets to be
more similar than disliked targets, despite spending more time with them. We
omit Experiments 2 and 6 here; Experiment 2 replicated Experiment 1 with
target valence manipulated between participants and Experiment 6 replicated
Experiment 5 with a larger set of celebrity targets.
Experiment 4 tested the underlying EvIE structure directly. Participants
generated as many traits as they could for each of the four liked and disliked
targets they named. First, in line with the assumed greater knowledge for liked
targets, participants generated on average 6.9 traits for liked, but only 3.9 traits
for disliked targets. Second, we computed the probability that a trait was shared
among the targets. Figure 7‘s left panel shows the relevant data. The probability
that participants generated shared traits among liked targets was substantially
higher compared to disliked targets. This was true within participants’ eight
targets, but also across participants; that is, even across participants, liked targets
were more likely to share traits and therefore be more similar, providing support
for the assumption that there are ecologically fewer ways to be liked than to be
disliked. This difference in shared traits also held when controlling for the
number of generated traits in a regression analysis.
Experiment 7 then flipped the paradigm and asked participants to generate the names of two people they personally knew without specifying
whether they had to be liked or disliked. Instead, we asked them to generate
either positive traits or negative traits that described each of the two targets.
After providing as many traits as they could, we asked participants to rate the
similarity of the two targets. First, as expected, participants showed the
reversed effect as well – generating positive traits made the two targets appear
more similar compared to generating negative traits. As the targets were
selected in both conditions before we asked for positive or negative traits, any
alternative explanation in terms of differential target generation is taken care
of. In addition, participants generated more traits in the positive traits
condition, 6.4 on average, compared to the negative traits condition, where
they generated only 3.8 traits on average. Replicating Experiment 4, as shown
in Figure 7‘s right panel, the probability that participants generated shared
traits among positive traits was substantially higher compared to negative
traits. This was again true within and also across participants, and also when
controlling for the absolute number of traits generated.
Across seven experiments, of which we summarised five here, we found that
positive traits are more frequently generated and these generated traits also are
more likely to be found across targets, leading to the conclusion that liked
people tend to be seen as alike. In particular, the within-participant comparisons might partially follow from intra-psychic mechanisms (e.g., motivated
reasoning to see your friends as similar and good); however, the effects acrossparticipants are difficult to explain without the presented EvIE model (see
Figure 7).
Frequency and valence: the common good in person perception
In another series of experiments (Alves, Koch, & Unkelbach, 2017b), we
tested a prediction from the frequency property discussed above: If positive
information is more frequent, then it should more likely co-occur with other
positive information compared to negative information. Across people, this
implies that people have positive traits in common, but their negative traits
make them distinct: “Those attributes that connect different people and that
define their similarities are usually good attributes. Those attributes that
distinguish different people and make them unique are often bad attributes.”
(p. 512). This prediction follows solely from the frequency property and does
not depend on the similarity of the information.
For illustration, let us again consider the formal relation of shared and
unshared positive and negative attributes, as we did above for personality
traits. For example, positive attributes may have the probability of being
present in any person of p(pos) = 0.6, and negative attributes may have
a probability of being present of p(neg) = 0.2. The probability of a shared
attribute (i.e., being simultaneously present in two persons) being positive is
then p(positive|shared) = p(pos)*p(pos) = 0.36, while the probability for the
negative attribute is p(negative|shared) = p(neg)*p(neg) = 0.04. In other
words, if a positive trait is three times more likely in the ecology than
a negative trait, it is nine times more likely to be shared than a negative
trait. This leads to two hypotheses: positive traits should be more likely to be
shared amongst targets compared to negative traits, p(shared|positive) > p
(shared|negative), and shared traits should be more likely to be positive
compared to negative traits, p(positive|shared) > p(negative|shared).
To test these hypotheses, Alves et al. (2017b) asked participants to
sample traits of target persons. Experiments 1a and 1b tested the first
prediction, p(shared|positive) > p(shared|negative). In Experiment 1a
(n = 41), participants generated two people they knew personally and
then generated four positive traits and four negative traits for one of the
two. Then, we asked them which of the eight traits also described the other
person. In line with our first prediction, participants assigned on average
3.4 positive traits (i.e., almost all) two both targets. Out of the four negative
traits, they assigned only 1.1 to both targets. Figure 8‘s left panel reports the
respective conditional probabilities for positive and negative traits.
To generalise this result, Experiment 1b (n = 82) asked participants to
generate 10 target persons. Then, we randomly sampled a given set of four
positive and four negative traits from Experiment 1a and participants had to
indicate to which of the 10 targets each of the traits applied. Replicating 1a,
participants assigned on average 3.1 of the positive traits to a target from
their own sample, but only 1.2 of the negative traits. Figure 8 shows the
resulting conditional probabilities. As the left panel shows, positive traits
were much more likely to be shared across participants compared to negative
traits. And as the trait and target generation were separated in Experiment
1b, this replication provides support for our ecological argument.
Experiment 2 in this series of “common good” experiments (Alves et al.,
2017b) tested the second prediction: if a trait is shared as opposed to
unshared, it should be more likely positive, and thus, p(positive|shared) > p
(negative|shared). Participants again generated two target names; then, we
asked them for either shared or unshared traits. We asked for four shared
traits in the former, and two traits that belonged uniquely to the first target,
and two traits that belonged uniquely to the second target, in the latter
condition. Then, participants rated the valence of the generated traits.
Figure 8‘s right panel shows the probabilities: Overall, participants generated
more positive traits than negative traits in both conditions, reflecting the
general positivity prevalence. Yet, in the shared condition, 3.5 traits were
positive on average, and only 0.2 traits were negative. In the unshared
condition, 2.3 traits were positive and 1.3 traits were negative. Thus, the traits
people have in common are usually positive.
Experiment 4a (n = 176) in Alves et al. (2017b) aimed to show that
searching for similarities (i.e., shared traits) amplifies the ecological default,
and searching for differences (i.e., unique traits) attenuates it. Thus, the
experiment replicated Experiment 2 but included a “natural” condition, in
addition to the “shared” and “unshared” conditions. The “natural” condition
asked participants to generate traits for two target persons without specifying
whether these should be shared or unshared traits. Again, across conditions,
participants generated substantially more positive traits: about 4.8 traits out
of six were positive. However, the probability of generating a positive trait
varied as a function of the traits being generated as “shared”, “unshared”, or
“natural” (i.e., without specific instructions). Figure 9 shows these probabilities of a trait being positive. The probability of a trait being positive was
smaller in the natural condition compared to the “shared” condition, and
smaller in the “unshared” condition compared to the “natural” condition.
Thus, as predicted, looking for similarities amplifies the prevalence of positive traits, while looking for differences attenuates it.
A basic drawback in the reported “common good” studies so far is that
participants self-generated targets, which makes the observed “common good”
effect less surprising, as most people might generate people they know and also
like, and the phenomenon might follow from the “my friends are all alike”
effect described above. However, the present approach is different as it is solely
based on the proposed EvIE’s frequency property. The similarity property
implies that positive information should always be more similar to other
positive information (again; there is only one way to be good), and thus, as
long as people have friends they like, these should be alike.
The present “common good” effect, however, follows only if the available
information is predominantly positive. This leads to the reverse prediction if
the available information is predominantly negative. Thus, in Experiments 5
and 6 in Alves et al. (2017b) “common good” series, participants did not
generate targets, but we provided liked and disliked targets for which the
available trait information should be either predominantly positive or negative,
respectively. To do so, Experiment 5 took advantage of the US’s bipartisan
political structure of Democrats and Republicans and recruited 310 US participants online. Half of the participants generated either shared or unshared
traits for Mitt Romney and George W. Bush, two well-known republicans, and
the other half did the same for Bill Clinton and Barack Obama, two wellknown democrats. To divide the sample, we asked participants how much they
liked these political figures; 160 participants reported liking the politicians in
their respective conditions, and 143 participants reported disliking them.
Seven participants reported neither liking nor disliking them and were
excluded from the analysis.
In Experiment 6 (n = 307), we sampled the target persons from a list of the
10 most popular and most unpopular people other participants generated.
The 10 most popular people for US citizens were Abraham Lincoln,
John F. Kennedy, Elvis Presley, Martin Luther King, Oprah Winfrey, Taylor
Swift, George Washington, Michael Jordan, Beyoncé Knowles, and Jesus
Christ. The 10 most unpopular people were Adolf Hitler, Donald Trump,
George W. Bush, Osama Bin Laden, Saddam Hussein, Joseph Stalin, Kim
Jong Un, Justin Bieber, Fidel Castro, and Kanye West. For example, participants generated four traits that Abraham Lincoln and Elvis Presley shared or
two traits that were unique to Lincoln and Presley, respectively. Each pairing
was randomly created for each participant. In the negative targets condition,
for example, participants generated traits that Adolf Hitler and Justin Biber
shared, or two traits that were unique to each of these targets.
Figure 10 shows the results for these two studies, plotting the frequency of
traits being positive and negative as a function of being shared or unshared
among the target persons. For liked targets, the trait frequencies replicate the
previous studies. Both for liked political figures of that time as well as
consensually liked persons, looking for similarities yielded many positive
traits, and few negative traits. Looking for differences yielded fewer positive
traits and more negative traits. However, when participants disliked the
targets, that is, when operating in an ecology of predominantly negative
information, they generated more negative traits in the shared compared to
the unshared condition. Conversely, they provided fewer positive traits in the
shared compared to the unshared condition. This pattern of results provided
distinct evidence for the “common good” implication of the EvIE’s assumed
frequency property. Looking for similarities between targets amplifies, and
looking for differences between targets attenuates, the underlying base-rate;
and this base-rate is, in most cases, marked by a high frequency of positive
information, leading to a “Common Good” phenomenon.
Thus, based on the assumption that positive information is more frequent,
we predicted and found a novel phenomenon in person perception – the
common good effect. The attributes people have in common are usually good
attributes, and negative attributes are rather unique. In addition, searching
for similarities leads to the discovery of the common good, while searching
for differences subjectively attenuates the prevalence of positive information.
Intergroup biases: a cognitive-ecological explanation
Having shown implications of positive information’s higher similarity
(strong halo effects from positive traits; friends are more alike than enemies)
and positive information’s higher frequency (the common good phenomenon), our final example provides a genuinely new explanation for intergroup biases (Alves, Koch, & Unkelbach, 2018), by combining basic
cognitive processes with our assumptions about the EvIE.
One of the most prominent effects in social psychology is that people tend to
devalue minorities (e.g., refugees, immigrants) and out-groups (e.g., rival sport
teams, other states). There is a wealth of models and theories to explain these
biases (e.g., Tajfel and Turner’s Social Identity Theory, 1979; or Brewer’s
theory of optimal distinctiveness, 1991). However, taking the assumed EvIE
properties offers a novel explanation.
For this explanation, we only need the assumption that out-groups and
minorities are “novel” groups in comparison to ingroups and majorities. This
is highly plausible, as people usually come in contact first with their ingroups
(e.g., family, fellow citizens) and majorities (e.g., Whites, Christians); they
learn about outgroups and minorities later and these groups are then novel
in comparison to the former.
On the cognitive side, novel groups are defined in relation to existing groups
(i.e., ingroups, majorities) by the attributes that make them unique, rather than
by the attributes they have in common with existing groups (Hodges, 2005;
Sherman et al., 2009; Tversky & Gati, 1978). On the ecological side, as the
presented evidence suggests, positive attributes are less diverse or more similar
than negative information, and positive information is more frequent than
negative information. Consequently, unique attributes that differentiate
a novel group from already-known groups are likely to be negative.
Thus, the argument is as follows: Minorities and outgroups are most likely
novel groups to social perceivers, compared to majorities and ingroups.
Novel groups are defined by their unique attributes (i.e., the cognitive part)
and unique attributes are most likely negative (i.e., the ecological part),
leading to an association between outgroups and minorities with negative
attributes, which in turn may cause negative stereotypes and prejudice.
To test this explanation, we invited participants to take the role of space
explorers. On a novel planet, they would encounter members of two alien
tribes. We used the neutral aliens provided by Gupta et al. (2004) as stimuli.
Participants would encounter one member of the first tribe and receive
information about one of the alien's traits; that is, they saw a picture of the
alien and the alien’s respective trait (e.g., helpful, intelligent, anxious, or
aggressive). After participants had encountered six members of the alien
tribe, we instructed participants to imagine that they would now continue
their travels and encounter another alien tribe. Then, they would learn about
the traits of six members of the second tribe. In the real world, people should
probabilistically learn first about members of their ingroup before learning
about members of outgroups. Similarly, they are more likely to meet majority
group members before meeting minority group members. Thus, the first
tribe is functionally similar to a majority or ingroup, and the second tribe is
functionally similar to minorities or out-groups. After these learning phases,
participants chose which group they preferred.
The central manipulation across three experiments was the trait pool from
which we assigned the two tribes’ traits. After learning, we asked participants
which tribe they prefer; that is, we elicited a binary preference choice
between the first and the second tribe as the central dependent variable.
Experiment 1 manipulated whether the positive or whether the negative
attributes were shared or unshared among the two groups. That is, in one
condition, the groups’ positive attributes were identical, while their negative
attributes differed, and this was reversed in the other condition. Table 4's left
section presents the resulting preference frequencies. As predicted from our
cognitive-ecological explanation, participants preferred the first group when
the positive attributes were shared and negative attributes were unique, but
preferred the second group when positive attributes were unique and negative attributes were shared. In other words, although the distribution of
positive and negative traits was identical, there was a bias against the novel
group in a standard ecology (i.e., where negative information is unique),
which reversed as a function of the trait ecology.
Experiment 2 then manipulated the similarity of evaluative information in
the ecology. We created two attribute ecologies. In the standard ecology,
positive attributes were less diverse compared to negative attributes. In the
reversed ecology, negative attributes were less diverse. We manipulated
diversity by the number of unique traits in a given ecology. In the standard
ecology condition, we randomly sampled each alien tribe’s three positive
traits from a set of four traits, while we sampled the three negative traits from
a set of 16 traits (i.e., there were more ways to be negative). In the reversed
ecology condition, we sampled the alien tribes’ three negative traits from a set
of four traits, and their positive traits from a set of 16 traits (i.e., there were
more ways to be positive). Consequently, in the standard ecology, the
positive traits were likely to be shared and the first tribe should be preferred.
In the reversed ecology, the negative traits were likely to be shared and
the second tribe should be preferred. As Table 4's middle panel shows, the
preference frequencies replicated Experiment 1. Participants preferred the
first group in the standard ecology (i.e., when negative attributes were likely
unique), but in the reversed ecology they preferred the second group (i.e.,
when positive attributes were likely unique).
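The ecological logic of this design is easy to check numerically. Below is a minimal simulation sketch (not the authors' materials or analysis; only the pool sizes of 4 and 16 and the three traits per valence follow the Experiment 2 description): two tribes draw traits from common pools, and in the standard ecology the traits they share come out mostly positive while their unique traits come out mostly negative, with the pattern flipping in the reversed ecology.

```python
import random

def simulate(pos_pool, neg_pool, n_pos=3, n_neg=3, trials=10_000):
    """Two tribes each draw n_pos positive and n_neg negative traits from the
    same pools; report the share of positive traits among shared vs. unique traits."""
    positives = [f"P{i}" for i in range(pos_pool)]
    negatives = [f"N{i}" for i in range(neg_pool)]
    shared_pos = shared_all = unique_pos = unique_all = 0
    for _ in range(trials):
        tribes = [set(random.sample(positives, n_pos)) | set(random.sample(negatives, n_neg))
                  for _ in range(2)]
        shared, unique = tribes[0] & tribes[1], tribes[0] ^ tribes[1]
        shared_pos += sum(t.startswith("P") for t in shared)
        unique_pos += sum(t.startswith("P") for t in unique)
        shared_all += len(shared)
        unique_all += len(unique)
    return shared_pos / max(shared_all, 1), unique_pos / max(unique_all, 1)

# Standard ecology: few ways to be positive (pool of 4), many ways to be negative (pool of 16)
print(simulate(pos_pool=4, neg_pool=16))   # shared traits mostly positive, unique traits mostly negative
# Reversed ecology: the pattern flips
print(simulate(pos_pool=16, neg_pool=4))
```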
Experiment 3 then manipulated the EvIE’s second property, the frequency
of evaluative information. In the standard ecology, both groups possessed
more positive than negative attributes, while in the reversed ecology, negative attributes were more frequent. Specifically, in the standard ecology, both
tribes displayed four positive traits and one negative trait. Both positive and
negative traits were randomly sampled from a set of six positive and six
negative traits. In the reversed ecology, both tribes displayed four negative
and one positive trait. Consequently, in the standard ecology (positive
frequent), unique attributes were likely to be negative, while in the reversed
ecology, unique attributes were likely to be positive.
Table 4's right section shows the respective preference frequencies.
Replicating Experiments 1 and 2, participants preferred the first group in
the standard ecology, but they preferred the second group in the reversed
ecology. One apparent feature of Table 4 is that the standard ecologies (i.e.,
when negative information is unique) yield stronger differences between the
tribes, while the preference differential is less strong when positive information is unique. This is actually in line with our overall assumptions about the
EvIE. We did not control for the connotative similarity of the positive and
negative traits, but research on the similarity of personality traits
(Bruckmüller & Abele, 2013; Gräf & Unkelbach, 2016; Leising et al., 2012)
shows that positive traits are more similar to each other compared to
negative traits. By implication, the positive unique traits were less “unique”
compared to the negative unique traits. This differential valence asymmetry
explains at least part of the differential impact of the ordering.
Thus, across three experiments, participants associated a novel group
with its unique attributes, which differentiate the group from previously
encountered groups. Depending on the ecology’s properties, unique attributes were more likely to be positive or negative, and participants’ preferences followed accordingly. As the general structural properties of the EvIE
make unique attributes more likely negative, p(negative|unique) > p(positive|unique), an evaluative disadvantage for novel groups, and thereby for
minorities and outgroups, follows. In other words, people do not need
a real conflict (Sherif, Harvey, White, Hood, & Sherif, 1961), motivated
reasoning (Kunda, 1990), or a hostile personality structure to show differential preferences for minorities and outgroups (Altemeyer, 1998). Rather,
all they need is a cognitive system that tries to differentiate different groups
in an ecology that is marked by high similarity and a high frequency of
positive information.
Summary of the implications
We have provided two examples of how our EvIE model refines our knowledge
about classic and important social psychological phenomena. First, halo effects;
we have delineated and shown that halo effects appear predominantly for
positive traits, but are largely absent for negative traits, despite the typically
assumed stronger impact of negative information (Baumeister et al., 2001; Ito,
Larsen, Smith, & Cacioppo, 1998). Second, intergroup biases; we have provided a cognitive-ecological explanation for intergroup biases that does not rely
on motivated reasoning (Kunda, 1990; Tajfel & Turner, 1979), but builds solely
on cognitive processes that interact with the EvIE’s properties.
We have also provided two examples that illustrate the discovery of genuinely new phenomena. First, people’s friends are all alike. Based on the proposed
similarity property, we have shown that people perceive others they know and
like as more similar to one another, just because there is not much room for
variety on the positive side. Second, the common good phenomenon; based on
the proposed frequency property, we have shown that what people have in
common are usually positive attributes, just because negative attributes are
infrequent, and their joint occurrence is therefore unlikely.
Is Orgasmic Meditation a Form of Sex?
Siegel, Vivian, Caryn Roth, Elisabeth Bolaza, and Benjamin Emmert-Aronson. 2019. “Is Orgasmic Meditation a Form of Sex?” SocArXiv. November 27. doi:10.31235/osf.io/89fvt
Abstract: Orgasmic Meditation(OM) is a structured, partnered meditative practice in which one person, who can be any gender, strokes the clitoris of their partner for 15 minutes. As such, it resembles a sexual activity. OM is taught as a practice that is distinct from sex, and we wondered whether people who engage in OM actually maintain that distinction themselves. We conducted an online convenience sample survey including qualitative open-ended text questions and quantitative Likert-style questions that was distributed to email listservs for practitioners of OM. The 30-item questionnaire included questions designed to differentiate the potentially related concepts of OM, seated meditation, fondling, and sex, as bases for comparison. The quantitative results of this mixed method study show that OM practitioners view the practice as significantly more similar to meditation than to sex or fondling. These results were consistent, regardless of whether the question was asked in the positive or negative and whether OM was being compared to one behavior individually or to multiple behaviors at the same time. The distinction between OM and sex/fondling rapidly becomes more pronounced as practitioners complete more OMs. This suggests that the novelty of genital touching in meditation may diminish over time, as practitioners get used to the more alternative point of focus. The results of this study have implications for the practice and how it is approached and regulated.
Discussion
This is the first study of its kind on the topic of Orgasmic Meditation and how practitioners
perceive this practice. There is little research on Orgasmic Meditation in general, and this study
helps place it in the larger context of meditation and sexuality, two fields with much ongoing
research. The quantitative results of this mixed method study show that OM practitioners view
the practice as significantly more similar to meditation than to sex or fondling. These results
were consistent, regardless of whether the question was asked in the positive or negative (i.e.
disagreeing with the question OM is sex, agreeing with the question OM is not sex) – and
whether OM was being compared to one behavior individually or to multiple behaviors at the
same time.
The results of this study also show that the distinction between OM and sex/fondling rapidly
becomes more pronounced as practitioners complete more OMs. This suggests that the novelty
of genital touching in meditation may diminish over time, as practitioners get used to the more
alternative point of focus. If OM is viewed differently by different groups of practitioners, there
may be programmatic and policy implications in the management of OM instruction. For
example, if new OM practitioners are more likely to conflate OM and sex, there is a heightened
likelihood of unintended outcomes related to sexual stigma, trauma, or perceived sexual
harassment at that stage. Such sexuality-related side-effects should be addressed in the
instruction and in communications with participants, and additional supports may be necessary
to help new practitioners navigate these complexities until they are clear on the practice and
how to internalize their experiences.
In addition, of the gender and sexual orientation combinations with large enough sample sizes
to study, bisexual women viewed the practice as most different from sex/fondling. This was
surprising because of the scientific research showing that lesbians were more likely to classify
manual-genital contact as sex. The finding suggests that the context highly impacts how an act
of a sexual nature is perceived. The same physical behavior that may be considered sex in the
bedroom is considered meditation within the container of a practice setting. The fact that this
subgroup was more adamant that the practice is not sex highlights how much the intention
behind the act makes a difference. That is, the practice is not differentiated from sex because of
the actual physical action, but because of the intention behind it.
Future Research
Given that OM involves genital stroking, it is interesting to ask why the survey responses are
so clear. One possibility is that certain aspects of the practice itself clearly divide it from
sex and fondling. For example, the stroker in OM is
fully clothed and wears gloves. There is no eye gazing or kissing. Practitioners generally do the
practice away from their beds, usually on the floor with meditation and other cushions that do
not resemble bed pillows. Practitioners are also taught that if the practice stirs desire for sex,
they should complete the practice and put away the practice supplies before deciding whether to
have sex. If practitioners follow this guidance, then OM will effectively be separated from sex or
from activities that might lead to sex.
It is also important to note that OM is not the only practice that confounds the traditional
conceptualizations of sex and meditation. Tantra, for example, seeks to use sexual energy to
reach a meditative state (Nagaraj 2013). It would be interesting to know how practitioners of
tantra would respond to a similar survey.
Individuals with dark traits have the ability to empathize, but have a low disposition to do so
Individuals with dark traits have the ability but not the disposition to empathize. Petri J. Kajonius, Therese Björkman. Personality and Individual Differences, November 30 2019, 109716. https://doi.org/10.1016/j.paid.2019.109716
Highlights
• We tested if the Dark Triad was best described by ability- or trait-empathy.
• Dark Triad had no relationship with ability-empathy.
• Dark Triad had a strong negative relationship with trait-empathy.
• Cognitive ability explained ability-empathy.
• Dark personalities seem cognizant, but not inclined to empathize.
Abstract: Empathy is fundamental to social cognition and societal values. Empathy is theorized as having both the ability as well as the disposition to imagine the content of other people's minds. We tested whether the notorious low empathy in dark personalities (Machiavellianism, psychopathy, and narcissism; the Dark Triad) is best characterized by a lack of capacity (ability) or lack of disposition (trait). Data was collected for 278 international participants through an anonymous online survey shared on the online platform LinkedIn, consisting of trait-based Dark Triad personality (SD3) and empathy (IRI), and cognitive ability (ICAR16) and ability-based empathy (MET). Dark personality traits had no relationship with ability-based empathy, but strongly so with trait-based empathy (β = -0.47). Instead, cognitive ability explained ability-based empathy (β = 0.31). The finding is that dark personalities in a community sample is normally cognizant to empathize but has a low disposition to do so. This finding may help shed further light on how personality is interlinked with ability.
Keywords: PersonalityDark triadEmpathyCognitive ability
From the introduction: Empathy is a core feature of human beings in social interaction (Myyrya, Juujärvi, & Pesso, 2010). No matter where in the world we live, individuals in any given community are expected to be able to cognizant and sensitive to other people's minds. Individuals who violate such values are often looked down upon in society (Persson & Kajonius, 2016). In personality psychology, there has been an increase of interest in so-called dark personality traits (Moshagen, Hilbig, & Zettler, 2018), which are characterized by violating social values (Kajonius, Persson, & Jonason, 2015). The idea behind the most used Dark Triad personality model (DT; Paulhus & Williams, 2002) is to capture the multidimensionality of complex traits leading up to this, and that these can be described by subclinical Machiavellianism (tendency to manipulate), psychopathy (callousness), and narcissism (grandiosity) (Jonason & Kroll, 2015). It is still unclear whether individuals scoring high on these dark personality traits are mostly lacking the capacity (ability) or mostly lacking the disposition (trait) to feel what others feel (see Keysers & Gazzola, 2014). The purpose of the present study is to explore empathy and to test the idea that it is not inability but more a lack of disposition that drives dark personalities’ low empathy.
4. Discussion
The present study aimed at investigating whether persons scoring high on the Dark Triad (Machiavellianism, psychopathy, and narcissism) mostly relate to the lack of ability or the lack of disposition to empathize. The results showed that it was more a lack of empathic disposition than inability that characterizes dark personalities. First, the Dark Triad had a very strong relationship with dispositional trait-based empathy. Second, the Dark Triad had a weak (almost non-existent) relationship with ability-based empathy. Third, cognitive ability explained most of ability-based empathy.
The first hypothesis of the present study were largely in line with previous research, which mainly has shown a consistent negative relationship between dark personality and trait-based empathy (BaronCohen, Wheelwright, Hill, Raste, & Plumb, 2001; Jonason, Lyons, Bethell, & Ross, 2013; Pajevic et al., 2018). The relationship between higher Dark Triad (SD3) and lower trait-based empathy (IRI) was large (Fig. 1). This effect size is trending towards high convergence, even more so if controlled for reliability and subscale variance – According to updated guidelines in psychology research, correlations between r = 0.00–0.09 should be interpreted as trivial to non-existent, r = 0.10–0.19 weak, r = 0.20–0.29 medium, and above r > 0.30 strong (Gignac & Szodorai, 2016). Interestingly, lack of empathy as measured by IRI entails not only lack empathic concern, but also no imagination for others’ minds (FT), not being cognizant in perspective taking (PT), and no distress for others’ welfare (PD). This result confirms the general notion that the Dark Triad and dispositions towards empathy are related negatively (Wai & Tiliopoulos, 2012).
Conceivably, a somewhat novel finding is the non-significant relationship between the Dark Triad and ability-based empathy in the present study, confirming the second hypothesis. Among the sparse studies on the subject, Wai and Tiliopoulos (2012) similarly found no connection. The measurement of MET indicates that dark personalities are more or less normally distributed in relation to the ability of reading emotions in faces. In our community sample, the popular notion of an intelligent, cunning psychopath or narcissist being a master-mind in capacity of reading people was not found in evidence. Similarly, the opposite notion of an impulsive thug incapable of interpreting people's faces cannot be supported. Interestingly, there seems to be almost no relationship between ability-based empathy (MET) and the subscales of trait-based empathy (IRI), as seen in Table 1. The present study seems to support the notion that ability and traits are very different empathy constructs.
Moreover, cognitive ability clearly governed ability-based empathy (Fig. 1). Individuals’ cognitive ability (ICAR) overlapped with the ability to read people's emotions through facial expressions (MET). This may not be all too surprising since general intelligence (aka the Gfactor) is known to permeate most psychological domains related to mental performance (Nisbett et al., 2012). Perhaps somewhat unexpected, a small positive relationship between the Dark Triad and cognitive ability was also found in the present study. If anything, this should according to literature be close to zero or even negative. Apart from having been a spurious result (n.b. this was only marginally significant, p = .04), one explanation is that being smart and slightly antagonistic may very well have been one of the characteristics of someone choosing to partake in a study on dark personalities, slightly increasing this correlation. Based on the tested model, it seems clear that the higher the cognitive ability, the higher the ability to read other's emotions, but also that this is likely unrelated to dark personalities.
Friday, November 29, 2019
Hard Problems in Cryptocurrency: Cryptographic (expected to be solvable with purely mathematical techniques), consensus theory (improvements to proof of work and proof of stake), and economic
Hard Problems in Cryptocurrency: Five Years Later. Vitalik Buterin. Nov 22 2019. https://vitalik.ca/general/2019/11/22/progress.html
[Check original post for lots of links]
Special thanks to Justin Drake and Jinglan Wang for feedback
In 2014, I made a post and a presentation with a list of hard problems in math, computer science and economics that I thought were important for the cryptocurrency space (as I then called it) to be able to reach maturity. In the last five years, much has changed. But exactly how much progress on what we thought then was important has been achieved? Where have we succeeded, where have we failed, and where have we changed our minds about what is important? In this post, I'll go through the 16 problems from 2014 one by one, and see just where we are today on each one. At the end, I’ll include my new picks for hard problems of 2019.
The problems are broken down into three categories: (i) cryptographic, and hence expected to be solvable with purely mathematical techniques if they are to be solvable at all, (ii) consensus theory, largely improvements to proof of work and proof of stake, and (iii) economic, and hence having to do
with creating structures involving incentives given to different participants, and often involving the application layer more than the protocol layer. We see significant progress in all categories, though some more than others.
Cryptographic problems
1 Blockchain Scalability
One of the largest problems facing the cryptocurrency space today is the issue of scalability ... The main concern with [oversized blockchains] is trust: if there are only a few entities capable of running full nodes, then those entities can conspire and agree to give themselves a large number of additional bitcoins, and there would be no way for other users to see for themselves that a block is invalid without processing an entire block themselves.
Problem: create a blockchain design that maintains Bitcoin-like security guarantees, but where the maximum size of the most powerful node that needs to exist for the network to keep functioning is substantially sublinear in the number of transactions.
Status: Great theoretical progress, pending more real-world evaluation.
Scalability is one technical problem that we have had a huge amount of progress on theoretically. Five years ago, almost no one was thinking about sharding; now, sharding designs are commonplace. Aside from ethereum 2.0, we have OmniLedger, LazyLedger, Zilliqa and research papers seemingly coming out every month. In my own view, further progress at this point is incremental. Fundamentally, we already have a number of techniques that allow groups of validators to securely come to consensus on much more data than an individual validator can process, as well as techniques that allow clients to indirectly verify the full validity and availability of blocks even under 51% attack conditions.
These are probably the most important technologies:
. Random sampling, allowing a small randomly selected committee to statistically stand in for the full validator set: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#how-can-we-solve-the-single-shard-takeover-attack-in-an-uncoordinated-majority-model
. Fraud proofs, allowing individual nodes that learn of an error to broadcast its presence to everyone else: https://bitcoin.stackexchange.com/questions/49647/what-is-a-fraud-proof
. Proofs of custody, allowing validators to probabilistically prove that they individually downloaded and verified some piece of data: https://ethresear.ch/t/1-bit-aggregation-friendly-custody-bonds/2236
. Data availability proofs, allowing clients to detect when the bodies of blocks that they have headers for are unavailable: https://arxiv.org/abs/1809.09044. See also the newer coded Merkle trees proposal.
There are also other smaller developments like Cross-shard communication via receipts as well as "constant-factor" enhancements such as BLS signature aggregation.
That said, fully sharded blockchains have still not been seen in live operation (the partially sharded Zilliqa has recently started running). On the theoretical side, there are mainly disputes about details remaining, along with challenges having to do with stability of sharded networking, developer experience and mitigating risks of centralization; fundamental technical possibility no longer seems in doubt. But the challenges that do remain are challenges that cannot be solved by just thinking about them; only developing the system and seeing ethereum 2.0 or some similar chain running live will suffice.
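As a rough illustration of why random sampling works, the sketch below computes the hypergeometric probability that a uniformly sampled committee is captured by a malicious minority of the full validator set (the numbers are purely illustrative, not actual ethereum 2.0 parameters):

```python
from math import comb, ceil

def p_committee_captured(n, m, k, threshold=2/3):
    """Probability that a uniformly sampled committee of k validators
    (out of n total, m of them malicious) is at least `threshold` malicious."""
    need = ceil(threshold * k)
    return sum(comb(m, j) * comb(n - m, k - j) for j in range(need, k + 1)) / comb(n, k)

# Example: 1/3 of 10,000 validators are malicious; a 128-member committee
# reaches a malicious 2/3 majority only with vanishingly small probability.
print(p_committee_captured(10_000, 3_333, 128))
```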
2 Timestamping
Problem: create a distributed incentive-compatible system, whether it is an overlay on top of a blockchain or its own blockchain, which maintains the current time to high accuracy. All legitimate users have clocks in a normal distribution around some "real" time with standard deviation 20 seconds ... no two nodes are more than 20 seconds apart. The solution is allowed to rely on an existing concept of "N nodes"; this would in practice be enforced with proof-of-stake or non-sybil tokens (see #9). The system should continuously provide a time which is within 120s (or less if possible) of the internal clock of >99% of honestly participating nodes. External systems may end up relying on this system; hence, it should remain secure against attackers controlling < 25% of nodes regardless of incentives.
Status: Some progress.
Ethereum has actually survived just fine with a 13-second block time and no particularly advanced timestamping technology; it uses a simple technique where a client does not accept a block whose stated timestamp is later than the client's local time. That said, this has not been tested under serious attacks. The recent network-adjusted timestamps proposal tries to improve on the status quo by allowing the client to determine the consensus on the time in the case where the client does not locally know the current time to high accuracy; this has not yet been tested. But in general, timestamping is not currently at the foreground of perceived research challenges; perhaps this will change once more proof of stake chains (including Ethereum 2.0 but also others) come online as real live systems and we see what the issues are.
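As a toy illustration of the idea behind such proposals (this is not the actual network-adjusted timestamps design), a client could adopt the median of reported peer clocks, which a minority of dishonest peers cannot move very far:

```python
import statistics, time

def network_adjusted_time(local_time, peer_times):
    # Trust the median of reported clocks; a minority of outliers barely moves it.
    return statistics.median(peer_times + [local_time])

now = time.time()
peers = [now - 5, now - 1, now + 3, now + 8, now + 10_000, now + 10_000]
print(network_adjusted_time(now, peers) - now)  # a few seconds off, despite two wildly lying peers
```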
3 Arbitrary Proof of Computation
Problem: create programs POC_PROVE(P,I) -> (O,Q) and POC_VERIFY(P,O,Q) -> { 0, 1 } such that POC_PROVE runs program P on input I and returns the program output O and a proof-of-computation Q and POC_VERIFY takes P, O and Q and outputs whether or not Q and O were legitimately produced by the POC_PROVE algorithm using P.
Status: Great theoretical and practical progress.
This is basically saying, build a SNARK (or STARK, or SHARK, or...). And we've done it! SNARKs are now increasingly well understood, and are even already being used in multiple blockchains today (including tornado.cash on Ethereum). And SNARKs are extremely useful, both as a privacy technology (see Zcash and tornado.cash) and as a scalability technology (see ZK Rollup, STARKDEX and STARKing erasure coded data roots).
There are still challenges with efficiency; making arithmetization-friendly hash functions (see here and here for bounties for breaking proposed candidates) is a big one, and efficiently proving random memory accesses is another. Furthermore, there's the unsolved question of whether the O(n * log(n)) blowup in prover time is a fundamental limitation or if there is some way to make a succinct proof with only linear overhead as in bulletproofs (which unfortunately take linear time to verify). There are also ever-present risks that the existing schemes have bugs. In general, the problems are in the details rather than the fundamentals.
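To make the interface in the problem statement concrete, here is a deliberately naive stand-in (not a SNARK): the "proof" is just the input itself, so the verifier re-executes P. What SNARKs/STARKs achieve is replacing Q with a short proof that can be checked much faster than re-running P.

```python
def poc_prove(P, I):
    O = P(I)
    Q = I                 # placeholder "proof": just the input/witness itself
    return O, Q

def poc_verify(P, O, Q):
    return P(Q) == O      # naive verification by re-execution; succinct proofs avoid this

square = lambda x: x * x
O, Q = poc_prove(square, 7)
assert poc_verify(square, O, Q)
```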
4 Code Obfuscation
The holy grail is to create an obfuscator O, such that given any program P the obfuscator can produce a second program O(P) = Q such that P and Q return the same output if given the same input and, importantly, Q reveals no information whatsoever about the internals of P. One can hide inside of Q a password, a secret encryption key, or one can simply use Q to hide the proprietary workings of the algorithm itself.
Status: Slow progress.
In plain English, the problem is saying that we want to come up with a way to "encrypt" a program so that the encrypted program would still give the same outputs for the same inputs, but the "internals" of the program would be hidden. An example use case for obfuscation is a program containing a private key where the program only allows the private key to sign certain messages.
A solution to code obfuscation would be very useful to blockchain protocols. The use cases are subtle, because one must deal with the possibility that an on-chain obfuscated program will be copied and run in an environment different from the chain itself, but there are many possibilities. One that personally interests me is the ability to remove the centralized operator from collusion-resistance gadgets by replacing the operator with an obfuscated program that contains some proof of work, making it very expensive to run more than once with different inputs as part of an attempt to determine individual participants' actions.
Unfortunately this continues to be a hard problem. There is ongoing work attacking the problem, with one side making constructions (eg. this) that try to reduce the number of assumptions on mathematical objects that we do not know practically exist (eg. general cryptographic multilinear maps) and another side trying to make practical implementations of the desired mathematical objects. However, all of these paths are still quite far from creating something viable and known to be secure. See https://eprint.iacr.org/2019/463.pdf for a more general overview of the problem.
5 Hash-Based Cryptography
Problem: create a signature algorithm relying on no security assumption but the random oracle property of hashes that maintains 160 bits of security against classical computers (ie. 80 vs. quantum due to Grover's algorithm) with optimal size and other properties.
Status: Some progress.
There have been two strands of progress on this since 2014. SPHINCS, a "stateless" (meaning, using it multiple times does not require remembering information like a nonce) signature scheme, was released soon after this "hard problems" list was published, and provides a purely hash-based signature scheme of size around 41 kB. Additionally, STARKs have been developed, and one can create signatures of similar size based on them. The fact that not just signatures, but also general-purpose zero knowledge proofs, are possible with just hashes was definitely something I did not expect five years ago; I am very happy that this is the case. That said, size continues to be an issue, and ongoing progress (eg. see the very recent DEEP FRI) is continuing to reduce the size of proofs, though it looks like further progress will be incremental.
The main not-yet-solved problem with hash-based cryptography is aggregate signatures, similar to what BLS aggregation makes possible. It's known that we can just make a STARK over many Lamport signatures, but this is inefficient; a more efficient scheme would be welcome. (In case you're wondering if hash-based public key encryption is possible, the answer is, no, you can't do anything with more than a quadratic attack cost)
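For readers unfamiliar with hash-based signatures, here is a minimal sketch of the classic Lamport one-time scheme that constructions like the above build on; it relies on nothing but a hash function, but each key pair can sign only once and signatures weigh several kilobytes, which is exactly the size problem discussed above.

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen(bits=256):
    # secret key: two random 32-byte preimages per message bit; public key: their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg, n):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(n)]

def sign(sk, msg):
    # reveal one preimage per bit of the message digest
    return [sk[i][b] for i, b in enumerate(msg_bits(msg, len(sk)))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg, len(pk))))

sk, pk = keygen()
sig = sign(sk, b"example message")          # ~8 kB of revealed preimages
assert verify(pk, b"example message", sig)
```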
Consensus theory problems
6 ASIC-Resistant Proof of Work
One approach at solving the problem is creating a proof-of-work algorithm based on a type of computation that is very difficult to specialize ... For a more in-depth discussion on ASIC-resistant hardware, see https://blog.ethereum.org/2014/06/19/mining/.
Status: Solved as far as we can.
About six months after the "hard problems" list was posted, Ethereum settled on its ASIC-resistant proof of work algorithm: Ethash. Ethash is known as a memory-hard algorithm. The theory is that random-access memory in regular computers is well-optimized already and hence difficult to improve on for specialized applications. Ethash aims to achieve ASIC resistance by making memory access the dominant part of running the PoW computation. Ethash was not the first memory-hard algorithm, but it did add one innovation: it uses pseudorandom lookups over a two-level DAG, allowing for two ways of evaluating the function. First, one could compute it quickly if one has the entire (~2 GB) DAG; this is the memory-hard "fast path". Second, one can compute it much more slowly (still fast enough to check a single provided solution quickly) if one only has the top level of the DAG; this is used for block verification.
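The sketch below is a toy memory-hard PoW in the same spirit (sizes are tiny and the cache-to-dataset derivation is deliberately simplified, so this is not Ethash itself): a miner holding the full dataset evaluates the function with cheap lookups, while a verifier holding only the small cache can recompute the few items it needs.

```python
import hashlib

def h(*parts):
    x = hashlib.sha256()
    for p in parts:
        x.update(p if isinstance(p, bytes) else str(p).encode())
    return x.digest()

CACHE_SIZE, DATASET_SIZE, ROUNDS = 64, 4096, 16            # toy parameters

cache = [h(b"seed", i) for i in range(CACHE_SIZE)]          # small top level
def dataset_item(i):                                        # each item derived from the cache
    return h(cache[i % CACHE_SIZE], i)
dataset = [dataset_item(i) for i in range(DATASET_SIZE)]    # full dataset ("fast path")

def pow_hash(header, nonce, lookup):
    mix = h(header, nonce)
    for _ in range(ROUNDS):
        idx = int.from_bytes(mix[:4], "big") % DATASET_SIZE
        mix = h(mix, lookup(idx))                           # pseudorandom memory access
    return mix

# Miner uses the precomputed dataset; a light verifier recomputes items from the cache.
assert pow_hash(b"hdr", 42, lambda i: dataset[i]) == pow_hash(b"hdr", 42, dataset_item)
```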
Ethash has proven remarkably successful at ASIC resistance; after three years and billions of dollars of block rewards, ASICs do exist but are at best 2-5 times more power and cost-efficient than GPUs. ProgPoW has been proposed as an alternative, but there is a growing consensus that ASIC-resistant algorithms will inevitably have a limited lifespan, and that ASIC resistance has downsides because it makes 51% attacks cheaper (eg. see the 51% attack on Ethereum Classic).
I believe that PoW algorithms that provide a medium level of ASIC resistance can be created, but such resistance is limited-term and both ASIC and non-ASIC PoW have disadvantages; in the long term the better choice for blockchain consensus is proof of stake.
7 Useful Proof of Work
[M]aking the proof of work function something which is simultaneously useful; a common candidate is something like Folding@home, an existing program where users can download software onto their computers to simulate protein folding and provide researchers with a large supply of data to help them cure diseases.
Status: Probably not feasible, with one exception.
The challenge with useful proof of work is that a proof of work algorithm requires many properties:
. Hard to compute
. Easy to verify
. Does not depend on large amounts of external data
. Can be efficiently computed in small "bite-sized" chunks
Unfortunately, there are not many computations that are useful that preserve all of these properties, and most computations that do have all of those properties and are "useful" are only "useful" for far too short a time to build a cryptocurrency around them.
However, there is one possible exception: zero-knowledge-proof generation. Zero knowledge proofs of aspects of blockchain validity (eg. data availability roots for a simple example) are difficult to compute, and easy to verify. Furthermore, they are durably difficult to compute; if proofs of "highly structured" computation become too easy, one can simply switch to verifying a blockchain's entire state transition, which becomes extremely expensive due to the need to model the virtual machine and random memory accesses.
Zero-knowledge proofs of blockchain validity provide great value to users of the blockchain, as they can substitute the need to verify the chain directly; Coda is doing this already, albeit with a simplified blockchain design that is heavily optimized for provability. Such proofs can significantly assist in improving the blockchain's safety and scalability. That said, the total amount of computation that realistically needs to be done is still much less than the amount that's currently done by proof of work miners, so this would at best be an add-on for proof of stake blockchains, not a full-on consensus algorithm.
8 Proof of Stake
Another approach to solving the mining centralization problem is to abolish mining entirely, and move to some other mechanism for counting the weight of each node in the consensus. The most popular alternative under discussion to date is "proof of stake" - that is to say, instead of treating the consensus model as "one unit of CPU power, one vote" it becomes "one currency unit, one vote".
Status: Great theoretical progress, pending more real-world evaluation.
Near the end of 2014, it became clear to the proof of stake community that some form of "weak subjectivity" is unavoidable. To maintain economic security, nodes need to obtain a recent checkpoint extra-protocol when they sync for the first time, and again if they go offline for more than a few months. This was a difficult pill to swallow; many PoW advocates still cling to PoW precisely because in a PoW chain the "head" of the chain can be discovered with the only data coming from a trusted source being the blockchain client software itself. PoS advocates, however, were willing to swallow the pill, seeing the added trust requirements as not being large. From there the path to proof of stake through long-duration security deposits became clear.
Most interesting consensus algorithms today are fundamentally similar to PBFT, but replace the fixed set of validators with a dynamic list that anyone can join by sending tokens into a system-level smart contract with time-locked withdrawals (eg. a withdrawal might in some cases take up to 4 months to complete). In many cases (including ethereum 2.0), these algorithms achieve "economic finality" by penalizing validators that are caught performing actions that violate the protocol in certain ways (see here for a philosophical view on what proof of stake accomplishes).
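At its simplest, "one currency unit, one vote" amounts to selecting block proposers with probability proportional to stake. A minimal sketch of that idea follows (real protocols use a verifiable random beacon rather than a local RNG, and add slashing of the deposits just described):

```python
import random

def pick_proposer(stakes, seed):
    """Sample a proposer with probability proportional to deposited stake."""
    rng = random.Random(seed)
    validators, weights = zip(*stakes.items())
    return rng.choices(validators, weights=weights, k=1)[0]

# bob holds half the stake, so across seeds he is chosen about half the time
print(pick_proposer({"alice": 32.0, "bob": 64.0, "carol": 32.0}, seed=2019))
```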
As of today, we have (among many other algorithms):
Casper FFG: https://arxiv.org/abs/1710.09437
Tendermint: https://tendermint.com/docs/spec/consensus/consensus.html
HotStuff: https://arxiv.org/abs/1803.05069
Casper CBC: https://vitalik.ca/general/2018/12/05/cbc_casper.html
There continues to be ongoing refinement (eg. here and here) . Eth2 phase 0, the chain that will implement FFG, is currently under implementation and enormous progress has been made. Additionally, Tendermint has been running, in the form of the Cosmos chain for several months. Remaining arguments about proof of stake, in my view, have to do with optimizing the economic incentives, and further formalizing the strategy for responding to 51% attacks. Additionally, the Casper CBC spec could still use concrete efficiency improvements.
9 Proof of Storage
A third approach to the problem is to use a scarce computational resource other than computational power or currency. In this regard, the two main alternatives that have been proposed are storage and bandwidth. There is no way in principle to provide an after-the-fact cryptographic proof that bandwidth was given or used, so proof of bandwidth should most accurately be considered a subset of social proof, discussed in later problems, but proof of storage is something that certainly can be done computationally. An advantage of proof-of-storage is that it is completely ASIC-resistant; the kind of storage that we have in hard drives is already close to optimal.
Status: A lot of theoretical progress, though still a lot to go, as well as more real-world evaluation.
There are a number of blockchains planning to use proof of storage protocols, including Chia and Filecoin. That said, these algorithms have not been tested in the wild. My own main concern is centralization: will these algorithms actually be dominated by smaller users using spare storage capacity, or will they be dominated by large mining farms?
Economics
10 Stable-value cryptoassets
One of the main problems with Bitcoin is the issue of price volatility ... Problem: construct a cryptographic asset with a stable price.
Status: Some progress.
MakerDAO is now live, and has been holding stable for nearly two years. It has survived a 93% drop in the value of its underlying collateral asset (ETH), and there is now more than $100 million in DAI issued. It has become a mainstay of the Ethereum ecosystem, and many Ethereum projects have or are integrating with it. Other synthetic token projects, such as UMA, are rapidly gaining steam as well.
However, while the MakerDAO system has survived tough economic conditions in 2019, the conditions were by no means the toughest that could happen. In the past, Bitcoin has fallen by 75% over the course of two days; the same may happen to ether or any other collateral asset some day. Attacks on the underlying blockchain are an even larger untested risk, especially if compounded by price decreases at the same time. Another major challenge, and arguably the larger one, is that the stability of MakerDAO-like systems is dependent on some underlying oracle scheme. Different attempts at oracle systems do exist (see #16), but the jury is still out on how well they can hold up under large amounts of economic stress. So far, the collateral controlled by MakerDAO has been lower than the value of the MKR token; if this relationship reverses MKR holders may have a collective incentive to try to "loot" the MakerDAO system. There are ways to try to protect against such attacks, but they have not been tested in real life.
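For readers unfamiliar with how collateralised stablecoins of this kind absorb price drops, the basic arithmetic is just a collateral-to-debt ratio; the 150% minimum used below is purely illustrative (roughly the figure used by early single-collateral DAI), not a universal constant:

```python
def collateral_ratio(eth_locked, eth_price, dai_debt):
    return eth_locked * eth_price / dai_debt

MIN_RATIO = 1.5                                    # illustrative liquidation threshold
print(collateral_ratio(10, 150.0, 500))            # 3.0 -> comfortably overcollateralised
print(collateral_ratio(10, 150.0 * 0.4, 500))      # 1.2 -> below 1.5 after a 60% price drop
```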
11 Decentralized Public Goods Incentivization
One of the challenges in economic systems in general is the problem of "public goods". For example, suppose that there is a scientific research project which will cost $1 million to complete, and it is known that if it is completed the resulting research will save one million people $5 each. In total, the social benefit is clear ... [but] from the point of view of each individual person contributing does not make sense ... So far, most solutions to public goods problems have involved centralization
Additional Assumptions And Requirements: A fully trustworthy oracle exists for determining whether or not a certain public good task has been completed (in reality this is false, but this is the domain of another problem)
Status: Some progress.
The problem of funding public goods is generally understood to be split into two problems: the funding problem (where to get funding for public goods from) and the preference aggregation problem (how to determine what is a genuine public good, rather than some single individual's pet project, in the first place). This problem focuses specifically on the former, assuming the latter is solved (see the "decentralized contribution metrics" section below for work on that problem).
In general, there haven't been large new breakthroughs here. There's two major categories of solutions. First, we can try to elicit individual contributions, giving people social rewards for doing so. My own proposal for charity through marginal price discrimination is one example of this; another is the anti-malaria donation badges on Peepeth. Second, we can collect funds from applications that have network effects. Within blockchain land there are several options for doing this:
. Issuing coins
. Taking a portion of transaction fees at protocol level (eg. through EIP 1559)
. Taking a portion of transaction fees from some layer-2 application (eg. Uniswap, or some scaling solution, or even state rent in an execution environment in ethereum 2.0)
. Taking a portion of other kinds of fees (eg. ENS registration)
Outside of blockchain land, this is just the age-old question of how to collect taxes if you're a government, and charge fees if you're a business or other organization.
12 Reputation systems
Problem: design a formalized reputation system, including a score rep(A,B) -> V where V is the reputation of B from the point of view of A, a mechanism for determining the probability that one party can be trusted by another, and a mechanism for updating the reputation given a record of a particular open or finalized interaction.
Status: Slow progress.
[Check original post for lots of links]
Special thanks to Justin Drake and Jinglan Wang for feedback
In 2014, I made a post and a presentation with a list of hard problems in math, computer science and economics that I thought were important for the cryptocurrency space (as I then called it) to be able to reach maturity. In the last five years, much has changed. But exactly how much progress have we made on the problems we then thought were important? Where have we succeeded, where have we failed, and where have we changed our minds about what is important? In this post, I'll go through the 16 problems from 2014 one by one, and see just where we are today on each one. At the end, I'll include my new picks for hard problems of 2019.
The problems are broken down into three categories: (i) cryptographic, and hence expected to be solvable with purely mathematical techniques if they are to be solvable at all, (ii) consensus theory, largely improvements to proof of work and proof of stake, and (iii) economic, and hence having to do with creating structures involving incentives given to different participants, and often involving the application layer more than the protocol layer. We see significant progress in all categories, though some more than others.
Cryptographic problems
1 Blockchain Scalability
One of the largest problems facing the cryptocurrency space today is the issue of scalability ... The main concern with [oversized blockchains] is trust: if there are only a few entities capable of running full nodes, then those entities can conspire and agree to give themselves a large number of additional bitcoins, and there would be no way for other users to see for themselves that a block is invalid without processing an entire block themselves.
Problem: create a blockchain design that maintains Bitcoin-like security guarantees, but where the maximum size of the most powerful node that needs to exist for the network to keep functioning is substantially sublinear in the number of transactions.
Status: Great theoretical progress, pending more real-world evaluation.
Scalability is one technical problem on which we have had a huge amount of theoretical progress. Five years ago, almost no one was thinking about sharding; now, sharding designs are commonplace. Aside from ethereum 2.0, we have OmniLedger, LazyLedger, Zilliqa and research papers seemingly coming out every month. In my own view, further progress at this point is incremental. Fundamentally, we already have a number of techniques that allow groups of validators to securely come to consensus on much more data than an individual validator can process, as well as techniques that allow clients to indirectly verify the full validity and availability of blocks even under 51% attack conditions.
These are probably the most important technologies:
. Random sampling, allowing a small randomly selected committee to statistically stand in for the full validator set: https://github.com/ethereum/wiki/wiki/Sharding-FAQ#how-can-we-solve-the-single-shard-takeover-attack-in-an-uncoordinated-majority-model
. Fraud proofs, allowing individual nodes that learn of an error to broadcast its presence to everyone else: https://bitcoin.stackexchange.com/questions/49647/what-is-a-fraud-proof
. Proofs of custody, allowing validators to probabilistically prove that they individually downloaded and verified some piece of data: https://ethresear.ch/t/1-bit-aggregation-friendly-custody-bonds/2236
. Data availability proofs, allowing clients to detect when the bodies of blocks that they have headers for are unavailable: https://arxiv.org/abs/1809.09044. See also the newer coded Merkle trees proposal.
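To make the first item above concrete, here is a minimal sketch of committee selection by random sampling. It is purely illustrative: the validator names, the committee size and the use of Python's random module stand in for the real machinery (in ethereum 2.0, for instance, the seed comes from RANDAO and the shuffling algorithm is specified precisely).

```python
import hashlib
import random

def sample_committee(validators, seed: bytes, committee_size: int):
    """Pseudorandomly select a committee from the full validator set (sketch).
    With an unbiasable seed, a committee of a few hundred validators is,
    with overwhelming probability, statistically representative of the
    whole set."""
    rng = random.Random(hashlib.sha256(seed).digest())
    shuffled = validators[:]
    rng.shuffle(shuffled)
    return shuffled[:committee_size]

validators = [f"validator_{i}" for i in range(10_000)]
committee = sample_committee(validators, seed=b"randao_output_for_epoch_123",
                             committee_size=128)
print(len(committee), committee[:3])
```

The key statistical point is that, with an unbiasable seed and committees of a few hundred validators, an attacker controlling (say) a third of the global validator set has only a negligible chance of controlling two thirds of any given committee.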
There are also other smaller developments like Cross-shard communication via receipts as well as "constant-factor" enhancements such as BLS signature aggregation.
That said, fully sharded blockchains have still not been seen in live operation (the partially sharded Zilliqa has recently started running). On the theoretical side, there are mainly disputes about details remaining, along with challenges having to do with stability of sharded networking, developer experience and mitigating risks of centralization; fundamental technical possibility no longer seems in doubt. But the challenges that do remain are challenges that cannot be solved by just thinking about them; only developing the system and seeing ethereum 2.0 or some similar chain running live will suffice.
2 Timestamping
Problem: create a distributed incentive-compatible system, whether it is an overlay on top of a blockchain or its own blockchain, which maintains the current time to high accuracy. All legitimate users have clocks in a normal distribution around some "real" time with standard deviation 20 seconds ... no two nodes are more than 20 seconds apart. The solution is allowed to rely on an existing concept of "N nodes"; this would in practice be enforced with proof-of-stake or non-sybil tokens (see #9). The system should continuously provide a time which is within 120s (or less if possible) of the internal clock of >99% of honestly participating nodes. External systems may end up relying on this system; hence, it should remain secure against attackers controlling < 25% of nodes regardless of incentives.
Status: Some progress.
Ethereum has actually survived just fine with a 13-second block time and no particularly advanced timestamping technology; it uses a simple technique where a client does not accept a block whose stated timestamp is later than the client's local time. That said, this has not been tested under serious attacks. The recent network-adjusted timestamps proposal tries to improve on the status quo by allowing the client to determine the consensus on the time in the case where the client does not locally know the current time to high accuracy; this has not yet been tested. But in general, timestamping is not currently at the forefront of perceived research challenges; perhaps this will change once more proof of stake chains (including Ethereum 2.0 but also others) come online as real live systems and we see what the issues are.
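For concreteness, here is a toy sketch of both ideas: the simple future-timestamp rule and a network-adjusted clock taken as the median of peer offsets. The constant and the function names are made up for illustration; they are not the actual Ethereum rules or the actual network-adjusted timestamps proposal.

```python
import time
from statistics import median

MAX_FUTURE_DRIFT = 15  # seconds; illustrative tolerance, not Ethereum's exact constant

def accept_block_timestamp(block_timestamp: float, parent_timestamp: float,
                           local_time: float) -> bool:
    # The simple rule: reject blocks claiming to come from the future
    # (beyond a small tolerance), and require timestamps to increase
    # along the chain.
    return (block_timestamp > parent_timestamp and
            block_timestamp <= local_time + MAX_FUTURE_DRIFT)

def network_adjusted_time(local_time: float, peer_clock_offsets: list) -> float:
    # The rough idea behind "network-adjusted timestamps": estimate the
    # consensus time as the local clock plus the median of the offsets
    # measured against one's peers.
    return local_time + median(peer_clock_offsets)

print(accept_block_timestamp(1_700_000_000, 1_699_999_987, 1_700_000_005))  # True
print(network_adjusted_time(time.time(), [-2.0, 0.5, 1.0, 3.0]))
```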
3 Arbitrary Proof of Computation
Problem: create programs POC_PROVE(P,I) -> (O,Q) and POC_VERIFY(P,O,Q) -> { 0, 1 } such that POC_PROVE runs program P on input I and returns the program output O and a proof-of-computation Q and POC_VERIFY takes P, O and Q and outputs whether or not Q and O were legitimately produced by the POC_PROVE algorithm using P.
Status: Great theoretical and practical progress.
This is basically saying, build a SNARK (or STARK, or SHARK, or...). And we've done it! SNARKs are now increasingly well understood, and are even already being used in multiple blockchains today (including tornado.cash on Ethereum). And SNARKs are extremely useful, both as a privacy technology (see Zcash and tornado.cash) and as a scalability technology (see ZK Rollup, STARKDEX and STARKing erasure coded data roots).
There are still challenges with efficiency; making arithmetization-friendly hash functions (see here and here for bounties for breaking proposed candidates) is a big one, and efficiently proving random memory accesses is another. Furthermore, there's the unsolved question of whether the O(n * log(n)) blowup in prover time is a fundamental limitation or if there is some way to make a succinct proof with only linear overhead as in bulletproofs (which unfortunately take linear time to verify). There are also ever-present risks that the existing schemes have bugs. In general, the problems are in the details rather than the fundamentals.
4 Code Obfuscation
The holy grail is to create an obfuscator O, such that given any program P the obfuscator can produce a second program O(P) = Q such that P and Q return the same output if given the same input and, importantly, Q reveals no information whatsoever about the internals of P. One can hide inside of Q a password, a secret encryption key, or one can simply use Q to hide the proprietary workings of the algorithm itself.
Status: Slow progress.
In plain English, the problem is saying that we want to come up with a way to "encrypt" a program so that the encrypted program would still give the same outputs for the same inputs, but the "internals" of the program would be hidden. An example use case for obfuscation is a program containing a private key that only allows the key to be used to sign certain messages.
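A tiny sketch of the kind of program one would want to obfuscate may make this clearer. The policy, the key and the use of an HMAC in place of a real signature scheme are all illustrative assumptions; the point is only that publishing this code as-is reveals the key, whereas publishing an obfuscated but functionally identical version would not.

```python
import hmac
import hashlib

# The functionality we would like to publish: a signer that holds a secret
# key but only uses it on messages satisfying a policy.
SECRET_KEY = b"this key must stay hidden"

def restricted_signer(message: bytes) -> bytes:
    # Policy (made up for the example): only sign small withdrawal messages.
    if not message.startswith(b"withdraw<=100:"):
        raise ValueError("policy violation: refusing to sign")
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

print(restricted_signer(b"withdraw<=100: 50 to 0xabc").hex())

# Published as ordinary code, anyone can copy SECRET_KEY and sign anything.
# A working obfuscator O would let us publish O(restricted_signer) with the
# same input/output behaviour while making the key computationally
# unrecoverable.
```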
A solution to code obfuscation would be very useful to blockchain protocols. The use cases are subtle, because one must deal with the possibility that an on-chain obfuscated program will be copied and run in an environment different from the chain itself, but there are many possibilities. One that personally interests me is the ability to remove the centralized operator from collusion-resistance gadgets by replacing the operator with an obfuscated program that contains some proof of work, making it very expensive to run more than once with different inputs as part of an attempt to determine individual participants' actions.
Unfortunately this continues to be a hard problem. There is ongoing work attacking the problem, with one side making constructions (eg. this) that try to reduce the number of assumptions on mathematical objects that we do not know practically exist (eg. general cryptographic multilinear maps) and another side trying to make practical implementations of the desired mathematical objects. However, all of these paths are still quite far from creating something viable and known to be secure. See https://eprint.iacr.org/2019/463.pdf for a more general overview of the problem.
5 Hash-Based Cryptography
Problem: create a signature algorithm relying on no security assumption but the random oracle property of hashes that maintains 160 bits of security against classical computers (ie. 80 bits against quantum computers, due to Grover's algorithm) with optimal size and other properties.
Status: Some progress.
There have been two strands of progress on this since 2014. SPHINCS, a "stateless" (meaning, using it multiple times does not require remembering information like a nonce) signature scheme, was released soon after this "hard problems" list was published, and provides a purely hash-based signature scheme of size around 41 kB. Additionally, STARKs have been developed, and one can create signatures of similar size based on them. The fact that not just signatures, but also general-purpose zero knowledge proofs, are possible with just hashes was definitely something I did not expect five years ago; I am very happy that this is the case. That said, size continues to be an issue, and ongoing progress (eg. see the very recent DEEP FRI) is continuing to reduce the size of proofs, though it looks like further progress will be incremental.
The main not-yet-solved problem with hash-based cryptography is aggregate signatures, similar to what BLS aggregation makes possible. It's known that we can just make a STARK over many Lamport signatures, but this is inefficient; a more efficient scheme would be welcome. (In case you're wondering if hash-based public key encryption is possible, the answer is, no, you can't do anything with more than a quadratic attack cost)
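For readers unfamiliar with how a hash-only signature can work at all, here is a minimal sketch of a Lamport one-time signature, the basic building block that schemes like SPHINCS extend into stateless many-time signatures. It is a textbook construction, kept deliberately unoptimized.

```python
import hashlib
import os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # Secret key: two random 32-byte preimages per bit of the message digest.
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    # Public key: the hashes of those preimages.
    pk = [[H(x), H(y)] for x, y in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"hello")
print(verify(pk, b"hello", sig))     # True
print(verify(pk, b"tampered", sig))  # False
```

Each key pair can safely sign only one message, and the signature is already about 8 kB; much of the engineering in SPHINCS and its relatives is about trading off these costs while staying stateless.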
Consensus theory problems
6 ASIC-Resistant Proof of Work
One approach at solving the problem is creating a proof-of-work algorithm based on a type of computation that is very difficult to specialize ... For a more in-depth discussion on ASIC-resistant hardware, see https://blog.ethereum.org/2014/06/19/mining/.
Status: Solved as far as we can.
About six months after the "hard problems" list was posted, Ethereum settled on its ASIC-resistant proof of work algorithm: Ethash. Ethash is known as a memory-hard algorithm. The theory is that random-access memory in regular computers is well-optimized already and hence difficult to improve on for specialized applications. Ethash aims to achieve ASIC resistance by making memory access the dominant part of running the PoW computation. Ethash was not the first memory-hard algorithm, but it did add one innovation: it uses pseudorandom lookups over a two-level DAG, allowing for two ways of evaluating the function. First, one could compute it quickly if one has the entire (~2 GB) DAG; this is the memory-hard "fast path". Second, one can compute it much more slowly (still fast enough to check a single provided solution quickly) if one only has the top level of the DAG; this is used for block verification.
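Here is a toy sketch of that two-path structure, with vastly shrunken sizes and plain SHA256 standing in for Ethash's actual primitives; it is meant only to show how the same function can be evaluated quickly from a large precomputed dataset or slowly (but with little memory) from a small cache.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

CACHE_SIZE = 1024        # toy sizes; real Ethash uses a ~16 MB cache and a ~2 GB dataset
DATASET_SIZE = 16 * 1024

def make_cache(seed: bytes):
    cache = [H(seed)]
    for _ in range(1, CACHE_SIZE):
        cache.append(H(cache[-1]))
    return cache

def dataset_item(cache, i: int) -> bytes:
    # Each dataset item is derived from a few pseudorandom cache entries.
    x = H(i.to_bytes(8, "big"), cache[i % CACHE_SIZE])
    for _ in range(4):
        j = int.from_bytes(x[:8], "big") % CACHE_SIZE
        x = H(x, cache[j])
    return x

def hashimoto(header: bytes, nonce: int, lookup) -> bytes:
    # Mix in 32 pseudorandom dataset lookups; miners use a precomputed
    # dataset (the memory-hard fast path), while verifiers recompute the
    # needed items from the small cache on demand (slow but light).
    mix = H(header, nonce.to_bytes(8, "big"))
    for _ in range(32):
        i = int.from_bytes(mix[:8], "big") % DATASET_SIZE
        mix = H(mix, lookup(i))
    return mix

cache = make_cache(b"epoch seed")
dataset = [dataset_item(cache, i) for i in range(DATASET_SIZE)]       # miner's fast path
header, nonce = b"block header", 42
full = hashimoto(header, nonce, lambda i: dataset[i])                 # memory-hard evaluation
light = hashimoto(header, nonce, lambda i: dataset_item(cache, i))    # verifier's light path
assert full == light
```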
Ethash has proven remarkably successful at ASIC resistance; after three years and billions of dollars of block rewards, ASICs do exist but are at best 2-5 times more power and cost-efficient than GPUs. ProgPoW has been proposed as an alternative, but there is a growing consensus that ASIC-resistant algorithms will inevitably have a limited lifespan, and that ASIC resistance has downsides because it makes 51% attacks cheaper (eg. see the 51% attack on Ethereum Classic).
I believe that PoW algorithms that provide a medium level of ASIC resistance can be created, but such resistance is limited-term and both ASIC and non-ASIC PoW have disadvantages; in the long term the better choice for blockchain consensus is proof of stake.
7 Useful Proof of Work
[M]aking the proof of work function something which is simultaneously useful; a common candidate is something like Folding@home, an existing program where users can download software onto their computers to simulate protein folding and provide researchers with a large supply of data to help them cure diseases.
Status: Probably not feasible, with one exception.
The challenge with useful proof of work is that a proof of work algorithm requires many properties:
. Hard to compute
. Easy to verify
. Does not depend on large amounts of external data
. Can be efficiently computed in small "bite-sized" chunks
Unfortunately, there are not many computations that are useful that preserve all of these properties, and most computations that do have all of those properties and are "useful" are only "useful" for far too short a time to build a cryptocurrency around them.
However, there is one possible exception: zero-knowledge-proof generation. Zero knowledge proofs of aspects of blockchain validity (eg. data availability roots for a simple example) are difficult to compute, and easy to verify. Furthermore, they are durably difficult to compute; if proofs of "highly structured" computation become too easy, one can simply switch to verifying a blockchain's entire state transition, which becomes extremely expensive due to the need to model the virtual machine and random memory accesses.
Zero-knowledge proofs of blockchain validity provide great value to users of the blockchain, as they can substitute the need to verify the chain directly; Coda is doing this already, albeit with a simplified blockchain design that is heavily optimized for provability. Such proofs can significantly assist in improving the blockchain's safety and scalability. That said, the total amount of computation that realistically needs to be done is still much less than the amount that's currently done by proof of work miners, so this would at best be an add-on for proof of stake blockchains, not a full-on consensus algorithm.
8 Proof of Stake
Another approach to solving the mining centralization problem is to abolish mining entirely, and move to some other mechanism for counting the weight of each node in the consensus. The most popular alternative under discussion to date is "proof of stake" - that is to say, instead of treating the consensus model as "one unit of CPU power, one vote" it becomes "one currency unit, one vote".
Status: Great theoretical progress, pending more real-world evaluation.
Near the end of 2014, it became clear to the proof of stake community that some form of "weak subjectivity" is unavoidable. To maintain economic security, nodes need to obtain a recent checkpoint extra-protocol when they sync for the first time, and again if they go offline for more than a few months. This was a difficult pill to swallow; many PoW advocates still cling to PoW precisely because in a PoW chain the "head" of the chain can be discovered with the only data coming from a trusted source being the blockchain client software itself. PoS advocates, however, were willing to swallow the pill, seeing the added trust requirements as not being large. From there the path to proof of stake through long-duration security deposits became clear.
Most interesting consensus algorithms today are fundamentally similar to PBFT, but replace the fixed set of validators with a dynamic list that anyone can join by sending tokens into a system-level smart contract with time-locked withdrawals (eg. a withdrawal might in some cases take up to 4 months to complete). In many cases (including ethereum 2.0), these algorithms achieve "economic finality" by penalizing validators that are caught performing actions that violate the protocol in certain ways (see here for a philosophical view on what proof of stake accomplishes).
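As a concrete example of the kind of protocol-violating behaviour that gets penalized, here is a sketch of the two Casper FFG slashing conditions (double votes and surround votes), written as a simple predicate over pairs of votes; the field names are simplified relative to the actual eth2 data structures.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    validator: int
    source_epoch: int   # justified checkpoint the vote builds on
    target_epoch: int   # checkpoint the vote is trying to justify

def is_slashable(a: Vote, b: Vote) -> bool:
    """Casper FFG-style slashing conditions (sketch): a validator must not
    publish two distinct votes with the same target epoch ("double vote"),
    nor a vote whose source/target span surrounds another of its votes
    ("surround vote")."""
    if a.validator != b.validator or a == b:
        return False
    double_vote = a.target_epoch == b.target_epoch
    surround = ((a.source_epoch < b.source_epoch and b.target_epoch < a.target_epoch) or
                (b.source_epoch < a.source_epoch and a.target_epoch < b.target_epoch))
    return double_vote or surround

v1 = Vote(validator=7, source_epoch=3, target_epoch=10)
v2 = Vote(validator=7, source_epoch=4, target_epoch=9)   # surrounded by v1
print(is_slashable(v1, v2))  # True
```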
As of today, we have (among many other algorithms):
. Casper FFG: https://arxiv.org/abs/1710.09437
. Tendermint: https://tendermint.com/docs/spec/consensus/consensus.html
. HotStuff: https://arxiv.org/abs/1803.05069
. Casper CBC: https://vitalik.ca/general/2018/12/05/cbc_casper.html
There continues to be ongoing refinement (eg. here and here). Eth2 phase 0, the chain that will implement FFG, is currently under implementation and enormous progress has been made. Additionally, Tendermint has been running, in the form of the Cosmos chain, for several months. Remaining arguments about proof of stake, in my view, have to do with optimizing the economic incentives, and further formalizing the strategy for responding to 51% attacks. Additionally, the Casper CBC spec could still use concrete efficiency improvements.
9 Proof of Storage
A third approach to the problem is to use a scarce computational resource other than computational power or currency. In this regard, the two main alternatives that have been proposed are storage and bandwidth. There is no way in principle to provide an after-the-fact cryptographic proof that bandwidth was given or used, so proof of bandwidth should most accurately be considered a subset of social proof, discussed in later problems, but proof of storage is something that certainly can be done computationally. An advantage of proof-of-storage is that it is completely ASIC-resistant; the kind of storage that we have in hard drives is already close to optimal.
Status: A lot of theoretical progress, though still a lot to go, along with a need for more real-world evaluation.
There are a number of blockchains planning to use proof of storage protocols, including Chia and Filecoin. That said, these algorithms have not been tested in the wild. My own main concern is centralization: will these algorithms actually be dominated by smaller users using spare storage capacity, or will they be dominated by large mining farms?
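The core cryptographic ingredient in these protocols is simple to sketch: a verifier who only remembers a commitment to the data can challenge the prover to produce randomly selected chunks together with Merkle branches. The sketch below shows just that challenge-response skeleton; real designs (Chia's proofs of space, Filecoin's proofs of replication) add sealing, timing constraints and economics on top.

```python
import hashlib
import os
import random

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(l) for l in leaves]
    while len(layer) > 1:
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_branch(leaves, index):
    layer = [H(l) for l in leaves]
    branch = []
    while len(layer) > 1:
        branch.append(layer[index ^ 1])   # sibling at this level
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return branch

def verify_branch(root, leaf, index, branch):
    node = H(leaf)
    for sibling in branch:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

# Setup: the prover stores the file; the verifier keeps only the root.
chunks = [os.urandom(32) for _ in range(8)]   # power-of-two number of chunks
root = merkle_root(chunks)

# Challenge: the verifier picks a random chunk index, and the prover must
# answer with that chunk plus a Merkle branch proving it belongs to the file.
i = random.randrange(len(chunks))
assert verify_branch(root, chunks[i], i, merkle_branch(chunks, i))
```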
Economics
10 Stable-value cryptoassets
One of the main problems with Bitcoin is the issue of price volatility ... Problem: construct a cryptographic asset with a stable price.
Status: Some progress.
MakerDAO is now live, and has been holding stable for nearly two years. It has survived a 93% drop in the value of its underlying collateral asset (ETH), and there is now more than $100 million in DAI issued. It has become a mainstay of the Ethereum ecosystem, and many Ethereum projects have integrated with it or are in the process of doing so. Other synthetic token projects, such as UMA, are rapidly gaining steam as well.
However, while the MakerDAO system has survived tough economic conditions in 2019, the conditions were by no means the toughest that could happen. In the past, Bitcoin has fallen by 75% over the course of two days; the same may happen to ether or any other collateral asset some day. Attacks on the underlying blockchain are an even larger untested risk, especially if compounded by price decreases at the same time. Another major challenge, and arguably the larger one, is that the stability of MakerDAO-like systems is dependent on some underlying oracle scheme. Different attempts at oracle systems do exist (see #16), but the jury is still out on how well they can hold up under large amounts of economic stress. So far, the collateral controlled by MakerDAO has been lower than the value of the MKR token; if this relationship reverses MKR holders may have a collective incentive to try to "loot" the MakerDAO system. There are ways to try to protect against such attacks, but they have not been tested in real life.
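For readers unfamiliar with how MakerDAO-style stablecoins stay stable, here is a toy sketch of the overcollateralization rule at their core; the 150% threshold and the numbers are illustrative rather than Maker's actual parameters, and the oracle-reported price is exactly the dependency discussed above.

```python
# Toy collateralized-debt position (CDP), loosely in the spirit of MakerDAO.
MIN_COLLATERAL_RATIO = 1.5   # illustrative minimum ratio

def collateral_ratio(collateral_eth: float, eth_price_usd: float, debt_dai: float) -> float:
    return (collateral_eth * eth_price_usd) / debt_dai

def can_be_liquidated(collateral_eth: float, eth_price_usd: float, debt_dai: float) -> bool:
    # If the collateral's market value (as reported by an oracle) falls below
    # the minimum ratio, anyone may trigger liquidation: the collateral is
    # auctioned off to buy back and retire the issued DAI.
    return collateral_ratio(collateral_eth, eth_price_usd, debt_dai) < MIN_COLLATERAL_RATIO

# Example: 10 ETH of collateral backing 1000 DAI of debt.
print(can_be_liquidated(10, 180.0, 1000))  # ratio 1.8 -> False
print(can_be_liquidated(10, 120.0, 1000))  # ratio 1.2 -> True
```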
11 Decentralized Public Goods Incentivization
One of the challenges in economic systems in general is the problem of "public goods". For example, suppose that there is a scientific research project which will cost $1 million to complete, and it is known that if it is completed the resulting research will save one million people $5 each. In total, the social benefit is clear ... [but] from the point of view of each individual person contributing does not make sense ... So far, most solutions to public goods problems have involved centralization
Additional Assumptions And Requirements: A fully trustworthy oracle exists for determining whether or not a certain public good task has been completed (in reality this is false, but this is the domain of another problem)
Status: Some progress.
The problem of funding public goods is generally understood to be split into two problems: the funding problem (where to get funding for public goods from) and the preference aggregation problem (how to determine what is a genuine public good, rather than some single individual's pet project, in the first place). This problem focuses specifically on the former, assuming the latter is solved (see the "decentralized contribution metrics" section below for work on that problem).
In general, there haven't been large new breakthroughs here. There are two major categories of solutions. First, we can try to elicit individual contributions, giving people social rewards for doing so. My own proposal for charity through marginal price discrimination is one example of this; another is the anti-malaria donation badges on Peepeth. Second, we can collect funds from applications that have network effects. Within blockchain land there are several options for doing this:
. Issuing coins
. Taking a portion of transaction fees at protocol level (eg. through EIP 1559)
. Taking a portion of transaction fees from some layer-2 application (eg. Uniswap, or some scaling solution, or even state rent in an execution environment in ethereum 2.0)
. Taking a portion of other kinds of fees (eg. ENS registration)
Outside of blockchain land, this is just the age-old question of how to collect taxes if you're a government, and charge fees if you're a business or other organization.
12 Reputation systems
Problem: design a formalized reputation system, including a score rep(A,B) -> V where V is the reputation of B from the point of view of A, a mechanism for determining the probability that one party can be trusted by another, and a mechanism for updating the reputation given a record of a particular open or finalized interaction.
Status: Slow progress.
There hasn't really been much work on reputation systems since 2014. Perhaps the best is the use of token curated registries to create curated lists of trustable entities/objects; the Kleros ERC20 TCR (yes, that's a token-curated registry of legitimate ERC20 tokens) is one example, and there is even an alternative interface to Uniswap (http://uniswap.ninja) that uses it as the backend from which to get the list of tokens, together with their ticker symbols and logos. Reputation systems of the subjective variety have not really been tried, perhaps because there is just not enough information about the "social graph" of people's connections to each other that has already been published to chain in some form. If such information starts to exist for other reasons, then subjective reputation systems may become more popular.
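To make the rep(A,B) interface from the problem statement concrete, here is one very simple subjective-reputation sketch, assuming a published trust graph exists (which, as noted above, is mostly what is missing in practice): trust decays multiplicatively along edges, and rep(A,B) is the strength of the strongest path from A to B.

```python
import heapq

def rep(graph, a, b):
    """Subjective reputation of b from a's point of view (toy sketch):
    direct trust edges carry weights in (0, 1]; trust along a path is the
    product of edge weights; rep(a, b) is the maximum over all paths.
    Computed with a Dijkstra-style search (valid because weights <= 1)."""
    best = {a: 1.0}
    heap = [(-1.0, a)]
    while heap:
        neg_score, node = heapq.heappop(heap)
        score = -neg_score
        if node == b:
            return score
        if score < best.get(node, 0.0):
            continue
        for neighbour, weight in graph.get(node, {}).items():
            s = score * weight
            if s > best.get(neighbour, 0.0):
                best[neighbour] = s
                heapq.heappush(heap, (-s, neighbour))
    return 0.0

# Example social graph: direct trust scores published by each party.
graph = {
    "alice": {"bob": 0.9, "carol": 0.5},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.9},
}
print(rep(graph, "alice", "dave"))  # 0.72, via alice -> bob -> dave
```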
13 Proof of excellence
One interesting, and largely unexplored, solution to the problem of [token] distribution specifically (there are reasons why it cannot be so easily used for mining) is using tasks that are socially useful but require original human-driven creative effort and talent. For example, one can come up with a "proof of proof" currency that rewards players for coming up with mathematical proofs of certain theorems
Status: No progress, problem is largely forgotten.
The main alternative approach to token distribution that has instead become popular is airdrops; typically, tokens are distributed at launch either proportionately to existing holdings of some other token, or based on some other metric (eg. as in the Handshake airdrop). Verifying human creativity directly has not really been attempted, and with recent progress on AI the problem of creating a task that only humans can do but computers can verify may well be too difficult.
15 [sic]. Anti-Sybil systems
A problem that is somewhat related to the issue of a reputation system is the challenge of creating a "unique identity system" - a system for generating tokens that prove that an identity is not part of a Sybil attack ... However, we would like to have a system that has nicer and more egalitarian features than "one-dollar-one-vote"; arguably, one-person-one-vote would be ideal.
Status: Some progress.
There have been quite a few attempts at solving the unique-human problem. Attempts that come to mind include (incomplete list!):
. HumanityDAO: https://www.humanitydao.org/
. Pseudonym parties: https://bford.info/pub/net/sybil.pdf
. POAP ("proof of attendance protocol"): https://www.poap.xyz/
. BrightID: https://www.brightid.org/
With the growing interest in techniques like quadratic voting and quadratic funding, the need for some kind of human-based anti-sybil system continues to grow. Hopefully, ongoing development of these techniques and new ones can come to meet it.
14 [sic]. Decentralized contribution metrics
Incentivizing the production of public goods is, unfortunately, not the only problem that centralization solves. The other problem is determining, first, which public goods are worth producing in the first place and, second, determining to what extent a particular effort actually accomplished the production of the public good. This challenge deals with the latter issue.
Status: Some progress, some change in focus.
More recent work on determining the value of public-good contributions does not try to separate deciding which tasks are worth doing from judging the quality of completion; the reason is that in practice the two are difficult to separate. Work done by specific teams tends to be non-fungible and subjective enough that the most reasonable approach is to look at relevance of task and quality of performance as a single package, and use the same technique to evaluate both.
Fortunately, there has been great progress on this, particularly with the discovery of quadratic funding. Quadratic funding is a mechanism where individuals can make donations to projects, and then, based on the number of people who donated and how much they donated, a formula is used to calculate how much they would have donated if they were perfectly coordinated with each other (ie. took each other's interests into account and did not fall prey to the tragedy of the commons). The difference between the amount that would have been donated and the amount actually donated for any given project is given to that project as a subsidy from some central pool (see #11 for where the central pool funding could come from). Note that this mechanism focuses on satisfying the values of some community, not on satisfying some given goal regardless of whether or not anyone cares about it. Because of the complexity-of-value problem, this approach is likely to be much more robust to unknown unknowns.
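The arithmetic at the core of quadratic funding is compact enough to show directly. In the unbounded version, a project's matched total is the square of the sum of the square roots of individual contributions; the subsidy is that total minus what was actually donated. (In practice the subsidy is scaled down to fit the matching pool, and pairwise-bounding modifies the formula, but the sketch below captures the basic idea.)

```python
from math import sqrt

def quadratic_match(contributions):
    """Given the individual contributions to one project, return the subsidy
    the project receives from the matching pool under (unbounded) quadratic
    funding: (sum of square roots)^2 minus the amount actually donated."""
    would_have_donated = sum(sqrt(c) for c in contributions) ** 2
    actually_donated = sum(contributions)
    return would_have_donated - actually_donated

# A hundred donors giving $1 each attract a far larger subsidy than one donor
# giving $100, even though the raw totals are the same.
print(quadratic_match([1] * 100))   # 9900.0
print(quadratic_match([100]))       # 0.0
```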
Quadratic funding has even been tried in real life with considerable success in the recent gitcoin quadratic funding round. There has also been some incremental progress on improving quadratic funding and similar mechanisms; particularly, pairwise-bounded quadratic funding to mitigate collusion. There has also been work on specification and implementation of bribe-resistant voting technology, preventing users from proving to third parties who they voted for; this prevents many kinds of collusion and bribe attacks.
16 Decentralized success metrics
Problem: come up with and implement a decentralized method for measuring numerical real-world variables ... the system should be able to measure anything that humans can currently reach a rough consensus on (eg. price of an asset, temperature, global CO2 concentration)
Status: Some progress.
This is now generally just called "the oracle problem". The largest known instance of a decentralized oracle running is Augur, which has processed outcomes for millions of dollars of bets. Token curated registries such as the Kleros TCR for tokens are another example. However, these systems still have not seen a real-world test of the forking mechanism (search for "subjectivocracy" here) either due to a highly controversial question or due to an attempted 51% attack. There is also research on the oracle problem happening outside of the blockchain space in the form of the "peer prediction" literature; see here for a very recent advancement in the space.
Another looming challenge is that people want to rely on these systems to guide transfers of quantities of assets larger than the economic value of the system's native token. In these conditions, token holders in theory have the incentive to collude to give wrong answers to steal the funds. In such a case, the system would fork and the original system token would likely become valueless, but the original system token holders would still get away with the returns from whatever asset transfer they misdirected. Stablecoins (see #10) are a particularly egregious case of this. One approach to solving this would be to assume that altruistically honest data providers do exist, create a mechanism for identifying them, and allow them to churn only slowly, so that if malicious providers start getting voted in, the users of systems that rely on the oracle can first complete an orderly exit. In any case, further development of oracle technology is very much an important problem.
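As a minimal illustration of the incentive structure involved, here is a toy Schelling-point style oracle round: reporters are rewarded for agreeing with the median and penalized for deviating from it. The tolerance, deposit and payout rule are invented for the example; production oracles such as Augur layer dispute rounds, escalation and ultimately the forking mechanism discussed above on top of this basic idea.

```python
from statistics import median

def settle_oracle_round(reports, deposit=1.0):
    """Toy Schelling-point oracle: everyone reports a value, the median is
    taken as the answer, and reporters far from the median lose their
    deposit while the rest split the forfeited funds."""
    answer = median(reports.values())
    tolerance = 0.05 * abs(answer) if answer else 0.05   # illustrative threshold
    honest = {who for who, v in reports.items() if abs(v - answer) <= tolerance}
    slashed = set(reports) - honest
    pot = deposit * len(slashed)
    payouts = {who: (pot / len(honest) if who in honest else -deposit)
               for who in reports}
    return answer, payouts

reports = {"a": 99.8, "b": 100.1, "c": 100.0, "d": 250.0}   # "d" reports nonsense
answer, payouts = settle_oracle_round(reports)
print(answer)    # 100.05
print(payouts)   # d loses its deposit; a, b and c split it
```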
New problems
If I were to write the hard problems list again in 2019, some would be a continuation of the above problems, but there would be significant changes in emphasis, as well as significant new problems. Here are a few picks:
. Cryptographic obfuscation: same as #4 above
. Ongoing work on post-quantum cryptography: both hash-based as well as based on post-quantum-secure "structured" mathematical objects, eg. elliptic curve isogenies, lattices...
. Anti-collusion infrastructure: ongoing work and refinement of https://ethresear.ch/t/minimal-anti-collusion-infrastructure/5413, including adding privacy against the operator, adding multi-party computation in a maximally practical way, etc.
. Oracles: same as #16 above, but removing the emphasis on "success metrics" and focusing on the general "get real-world data" problem
. Unique-human identities (or, more realistically, semi-unique-human identities): same as what was written as #15 above, but with an emphasis on a less "absolute" solution: it should be much harder to get two identities than one, but making it impossible to get multiple identities is both impossible and potentially harmful even if we do succeed
. Homomorphic encryption and multi-party computation: ongoing improvements are still required for practicality
. Decentralized governance mechanisms: DAOs are cool, but current DAOs are still very primitive; we can do better
. Fully formalizing responses to PoS 51% attacks: ongoing work and refinement of https://ethresear.ch/t/responding-to-51-attacks-in-casper-ffg/6363
. More sources of public goods funding: the ideal is to charge for congestible resources inside of systems that have network effects (eg. transaction fees), but doing so in decentralized systems requires public legitimacy; hence this is a social problem along with the technical one of finding possible sources
. Reputation systems: same as #12 above
In general, base-layer problems are slowly but surely decreasing, but application-layer problems are only just getting started.