Accurate Genomic Prediction Of Human Height. Louis Lello, Steven G Avery, Laurent Tellier, Ana Vazquez, Gustavo de los Campos, and Stephen D. H. Hsu. BioRxiv, Sep 19 2017. doi: https://doi.org/10.1101/190124
Abstract: We construct genomic predictors for heritable and extremely complex human quantitative traits (height, heel bone density, and educational attainment) using modern methods in high dimensional statistics (i.e., machine learning). Replication tests show that these predictors capture, respectively, ~40, 20, and 9 percent of total variance for the three traits. For example, predicted heights correlate ~0.65 with actual height; actual heights of most individuals in validation samples are within a few cm of the prediction. The variance captured for height is comparable to the estimated SNP heritability from GCTA (GREML) analysis, and seems to be close to its asymptotic value (i.e., as sample size goes to infinity), suggesting that we have captured most of the heritability for the SNPs used. Thus, our results resolve the common SNP portion of the "missing heritability" problem - i.e., the gap between prediction R-squared and SNP heritability. The ~20k activated SNPs in our height predictor reveal the genetic architecture of human height, at least for common SNPs. Our primary dataset is the UK Biobank cohort, comprised of almost 500k individual genotypes with multiple phenotypes. We also use other datasets and SNPs found in earlier GWAS for out-of-sample validation of our results.
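The abstract's figures are mutually consistent: the fraction of variance captured is the square of the predictor–phenotype correlation, and 0.65² ≈ 0.42, i.e. the quoted ~40 percent. A minimal sketch of that relationship (the 0.65 figure is from the abstract; the simulated data below are purely illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a phenotype as predictor + noise, scaled so that the
# predictor-phenotype correlation is roughly 0.65 (illustrative only).
n = 100_000
predictor = rng.standard_normal(n)
noise = rng.standard_normal(n)
r_target = 0.65
phenotype = r_target * predictor + np.sqrt(1 - r_target**2) * noise

r = np.corrcoef(predictor, phenotype)[0, 1]
variance_captured = r**2  # R-squared: fraction of phenotypic variance explained

print(f"correlation ~ {r:.2f}, variance captured ~ {variance_captured:.2f}")
```

A correlation of ~0.65 thus corresponds to an R-squared of ~0.42, matching the ~40 percent of variance the paper reports for height.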
Bipartisan Alliance, a Society for the Study of the US Constitution, and of Human Nature, where Republicans and Democrats meet.
Wednesday, September 20, 2017
Psychological roadblocks to the adoption of self-driving vehicles
Psychological roadblocks to the adoption of self-driving vehicles. Azim Shariff, Jean-François Bonnefon & Iyad Rahwan. Nature Human Behaviour (2017), doi:10.1038/s41562-017-0202-6
Summary: Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them.
---
Psychological roadblocks to the adoption of self-driving vehicles
Self-driving cars offer a bright future, but only if the public can overcome the psychological challenges that stand in the way of widespread adoption. We discuss three: ethical dilemmas, overreactions to accidents, and the opacity of the cars’ decision-making algorithms — and propose steps towards addressing them. Azim Shariff, Jean-François Bonnefon and Iyad Rahwan
The widespread adoption of autonomous vehicles promises to make us happier, safer and more efficient. Manufacturers are speeding past the remaining technical challenges to the cars’ readiness. But the biggest roadblocks standing in the path of mass adoption may be psychological, not technological; 78% of Americans report fearing riding in an autonomous vehicle, with only 19% indicating that they would trust the car1. Trust — the comfort in making oneself vulnerable to another entity in the pursuit of some benefit — has long been recognized as critical to the adoption of automation, and becomes even more important as both the complexity of automation and the vulnerability of the users increase2. For autonomous vehicles, which will need to navigate our complex urban environment with the power of life and death, trust will determine how widely the cars are adopted by consumers, and how tolerated they are by everyone else. Achieving the bright future promised by autonomous vehicles will require overcoming the psychological barriers to trust. Here we diagnose three factors underlying this resistance and offer a plan of action (see Table 1).
The dilemmas of autonomous ethics
The necessity for autonomous vehicles to make ethical decisions leads to a series of dilemmas for their designers, regulators and the public at large3. These begin with the need for an autonomous vehicle to decide how it will operate in situations where its actions could decrease the risk of harming its own passengers by increasing the risk to a potentially larger number of non-passengers (for example, pedestrians and other drivers). While these decisions will most often involve probabilistic trade-offs in small-risk manoeuvres, at its extreme the decision could involve an autonomous vehicle determining whether to harm its passenger to spare the lives of two or more pedestrians, or vice versa (Fig. 1).
In handling these situations, the cars may operate as utilitarians, minimizing total risk to people regardless of who they are, or as self-protective, placing extra weight on the safety of their own passengers. Human drivers make such decisions instinctively in a split second, and thus cannot be expected to abide by whatever ethical principle they formulated in the comfort of their armchair. But autonomous vehicle manufacturers have the luxury of moral deliberation — and thus the responsibility of that deliberation. The existence of this ethical dilemma in turn produces a social dilemma. People are inconsistent about what principles they want autonomous vehicles to follow. Individuals recognize the utilitarian approach to be the more ethical, and as citizens, want the cars to save the greater number. But as consumers, they want self-protective cars3. As a result, adopting either strategy brings its own risks for manufacturers — a self-protective strategy risks public outrage, whereas a utilitarian strategy may scare consumers away.
Both the ethical and social dilemmas will need to be addressed to earn the trust of the public. And because it seems unlikely that regulators will adopt the strictest self-protective solution — in which autonomous vehicles would never harm their passengers, however small the danger to passengers and large the risk to others — we will have to grapple with consumers’ fear that their car might someday decide to harm them. To overcome that fear, we need to make people feel both safe and virtuous about owning an autonomous vehicle.
Table 1 | A summary of the psychological challenges to autonomous vehicles, and suggested actions for overcoming them
Psychological challenge: The dilemmas of autonomous ethics. People are torn over how they want autonomous vehicles to behave ethically; they morally believe the vehicles should operate under utilitarian principles, but prefer to buy vehicles that prioritize their own lives as passengers. The idea of a car sacrificing its passengers deters people from purchasing an autonomous vehicle.
Suggested actions: Shift the discussion from the relative risk of injury to the absolute risk. Appeal to consumers’ desire for virtue signalling.
Psychological challenge: Risk heuristics and algorithm aversion. The novelty and nature of autonomous vehicles will result in outsized reactions in the face of inevitable accidents. Such overreactions risk slowing or stalling the adoption of autonomous vehicles.
Suggested actions: Prepare the public for the inevitability of accidents. Openly communicate algorithmic improvement. Manage public overreaction with ‘fear placebos’ and information about actual risk levels.
Psychological challenge: Asymmetric information and the theory of the machine mind. A lack of transparency into the underlying decision-making processes can make it difficult for people to predict the autonomous vehicles’ behaviour, diminishing trust.
Suggested actions: Research the type of information required to form trustable mental models of autonomous vehicles.
To make people feel safe, manufacturers and regulators will need to most effectively convey the absolute reduction in risk to passengers due to overall accident reduction, so that it is not irrationally overshadowed by a potentially small increase in the relative risk that passengers face in relation to other road users. Communication about the overall safety benefits of autonomous vehicles could be further leveraged to appeal to potential consumers’ concerns about self-image and reputation. Virtue signalling is a powerful motivation for buying ethical products, but only when the ethicality is conspicuous4. Allowing the altruistic benefits of autonomous vehicles to reflect on the consumer can change the conversation about autonomous vehicle ethics and prove itself to be a marketing asset. The most relevant example of successful virtue consumerism is that of the Toyota Prius, a hybrid-electric automobile whose distinctive shape has allowed owners to signal their environmental commitment. However, whereas ‘green’ marketing can backfire for those politically unaligned with the environmental movement5, the package of virtues connected with autonomous vehicles — safety, but also reductions in traffic and parking congestion — contains uncontroversial values that allow consumers to advertise themselves as safe, smart and prosocial.
Risk heuristics and algorithm aversion
When the first traffic fatality involving Tesla’s Autopilot occurred in May 2016, it was covered by every major news organization — a feat unmatched by any of the other 40,200 US traffic fatalities that year. We can expect an even larger reaction the first time an autonomous vehicle kills a pedestrian, or kills a child, or the first time two autonomous vehicles crash into each other. Outsized media coverage of crashes involving autonomous vehicles may feed and amplify people’s fears by tapping into the availability heuristic (risks are subjectively higher when they come to mind easily) and the affect heuristic (risks are perceived to be higher when they evoke a vivid emotional reaction). As with airplane crashes, the more disproportionate — and disproportionately sensational — the coverage that autonomous vehicle accidents receive, the more exaggerated people’s perceptions of these cars’ risks and dangers will be in comparison to those of traditional human-driven ones. Worse, for autonomous vehicles, these reactions may be compounded by algorithm aversion6, the tendency for people to more rapidly lose faith in an erring decision-making algorithm than in humans making comparable errors. These reactions could derail the adoption of autonomous vehicles through numerous paths; they could directly deter consumers, provoke politicians to enact suffocating restrictions, or create outsized liability issues — fuelled by court and jury overreactions — that compromise the financial feasibility of autonomous vehicles. Each path could slow or even stall widespread adoption.
Countering these powerful psychological effects may prove especially difficult. Nevertheless, there are opportunities. Autonomous vehicle spokespeople should prepare the public for the inevitability of accidents — not overpromising infallibility, but still emphasizing autonomous vehicles’ safety advantages over human drivers. One barrier that prevents people from adopting (superior) algorithms over human judgment is overconfidence in one’s own performance7 — something famously prevalent in driving. Manufacturers should also be open about algorithmic improvements. Autonomous vehicles are better portrayed as being perfected, not as being perfect. Politicians and regulators can also play a role in managing overreaction. Though human themselves, and ultimately answerable to the public, legislators should resist capitulating to the public’s fears of low-probability risks8. They should instead educate the public about the actual risks and, if moved to act, do so in a calculated way, perhaps by offering the public ‘fear placebos’8 — high-visibility, low-cost gestures that do the most to assuage the public’s fears without undermining the real benefits that autonomous vehicles might bring.
Asymmetric information and the theory of the machine mind
The dubious reputation of the CIA is sometimes blamed on the asymmetry between the secrecy of their successes and the broad awareness of their failures. Autonomous vehicles will face a similar challenge. Passengers will be acutely aware of the cars’ rare failures — leading to the issues described above — but may be blissfully unaware of all the small successes and optimizations. This asymmetry of information is part of a larger psychological barrier to trust in autonomous vehicles: the opacity of the decision-making occurring under the hood. If trust is characterized by the willingness to yield vulnerability to another entity, it is critical that people can comfortably predict and understand the behaviour of the other entity. Indeed, the European Union General Data Protection Regulation recently established the citizen’s “right to [...] obtain an explanation of the decision reached [...] and to challenge the decision” made by algorithms9. However, full transparency may be neither possible nor optimal. Autonomous vehicle intelligence is driven in part by machine learning, in which computers learn increasingly sophisticated patterns without being explicitly taught. This leaves underlying decision-making processes opaque even to the programmer (let alone the passenger). But even if a detailed account of the computer’s decisions were available, it would only offer the end-user an incomprehensible deluge of information. The trend in many ‘lower stakes’ computer interfaces (for example, web browsers) has thus been in the opposite direction — hiding the complex decision-making of the machine to present a simple, minimalistic user experience. For autonomous vehicles, although some transparency can improve trust, too much transparency into the explanations for the car’s actions can overwhelm the passenger, thereby increasing anxiety10.
Fig. 1 | A schematic example of the ethical trade-offs that autonomous vehicles will need to make between the lives of passengers and pedestrians3. The visual comes from the Moral Machine website (http://moralmachine.mit.edu), which we launched to collect large-scale data from the public. So far, we have collected over 30 million decisions from over 3 million people.
Thus, what is most important for generating trust and comfort is not full transparency but communication of the correct amount and type of information to allow people to develop mental models (an abstract representation of the entity’s perceptions and decision rules) of the cars5 — a sort of theory of the machine mind. There is already a robust literature investigating what information is most crucial to communicate; however, most of this research has been conducted on artificial intelligence in industrial, residential or software interface settings. Not all of it will be perfectly transferable to autonomous vehicles, so researchers need to investigate what information best fosters predictability, trust and comfort in this new and specific setting. Moreover, autonomous vehicles will need to communicate not just with their passengers, but with pedestrians, fellow drivers and the other stakeholders on the road. Currently, people decipher the intentions of other drivers through explicit signals (such as indicators, horns and gestures) and through assumptions based on the mental models formed of drivers (‘Why is she slowing down here?’ or ‘Why is he positioning himself like that?’). Everyone on the road will need to adjust the mental models they have formed of human drivers to fit autonomous vehicles, and the more research delineating what information people find crucial and comforting, the more seamless and less panicky this transition will be.
A new social contract
Automobiles began their transformational integration into our lives over a century ago. In this time, a system of laws regulating the behaviour of drivers and pedestrians, and the designs and practices of manufacturers, has been introduced and continuously refined. Today, the technologies that mediate these regulations, and the norms, fines and other punishments that enforce them, maintain just enough trust in the traffic system to keep it tolerable. Tomorrow, the integration of autonomous cars will be similarly transformational, but will occur over a much shorter timescale. In that time, we will need a new social contract that provides clear guidelines about who is responsible for different kinds of accidents, how monitoring and enforcement will be performed, and how trust among all stakeholders can be engendered. Many challenges remain — hacking, liability and labour displacement issues, most significantly — but this social contract will be bound as much by psychological realities as by technological and legal ones. We have identified several here, but more work remains. We believe it is morally imperative for behavioural scientists of all disciplines to weigh in on this contract. Every day the adoption of autonomous cars is delayed is another day that people will continue to lose their lives to the non-autonomous human drivers of yesterday.
Prolonged transport and cannibalism of mummified infant remains by a Tonkean macaque mother
Prolonged transport and cannibalism of mummified infant remains by a Tonkean macaque mother. Arianna De Marco, Roberto Cozzolino, and Bernard Thierry. Primates. https://link.springer.com/article/10.1007/s10329-017-0633-8
Abstract: Observations of animals’ responses to dying or dead companions raise questions about their awareness of states of helplessness or death of other individuals. In this context, we report the case of a female Tonkean macaque (Macaca tonkeana) that transported the body of her dead infant for 25 days and cannibalized its mummified parts. The mother appeared agitated in the first 2 days after the birth. She then took care of her infant’s corpse, which progressively dried and became mummified. In a third stage, the mother continued to transport the corpse as it started disintegrating, and she gnawed and consumed some parts of the remains. Our observations suggest that mummification of the body favored persistence of maternal behaviors by preserving the body’s shape. The female gradually proceeded from strong attachment to the infant’s body to decreased attachment, then finally full abandonment of the remains.
Anticipated regret from erring after following advice is greater than anticipated regret from erring after ignoring advice
Tzini, K., and Jain, K. (2017) The Role of Anticipated Regret in Advice Taking. J. Behav. Decision Making, doi: 10.1002/bdm.2048.
Abstract: Across five studies, we demonstrate that anticipated future regret influences receptiveness to advice. While making a revision to one's own judgment based on advice, people can anticipate two kinds of future regret: (a) the regret of following non-beneficial advice and (b) the regret of ignoring beneficial advice. In studies 1a (scenario task) and 1b (judgment task), we find that anticipated regret from erring after following advice is greater than anticipated regret from erring after ignoring advice. Furthermore, receptiveness decreases as the difference between anticipated regret from following and from ignoring advice increases. In study 2, we demonstrate that perceived justifiability of one's own initial decision is greater than that of advice. This difference in perceived justifiability influences anticipated regret, which, in turn, influences receptiveness. In study 3, we investigate the effect of advisor's expertise on perceived justifiability, anticipated regret, and receptiveness. In study 4, we propose and test an intervention to improve receptiveness based on self-generation of advice justifications. Participants who were asked to self-generate justifications for the advice were more receptive to it. This effect was mediated by perceived justifiability and anticipated regret. These findings shed further light on what prevents people from being receptive to advice and how this can be improved.
Resource Allocation Decisions: When Do We Sacrifice Efficiency in the Name of Equity?
Resource Allocation Decisions: When Do We Sacrifice Efficiency in the Name of Equity? Tom Gordon-Hecke et al. Interdisciplinary Perspectives on Fairness, Equity, and Justice, pp 93-105. https://link.springer.com/chapter/10.1007/978-3-319-58993-0_6
Abstract: Equity, or the idea that one should be compensated according to one’s respective contribution, is a fundamental principle for resource allocation. People tend to endorse equity in a wide range of contexts, from interpersonal relationships to public policy. However, at times, equity might come at the expense of efficiency. What do people do when they must waste resources to maintain equity? In this chapter, we adopt a behavioral perspective on such equity–efficiency trade-offs, reviewing the relevant findings from the social psychology, judgment and decision-making and behavioral economics literature. We show that whereas allocators will often choose to waste in the name of equity, this is not necessarily the case. We review various psychological aspects that affect the allocators’ decision.
The Facial Width-to-Height Ratio Predicts Sex Drive, Sociosexuality, and Intended Infidelity
The Facial Width-to-Height Ratio Predicts Sex Drive, Sociosexuality, and Intended Infidelity. Steven Arnocky et al. Archives of Sexual Behavior, https://link.springer.com/article/10.1007/s10508-017-1070-x
Abstract: Previous research has linked the facial width-to-height ratio (FWHR) to a host of psychological and behavioral characteristics, primarily in men. In two studies, we examined novel links between FWHR and sex drive. In Study 1, a sample of 145 undergraduate students revealed that FWHR positively predicted sex drive. There were no significant FWHR × sex interactions, suggesting that FWHR is linked to sexuality among both men and women. Study 2 replicated and extended these findings in a sample of 314 students collected from a different Canadian city, which again demonstrated links between the FWHR and sex drive (also in both men and women), as well as sociosexuality and intended infidelity (men only). Internal meta-analytic results confirm the link between FWHR and sex drive among both men and women. These results suggest that FWHR may be an important morphological index of human sexuality.
Contrary to prediction, kids these days are better able to delay gratification than they were in the past
Kids These Days: 50 years of the Marshmallow task. Protzko. https://osf.io/j9tuz/
Abstract: Have children gotten worse at their ability to delay gratification? We analyze the past 50 years of data on the Marshmallow test of delay of gratification. Children must wait to get two preferred treats; if they cannot wait, they only get one. Duration for how long children can delay has been associated with a host of positive life outcomes. Here we provide the first evidence on whether children’s ability to delay gratification has truly been decreasing, as theories of technology or a culture of instant gratification have predicted. Before analyzing the data, we polled 260 experts in cognitive development, 84% of whom believed kids these days are getting worse or are no different. Contrary to this prediction, kids these days are better able to delay gratification than they were in the past, corresponding to a fifth of a standard deviation increase in ability per decade.
Belief in free will affects causal attributions when judging others’ behavior
Belief in free will affects causal attributions when judging others’ behavior. Oliver Genschow, Davide Rigoni, and Marcel Brass. Proceedings of the National Academy of Sciences, vol. 114 no. 38, 10071–10076, doi: 10.1073/pnas.1701916114
Significance: The question whether free will exists or not has been a matter of debate in philosophy for centuries. Recently, researchers claimed that free will is nothing more than a myth. Although the validity of this claim is debatable, it attracted much attention in the general public. This raises the crucial question whether it matters if people believe in free will or not. In six studies, we tested whether believing in free will is related to the correspondence bias—that is, people’s automatic tendency to overestimate the influence of internal as compared to external factors when interpreting others’ behavior. Overall, we demonstrate that believing in free will increases the correspondence bias and predicts prescribed punishment and reward behavior.
Abstract: Free will is a cornerstone of our society, and psychological research demonstrates that questioning its existence impacts social behavior. In six studies, we tested whether believing in free will is related to the correspondence bias, which reflects people’s automatic tendency to overestimate the influence of internal as compared to external factors when interpreting others’ behavior. All studies demonstrate a positive relationship between the strength of the belief in free will and the correspondence bias. Moreover, in two experimental studies, we showed that weakening participants’ belief in free will leads to a reduction of the correspondence bias. Finally, the last study demonstrates that believing in free will predicts prescribed punishment and reward behavior, and that this relation is mediated by the correspondence bias. Overall, these studies show that believing in free will impacts fundamental social-cognitive processes that are involved in the understanding of others’ behavior.
Maximizers maximize for both themselves and others, whereas satisficers satisfice for themselves but maximize for others
Do maximizers maximize for others? Self-other decision-making differences in maximizing and satisficing. Mo Luan , Lisha Fu , and Hong Li. Personality and Individual Differences, Volume 121, 15 January 2018, Pages 52–56, https://doi.org/10.1016/j.paid.2017.09.009
Highlights
• Self-other decision-making differences between maximizers and satisficers exist.
• Maximizers maximize for both themselves and others.
• Satisficers satisfice for themselves but maximize for others.
Abstract: The current research provides initial evidence of self-other decision-making differences between maximizers and satisficers by focusing on how they make the tradeoff between the value and the effort an option requires when deciding for themselves and for others. Study 1 demonstrates that maximizers prefer a high-value but effort-consuming option both for themselves and for others, whereas satisficers prefer that option for others but not for themselves. Study 2 further shows that to attain high value with a choice, maximizers not only are willing to expend more effort themselves but also advise others to expend more effort; however, satisficers choose to expend less effort themselves but do not advise others to do so. In conclusion, the current research contributes to the relevant literature by demonstrating that maximizers maximize for both themselves and others, whereas satisficers satisfice for themselves but maximize for others.
Keywords: Self-other decision-making; Maximizing; Satisficing; Value; Effort
Consumers in durable goods markets are rational and forward looking
Are consumers forward looking? Evidence from used iPhones. Kanis Saengchote & Voraprapa Nakavachara. Applied Economics Letters, Pages 1-5. http://dx.doi.org/10.1080/13504851.2017.1380286
ABSTRACT: This study examines the impact of planned obsolescence – the introduction of new models to make existing models obsolete – on secondary markets for mobile phones. Using data on over 320,000 used iPhone listings on Thailand’s largest online marketplace, we document that iPhone prices decrease with age, around 2.8–3.2% for each passing month. We find no evidence that the price decline accelerates after launches of new models (i.e. obsolescence), lending support to the view that consumers in durable goods markets are rational and forward looking.
KEYWORDS: Durable goods, mobile phones, product obsolescence, forward-looking consumer