Abstract: Using our own survey of students at two Swiss institutions of higher education, we examine respondents' wage expectations along two main lines. First, we investigate the rationality of wage expectations by comparing average expected wages from our sample with those of similar graduates; further, we examine how our respondents revise their expectations when provided with information about actual wages. Second, using causal mediation analysis, we test whether a rich set of personal and professional controls, including preferences on family formation and number of children in addition to professional preferences, accounts for the difference in wage expectations across genders. Results suggest that both males and females overestimate their wages compared to actual ones and that males respond in an overconfident manner to information about realized wages. Personal mediators alone cannot explain the indirect effect of gender on wage expectations; combined with professional mediators, however, they produce a quantitatively large reduction in the unexplained effect of gender on wage expectations. Nonetheless, a non-negligible and statistically significant direct (or unexplained) effect of gender on wage expectations remains in several, but not all, specifications.
6 Conclusion
Using novel survey data on students at the Business School of the Bern University of Applied Sciences (BUAS) and the Faculty of Economic and Social Sciences of the University of Fribourg, this paper advances the literature on gender differences in wage expectations in two specific ways. First, it determines whether these gender differences are rational by comparing expected wages from our respondents to realized wages of comparable graduates, and by investigating how respondents adjust their wage expectations when information about actual wages is provided. Second, using an inverse probability weighting method in the context of causal mediation, it examines whether a rich set of professional and personal controls accounts for the difference in wage expectations across genders.
In line with the literature, we confirm the presence of gender differences in wage expectations in our survey results. The difference between male and female expected wages is about one salary class (CHF500) upon graduation and roughly 1.4 salary classes three years thereafter (about 19 and 17% of female average expected wages, respectively). The evidence suggests that both males and females overestimate their wages relative to realized wages of comparable graduates. Further, results from an information intervention—about median wages earned in Switzerland—show that males alone (incorrectly) revise their expected wages upward by about 0.6 of a salary class (CHF300) when forecasting wages three years after graduation. This is possibly the result of overconfidence.
Using mediation analysis (which makes the underlying endogeneity issues explicit), we find that the inclusion of a rich set of personal and professional mediators—not commonly included in survey data—greatly reduces the direct, unexplained effect of gender on wage expectations. While personal mediators alone do not contribute to the indirect effect of gender on wage expectations, when added to professional mediators they lead to a reduction of about 30% in the contribution of the direct effect of gender on wage expectations and to a similar increase in the indirect effect (when the decomposition of these effects takes males as the reference). Further, when professional and personal mediators are jointly considered, the direct, unexplained effect of gender is greatly attenuated, both in size and in statistical significance. Nonetheless, a non-negligible and statistically significant direct (or unexplained) effect of gender on wage expectations remains in several, but not all, specifications. Our results are stable under different specifications and trimming thresholds.
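As a concrete illustration of the decomposition behind these numbers, the sketch below shows how the direct and indirect effects of gender on expected wages could be estimated with inverse probability weighting (in the spirit of Huber-type IPW mediation). This is a minimal, hypothetical sketch: the variable names, logistic propensity models, and simple trimming rule are illustrative assumptions, not the authors' estimator.

```python
# Hypothetical sketch of an IPW mediation decomposition of the gender gap in
# expected wages. Illustrative only; not the paper's own code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_mediation(y, d, M, X, trim=0.05):
    """y: expected wage; d: gender dummy (1 = male); M: mediators; X: covariates."""
    y, d = np.asarray(y, float), np.asarray(d, int)
    XM = np.column_stack([X, M])
    # Propensity of d given covariates only, and given covariates plus mediators
    p_x = LogisticRegression(max_iter=1000).fit(X, d).predict_proba(X)[:, 1]
    p_xm = LogisticRegression(max_iter=1000).fit(XM, d).predict_proba(XM)[:, 1]
    # Drop observations with extreme propensities (cf. the trimming thresholds
    # mentioned in the text)
    keep = (p_x > trim) & (p_x < 1 - trim) & (p_xm > trim) & (p_xm < 1 - trim)
    y, d, p_x, p_xm = y[keep], d[keep], p_x[keep], p_xm[keep]

    def wmean(w):                       # weight-normalized mean of y
        return np.sum(w * y) / np.sum(w)

    ey1_m1 = wmean(d / p_x)                                # E[Y(1, M(1))]
    ey0_m0 = wmean((1 - d) / (1 - p_x))                    # E[Y(0, M(0))]
    ey1_m0 = wmean(d * (1 - p_xm) / (p_xm * (1 - p_x)))    # E[Y(1, M(0))]
    ey0_m1 = wmean((1 - d) * p_xm / ((1 - p_xm) * p_x))    # E[Y(0, M(1))]

    # Total effect and its two decompositions, one per choice of reference group
    return {
        "total": ey1_m1 - ey0_m0,
        "direct_1": ey1_m1 - ey0_m1, "indirect_0": ey0_m1 - ey0_m0,
        "direct_0": ey1_m0 - ey0_m0, "indirect_1": ey1_m1 - ey1_m0,
    }
```

The two decompositions (total = direct_1 + indirect_0 = direct_0 + indirect_1) correspond to the two possible reference groups; the roughly 30% shift from the direct to the indirect effect reported above refers to the decomposition that takes males as the reference.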
The relationship between baseline pupil size and intelligence. Jason S. Tsukahara, Tyler L. Harrison, Randall W. Engle. Cognitive Psychology, Volume 91, December 2016, Pages 109-123. https://doi.org/10.1016/j.cogpsych.2016.10.001
Highlights
• Higher order cognition is related to baseline pupil size.
• Baseline pupil size is uniquely related to fluid intelligence.
• Implications for resting-state brain organization and locus coeruleus function.
Abstract: Pupil dilations of the eye are known to correspond to central cognitive processes. However, the relationship between pupil size and individual differences in cognitive ability is not as well studied. A peculiar finding that has cropped up in this research is that those high on cognitive ability have a larger pupil size, even during a passive baseline condition. Yet these findings were incidental and lacked a clear explanation. Therefore, in the present series of studies we systematically investigated whether pupil size during a passive baseline is associated with individual differences in working memory capacity and fluid intelligence. Across three studies we consistently found that baseline pupil size is, in fact, related to cognitive ability. We showed that this relationship could not be explained by differences in mental effort, and that the effect of working memory capacity and fluid intelligence on pupil size persisted even after 23 sessions and after taking into account the effect of novelty or familiarity with the environment. We also accounted for potential confounding variables such as age, ethnicity, and drug substances. Lastly, we found that it is fluid intelligence, more so than working memory capacity, that is related to baseline pupil size. In order to provide an explanation and suggestions for future research, we also consider our findings in the context of the underlying neural mechanisms involved.
Are Emotions Natural Kinds After All? Rethinking the Issue of Response Coherence. Daniel Sznycer, Adam Scott Cohen. Evolutionary Psychology, June 1, 2021. https://doi.org/10.1177/14747049211016009
Abstract: The synchronized co-activation of multiple responses—motivational, behavioral, and physiological—has been taken as a defining feature of emotion. Such response coherence has been observed inconsistently, however, and this has led some to view emotion programs as lacking biological reality. Yet, response coherence is not always expected or desirable if an emotion program is to carry out its adaptive function. Rather, the hallmark of emotion is the capacity to orchestrate multiple mechanisms adaptively—responses will co-activate in stereotypical fashion or not depending on how the emotion orchestrator interacts with the situation. Nevertheless, might responses cohere in the general case where input variables are specified minimally? Here we focus on shame as a case study. We measure participants’ responses regarding each of 27 socially devalued actions and personal characteristics. We observe internal and external coherence: The intensities of felt shame and of various motivations of shame (hiding, lying, destroying evidence, and threatening witnesses) vary in proportion (i) to one another, and (ii) to the degree to which audiences devalue the disgraced individual—the threat shame defends against. These responses cohere both within and between the United States and India. Further, alternative explanations involving the low-level variable of arousal do not seem to account for these results, suggesting that coherence is imparted by a shame system. These findings indicate that coherence can be observed at multiple levels and raise the possibility that emotion programs orchestrate responses, even in those situations where coherence is low.
We asked if response coherence in shame can be observed in the general case where input variables to the shame system are specified minimally. We observed internal coherence: Five shame responses—felt shame and the motivations to hide, to lie, to destroy evidence, and to threaten a witness—in general covaried with one another in direction and intensity from one event (scenario) to the next. This is in line with the internal coherence that has been documented in some (but not all) of the previous research on response coherence in emotion.
In addition, we observed two novel patterns of response coherence predicted from an adaptationist framework: external coherence and cross-cultural coherence. Regarding external coherence, five shame responses in the individual in general covaried in direction and intensity with the devaluation expressed by audiences from one event to the next. We observed internal and external coherences within the United States and India. And regarding cross-cultural coherence, five shame responses in one country in general covaried in direction and intensity both with the five shame responses and with audience devaluation in the other country from one event to the next. Importantly, the intensity of the motivation to communicate reputationally damaging information to other people—a response that involves arousal—failed to correlate positively, and in fact correlated mostly negatively, with the intensities of audience devaluation and with the five shame responses across events. This is expected if the internal, external, and cross-cultural coherence observed here reflects the operation of a shame orchestrator. But this is not expected if response coherence in emotion stems from low-level affective variables such as arousal. Of course, the alternative evaluated here (communicate event) is but one of a large set of possible alternatives involving arousal. Thus, future research is needed to test against additional alternatives involving arousal, as well as valence and culturally-variable emotion concepts.
Adaptationist thinking suggests that the hallmark of emotion is the capacity to adaptively orchestrate multiple adaptations, and that response coherence is incidental to adaptive orchestration. Evidence on response coherence—whether positive, null, or negative—is therefore not dispositive of whether or not emotion programs are natural kinds. Notwithstanding this critical point, evidence on response coherence can be of value. Data on incidental phenomena are valuable as raw data after all, and anomalies (in affective science, inconsistent observations of response coherence across studies, for instance) can catalyze scientific progress (Kuhn, 1970). The present findings go beyond internal coherence, however. That shame responses can cohere between cultures and also externally, matching in intensity the devaluation expressed by audiences (i.e., matching in intensity the adaptive problem hypothesized to have selected for shame), suggests that shame, and perhaps other emotions (Sznycer & Cohen, 2021; Sznycer, Sell, & Dumont, 2021), are functionally specialized adaptations.
An alternative account, one that is consistent with the theory of constructed emotion, is that the cross-cultural coherences observed here were imparted by the English concept of “shame” and not by a shame neurocognitive system. This is plausible, considering that our stimuli were presented in one and the same language (English) both in the United States and in India, because emotion words have meanings that are more similar in language groups that are closer in linguistic space (Jackson et al., 2019). Similarly, the US–India similarities observed here may have stemmed from culturally-specific concepts or schemas with which people interpret their own affect in shame (see Barrett, 2014). These concepts may be similar across industrial societies such as the United States and India even when they are idiosyncratic to industrial societies; and so these concepts may be shared by our American and Indian participants even when these concepts are not universal. However, we note that previous research has shown cross-cultural commonalities in the feeling of shame across 15 small-scale societies with highly diverse subsistence bases (e.g., horticulture, pastoralism, fishing) and speaking highly diverse languages, including Igbo, Icé-tód, Nepali, Tuvan, and Mongolian (Sznycer, Xygalatas, Agey, et al., 2018). This suggests that the cross-cultural coherences among multiple shame responses that we observed here may have been driven by an evolved shame system. Nevertheless, further inquiry is needed to determine how generalizable the present findings are across different cultures, ecologies, and language-groups.
Further research is also needed to determine whether the patterns of coherence observed here generalize to other discrediting actions and personal characteristics, to the reactive (vs. prospective) operation of shame in response to actual discrediting events, to the various cognitive, behavioral, and physiological responses that shame appears to control (other than the motivations studied here), and to responses measured within-situations and within-individuals (see Mauss et al., 2005; Reisenzein, 2000). In addition, further research is necessary to know whether and how patterns of response coherence are modulated by a host of situational variables that are relevant to shame (e.g., co-presence of an audience, characteristics of the audience, actual responses of the audience) but were not studied here.
It is important to reiterate that the kinds of comprehensive tests that are necessary to corroborate or deny the hypothesis of adaptive orchestration (for shame or for other emotions) have, to our knowledge, not been conducted yet. We suspect that mapping emotion decision trees systematically and comprehensively will be challenging. Shame, for instance, is likely to be sensitive to many input variables and to implement many contingencies. Moreover, high-order interactions between input variables are expected. The simple (hypothetical) conditional “appease (or blame or threaten) when others have seen your disgraceful action, but not when they haven’t seen you” might be conditioned further by additional external and internal variables. For example, when others have seen your disgraceful action, active shame responses might be delivered in general. But there might be exceptions. Active shame responses might not be delivered, even when you have been seen, if the individuals in the audience are few, have low physical formidability or status, or are known to lack the strategic information to grasp the true meaning of the disgraceful action.
Combat stress in a small-scale society suggests divergent evolutionary roots for posttraumatic stress disorder symptoms. Matthew R. Zefferman and Sarah Mathew. Proceedings of the National Academy of Sciences, April 13, 2021 118 (15) e2020430118; https://doi.org/10.1073/pnas.2020430118
Significance: Did PTSD and combat stress evolve as a universal human response to danger? Or are they culturally specific? We addressed this question by interviewing 218 warriors from the Turkana, a non-Western small-scale society, who engage in high-risk lethal cattle raids. We found that symptoms that may have evolved to protect against danger, like flashbacks and startle response, were high in the Turkana and best predicted by combat exposure. However, symptoms that are similar to depression were lower in the Turkana compared to American service members and were better predicted by moral violations. These findings suggest different evolutionary roots for different symptoms, which may lead to better diagnosis and treatment.
Abstract: Military personnel in industrialized societies often develop posttraumatic stress disorder (PTSD) during combat. It is unclear whether combat-related PTSD is a universal evolutionary response to danger or a culture-specific syndrome of industrialized societies. We interviewed 218 Turkana pastoralist warriors in Kenya, who engage in lethal cattle raids, about their combat experiences and PTSD symptoms. Turkana in our sample had a high prevalence of PTSD symptoms, but Turkana with high symptom severity had lower prevalence of depression-like symptoms than American service members with high symptom severity. Symptoms that facilitate responding to danger were better predicted by combat exposure, whereas depressive symptoms were better predicted by exposure to combat-related moral violations. The findings suggest that some PTSD symptoms stem from an evolved response to danger, while depressive PTSD symptoms may be caused by culturally specific moral norm violations.
Our findings demonstrate that combat-related PTSD symptoms are not limited to industrialized societies and can occur even in small-scale societies where warriors are venerated and socially embedded in tight-knit communities. In particular, learning-and-reacting symptoms are potentially evolved responses to acute dangers such as those encountered in combat. These symptoms had high prevalence among both American service members and Turkana warriors. Moreover, among the Turkana, combat exposure and combat outcomes were more consistently associated with learning-and-reacting symptom severity than with depressive symptom severity.
Our findings have implications for understanding the roots of moral injury (59, 71, 72), trauma caused by “perpetrating, failing to prevent, or bearing witness to acts that transgress deeply held moral beliefs and expectations” (ref. 59, p. 695). For example, moral injury can occur when soldiers violate morally held beliefs against killing civilians (73). Moral injury might also be the primary cause of combat stress in drone pilots who, even though they are flying combat missions from a control room far from danger, have a high-definition view of the human suffering caused by their missile strikes (74). Our statistical models suggest a relationship between moral injury and depressive PTSD symptoms in particular. Combat exposure and outcome measures are not as important predictors for depressive symptoms as they are for learning-and-reacting symptoms among the Turkana. Instead, predictors assessing exposure to moral violations as perpetrators or victims and experiencing social sanctions are associated with depressive symptoms. Additionally, having moral concerns for a larger segment of people from the opposing side was more strongly associated with depressive symptoms than with learning-and-reacting symptoms. All of this supports the idea that depressive symptoms may be a response to expected social sanctioning due to moral violations, which is consistent with some evolutionary theories of depression (56, 57). However, it is also possible that depressive symptoms, whatever their cause, may make instances of moral injury more salient to study participants. Additional experimental, longitudinal, and cross-cultural research may resolve the direction of causality.
Consistent with the association in the Turkana between expected social sanctions and depressive symptoms, Turkana warriors with high symptom severity were less prone than American service members to experience some of the depressive symptoms of PTSD. This could be because the actual or perceived social risks of participating in war are lower for Turkana warriors than for American service members. Turkana warriors are venerated and there is widespread support from their community for going on raids and defending the Turkana from raids. They do not expect to face moral disapproval for participating in combat (43) (although they do face moral disapproval for cowardice and can be blamed for the death of comrades). In fact, those who have killed in combat are often celebrated in Turkana society, with many warriors undergoing akiger, a ritual that scars the warrior’s body to mark him as someone who has killed. Warriors with akiger scars are highly regarded by both men and women. Additionally, raid participation is high among Turkana men, so warriors are almost always in the company of other warriors with similar combat experiences. Many women and children too have experienced raids by other groups. As such, combat experiences are commonly shared and a frequent topic of discussion in Turkana society. There is little to no stigma associated with sharing the details of combat (43).
By contrast, in the United States and other industrialized nation states, support for war and those who participate in war is often far from universal, and killing, even in combat, is rarely celebrated. American soldiers fight in foreign countries away from the civilian population and, upon returning, they may perceive disapproval of their experiences and actions from friends and family. Additionally, most Americans cannot relate to the experiences of those who have participated in combat. Consequently, warfare presents a moral conflict because what is considered a soldier’s duty in combat can violate prevailing moral norms within the soldier’s society. American soldiers may therefore have a heightened awareness of potential social repercussions especially as they integrate back into civilian life. Veterans’ support groups and group therapy replicate some aspects of Turkana society by allowing veterans to share their experiences with each other, but Turkana warriors receive stronger signals of social support and understanding from all members of their communities.
Since most PTSD research has not focused on symptom-specific causes, moral injury research is relatively new, and combat trauma research has not taken a functional evolutionary perspective, there has been little attempt to associate depressive PTSD symptoms with moral injury in the Western context. Characterizing symptom-specific patterns of PTSD in Western military personnel, as we have done with the Turkana, would be useful to further evaluate the proposed theory, delineate how moral injury manifests, and assess how it relates to PTSD.
The effect of killing in combat on PTSD is more ambiguous in the Turkana than in American service members. While killing in combat is an important contributor to PTSD in American service members who served in Iraq and Afghanistan (70, 75), it was not present in the top models of total, learning-and-reacting, or depressive symptom severity in the Turkana. On average, the direction of influence is to reduce learning-and-reacting symptoms but increase depressive symptoms, opening the possibility that it might be a contributor to moral injury even in a population where killing in combat confers prestige. While this was counter to our prediction, it is consistent with some ethnographic observations. The Turkana, as well as neighboring pastoral groups, have culturally specific idioms of distress associated with killing in the war zone, including perceptions of being polluted, beliefs that killing portends future misfortune, and feeling haunted by the enemy’s ghost, which suggest that killing of enemies is a potentially morally hazardous event (76). Among Samburu pastoralists, war zone mercy occurs even in circumstances where killing of the opponent would be normative, indicating that warriors may feel empathy toward their opponents (76) and can thus perceive killing as morally hazardous.
Our results imply that while killing is potentially morally hazardous across cultures, culturally specific institutions mediate its role in causing PTSD, which clarifies why killing is more risky for American service members than for Turkana warriors. First, norms regarding killing of individuals from the opposing side are less restrictive among the Turkana than in nation-state warfare. Unlike in nation-state warfare, the Turkana have a high level of moral autonomy in who they kill in combat, a pattern noted in other pastoral societies (76). Additionally, systems of social support within Turkana society may help alleviate its moral ambiguity. In particular, the Turkana have three post-raid rituals that warriors can engage in that are specifically designated for those who have killed enemies in combat (43). In addition to akiger, which is optional, akipur is a purification ritual viewed as mandatory for anyone who has killed an enemy in combat, to protect them from weakening and slowly wasting away. Another ritual, ngitebus, protects a warrior from the ghosts of slain enemy warriors. It is considered optional, but it is almost always performed preventatively in conjunction with akipur. It can also be performed any time after a haunting occurs. For instance, one warrior, due to repeated hauntings, estimated that he underwent ngitebus 11 times over 20 years. These rituals, which require the participation of other community members, could serve as a cue to warriors that the community views their act of killing as morally acceptable. The lack of such rituals pertaining to killing, especially in populations with expansive moral beliefs and restrictive norms of killing in combat, may contribute to the heightened depressive symptoms and moral injury experienced by US military service members.
How Do Values Affect Behavior? Let Me Count the Ways. Lilach Sagiv, Sonia Roccas. Personality and Social Psychology Review, May 28, 2021. https://doi.org/10.1177/10888683211015975
Abstract: The impact of personal values on preferences, choices, and behaviors has evoked much interest. Relatively little is known, however, about the processes through which values impact behavior. In this conceptual article, we consider both the content and the structural aspects of the relationships between values and behavior. We point to unique features of values that have implications to their relationships with behavior and build on these features to review past research. We then propose a conceptual model that presents three organizing principles: accessibility, interpretation, and control. For each principle, we identify mechanisms through which values and behavior are connected. Some of these mechanisms have been exemplified in past research and are reviewed; others call for future research. Integrating the knowledge on the multiple ways in which values impact behavior deepens our understanding of the complex ways through which cognition is translated into action.
Keywords: personal values, values and behavior, personality structure, individual differences
Adult attachment and dating strategies: How do insecure people attract mates? Claudia Chloe Brumbaugh, R. Chris Fraley. Personal Relationships, November 4, 2010. https://doi.org/10.1111/j.1475-6811.2010.01304.x
Abstract: Research shows that, when asked to choose among secure or insecurely attached partner prototypes, people tend to select secure individuals as their first choice. Despite this pattern, not everyone chooses secure partners in reality. The goal of this study was to examine the ways in which insecure individuals present themselves that might make them attractive to others. To achieve this goal, participants were led to believe that they were interacting with a possible date. We found that insecure individuals presented themselves as warm, engaging, and humorous when communicating with potential mates. These findings suggest that insecure people have numerous dating tactics and positive qualities that they display to win over romantic partners.
Support in response to a spouse’s distress: Comparing women and men in same-sex and different-sex marriages. Mieke Beth Thomeer, Amanda M. Pollitt, Debra Umberson. Journal of Social and Personal Relationships, March 3, 2021. https://doi.org/10.1177/0265407521998453
Abstract: Support for a spouse with psychological distress can be expressed in many different ways. Previous research indicates that support expression is shaped by gender, but we do not know much about how support within marriage is provided in response to a spouse’s distress outside of a different-sex couple context. In this study, we analyze dyadic data from 378 midlife married couples (35–65 years; N = 756 individuals) within the U.S. to examine how men and women in same- and different-sex relationships provide support when they perceive that their spouse is experiencing distress. We find women in different-sex couples are less likely to report taking care of their distressed spouse’s tasks or giving their distressed spouse more personal time and space compared to women in same-sex couples and men. We also find that men in different-sex couples are less likely to report encouraging their spouse to talk compared to men in same-sex couples and women. Being personally stressed by a spouse’s distress is positively associated with providing support to that spouse, whereas feeling that a spouse’s distress is stressful for the marriage is negatively associated with providing support. This study advances understanding of gendered provisions of support in response to psychological distress in marriage, moving beyond a framing of women as fundamentally more supportive than men to a consideration of how these dynamics may be different or similar in same- and different-sex marital contexts.
Keywords: Distress, dyadic data analysis, gay/lesbian relationships, gender, gender differences, marriage, mental health, social support, stress
Dyadic effects of attachment and relationship functioning. Elizabeth B. Lozano et al. Journal of Social and Personal Relationships, March 5, 2021. https://doi.org/10.1177/0265407521999443
Abstract: Some scholars have proposed that people in couples in which at least one person is secure are just as satisfied as people in couples in which both members are secure (i.e., buffering hypothesis). The present investigation tested this hypothesis by examining how relationship satisfaction varies as a function of the attachment security of both dyad members. Secondary analyses were performed using data from two studies (Study 1: 172 couples; Study 2: 194 couples) in which heterosexual dating couples were asked to complete self-reports of their own attachment style and relationship satisfaction. To evaluate the buffering hypothesis, we fit a standard actor–partner interdependence model (APIM) using SEM and added an actor × partner interaction term to our model. Contrary to expectations, our results suggested that secure partners do not “buffer” insecurely attached individuals. Moreover, partner attachment did not explain satisfaction much above and beyond actor effects. This work addresses a gap in the literature with respect to the dynamic interplay of partner pairing, allowing scholars to better understand attachment processes in romantic relationships.
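A rough sketch of the interaction test described in this abstract is given below. It is not the authors' code: the study fits the actor–partner interdependence model jointly as an SEM, whereas this simplified version estimates each partner's equation separately by OLS; the file and column names are hypothetical.

```python
# Simplified APIM-style test of the "buffering" hypothesis: does the partner's
# attachment (in)security moderate the effect of the actor's own insecurity on
# relationship satisfaction? Hypothetical column names; the original study
# estimated both partner equations jointly in an SEM.
import pandas as pd
import statsmodels.formula.api as smf

couples = pd.read_csv("couples_wide.csv")  # one row per couple, wide format

def apim_ols(data, actor, partner):
    # Actor effect, partner effect, and the actor x partner interaction;
    # the interaction term is what carries the buffering hypothesis.
    formula = (
        f"satisfaction_{actor} ~ insecurity_{actor} + insecurity_{partner}"
        f" + insecurity_{actor}:insecurity_{partner}"
    )
    return smf.ols(formula, data=data).fit()

for actor, partner in [("m", "f"), ("f", "m")]:
    print(apim_ols(couples, actor, partner).summary())
```

Under this setup, a null or negligible interaction coefficient, together with small partner main effects, would mirror the paper's conclusion that partner attachment adds little beyond actor effects.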
Eating Behavior as a New Frontier in Memory Research. Benjamin M. Seitz, A. Janet Tomiyama, Aaron P. Blaisdell. Neuroscience & Biobehavioral Reviews, June 2, 2021. https://doi.org/10.1016/j.neubiorev.2021.05.024
Highlights
• We argue eating behavior is highly intertwined with learning and memory processes.
• We review a wide range of literature to support this position.
• We identify existing gaps of knowledge and highlight areas for future research.
• We suggest memory systems may have evolved to help animals obtain food.
Abstract: The study of memory is commonly associated with neuroscience, aging, education, and eyewitness testimony. Here we discuss how eating behavior is also heavily intertwined with—and yet considerably understudied in its relation to—memory processes. Both are influenced by similar neuroendocrine signals (e.g., leptin and ghrelin) and are dependent on hippocampal functions. While learning processes have long been implicated in influencing eating behavior, recent research has shown how memory of recent eating modulates future consumption. In humans, obesity is associated with impaired memory performance, and in rodents, dietary-induced obesity causes rapid decrements to memory. Lesions to the hippocampus disrupt memory but also induce obesity, highlighting a cyclic relationship between obesity and memory impairment. Enhancing memory of eating has been shown to reduce future eating and yet, little is known about what influences memory of eating or how memory of eating differs from memory for other behaviors. We discuss recent advancements in these areas and highlight fruitful research pursuits afforded by combining the study of memory with the study of eating behavior.
Keywords: memory, eating behavior, mnemonic control of eating, episodic memory, obesity, evolution
Abstract: In the political discussion, the promotion of local food systems and short supply chains is sometimes presented as a means to increase the resilience of the food system, e.g. in the context of the COVID-19 pandemic, and it is also suggested as a means to improve the environmental footprint of the food system. Differentiating between local food systems and short supply chains, a review of the literature on the environmental, social and economic dimensions of sustainability is carried out. “Local food” cannot simply be equated with “sustainable food”; in most cases, it can neither ensure food security nor does it necessarily have a lower carbon footprint. For the environmental sustainability of food systems, many more factors matter than just transportation, not least consumers’ dietary choices. In terms of social sustainability, local food systems are not necessarily more resilient, but they can contribute to rural development and a sense of community. In terms of economic sustainability, selling via short supply chains into local markets can benefit certain farmers, while for other producers it can be more profitable to supply international markets.
Sustainability of local food systems
Environmental sustainability
Greenhouse gas emissions
In the general discussion, local food is most often linked to sustainability via the concept of “food miles”, i.e. the idea that transport-related emissions are so important that they can be used to determine a product’s “carbon footprint”. By extension, the suggestion is that local food is more sustainable because it is transported less. While this idea might be intuitive at first glance, it ignores the fact that there are many elements that impact a product’s carbon footprint more than transportation, such as land use, production processes or storage (Ritchie & Roser, 2020). Farmers who operate in more favourable environments and are more productive may therefore be able to compensate for the greater “food miles” of their produce.
The carbon footprint of food systems is much more influenced by consumers’ dietary choices than by the “localness” of the food they buy (Benis & Ferrão, 2017; Carlsson-Kanyama & González, 2009; Ritchie, 2020; Webb et al., 2013). Even eating more seasonal food, another common proposition to decrease the carbon footprint of food, is only another element of a sustainable diet that is overshadowed by the greater environmental and health benefits of dietary change, in particular to reduce overconsumption of meat (Macdiarmid, 2014). Similarly, carbon footprint reductions in local food systems can mainly be achieved with a reduction in animal source foods (Puigdueta et al., 2021).
Even when only looking at transportation, “localness” can be a poor guide to determine a product’s carbon footprint as, e.g., cargo ships or trains can exploit economies of scale and be relatively less polluting over longer distances than small trucks over shorter distances (Bell & Horvath, 2020; Schmitt et al., 2017; Tasca et al., 2017). Similarly, if consumers visit individual local producers, their carbon emissions can be greater than the emissions from the systems of large-scale suppliers (Coley et al., 2009). In short, it seems to be impossible to state that because of their localness, local food systems produce lower emissions compared to conventional ones (Paciarotti & Torregiani, 2021).
Therefore, in the literature, there is general agreement that “local” cannot be used as a proxy, let alone a guarantee, for lower greenhouse gas emissions (Table 1). “‘Longer’ supply channels generate lower environmental impacts” in terms of carbon footprint (Malak-Rawlikowska et al., 2019), and “long food supply chains may generate less negative environmental impacts than short chains (in terms of fossil fuel energy consumption, pollution, and GHG emissions)… environmental impacts of the food distribution process are not only determined by the geographical distance” (Majewski et al., 2020). In fact, transport-related GHG emissions represent only 5–6% of total GHG emissions of global food production (Crippa et al., 2021; Ritchie & Roser, 2020). The notable exception where transport can indeed be used as an indicator for a product’s poor carbon footprint is food that is transported by plane (Carlsson-Kanyama & González, 2009; Schwarz et al., 2016).
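A quick back-of-envelope calculation makes the leverage argument concrete: if transport is only about 5–6% of food-system emissions, even a drastic cut in food miles changes a product's footprint by a few percent at most. The figures below simply reuse the shares cited in this section for illustration.

```python
# Back-of-envelope illustration of why "food miles" are a weak lever.
# Transport is roughly 5-6% of food-system GHG emissions (Crippa et al., 2021;
# Ritchie & Roser, 2020); 6% below is the upper end of that range.
transport_share = 0.06

# Optimistically assume local sourcing halves transport emissions and leaves
# everything else (land use, production processes, storage) unchanged:
saving_from_localness = 0.5 * transport_share
print(f"Footprint reduction from halving transport: {saving_from_localness:.1%}")  # 3.0%

# Production-stage emissions, by contrast, differ many-fold between products,
# which is why dietary choices dominate the overall carbon footprint.
```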
Other environmental impacts
In terms of other environmental impacts, too, local food systems are not necessarily more sustainable than systems that operate at larger scales. For instance, local food systems may require more intensive farming to produce enough food to satisfy local demand; while agricultural intensification can ensure greater productivity, it also causes environmental stresses (Pradhan et al., 2015). If cropland were expanded instead, such an expansion would also have negative impacts on biodiversity and ecosystem services (Pradhan et al., 2014). For instance, in the USA, localising maize production would cause a 2.7 million ton increase in fertiliser applications and a 50 million pound increase in pesticide use per year, while the required conversion of local natural land to agricultural uses would jeopardise biodiversity (Sexton, 2009). For these other impacts, too, the methods of production and processing are what matter for reducing environmental impact; “local” or “short” is not necessarily better (Kneafsey et al., 2013).
On the other hand, local food systems that rely on short supply chains may require less packaging and reduce food losses that otherwise occur at the production and retail stages (Galli & Brunori, 2015; Tasca et al., 2017). Similarly, short supply chains can be conducive to environmentally sounder practices, e.g. due to the closer or even direct contact between consumers and producers. For a successful circular economy, spatial location can also be one factor among others (Kiss et al., 2019). Still, it is not possible to generalise, and local food does not automatically reduce negative environmental externalities (Paciarotti & Torregiani, 2021).
Social sustainability
Food security and resilience
Apart from the environmental impacts of local food systems, social aspects also matter for sustainability. In this regard, an important consideration is whether local food systems can ensure food security and whether they are more resilient. Given the differences in agro-ecological and climate conditions across localities and regions, and given vastly different population densities (urbanisation), it is perhaps not surprising that there is agreement in the literature that local food systems generally cannot ensure food security and that resilience is enhanced by strategically diversifying the food supply via trade rather than by limiting it to local production (Table 1).
In fact, less than one-third of the global population would be capable of meeting its food demand from local crop production (even if food waste is reduced, yield gaps are closed, and diets are adjusted to more efficient crop mixes), and only 11–28% could fulfil their demand for specific crops within a 100-km radius. For 26–64% of the population, that distance is even greater than 1000 km, with substantial variation between different regions and crops (Kinnunen et al., 2020). For rice and maize, only 10% of the global population can theoretically fulfil their demand within 100 km, while for other cereals and pulses less than 25% can meet their demand in foodsheds with such a relatively small radius (Verstegen, 2020). Even if foodsheds were defined at a transnational level, large parts of the globe would still depend on trade to feed themselves (Kinnunen et al., 2020). If not just staple food or calories are considered but all the nutrients, the foodsheds required for a balanced diet become even bigger (Costello et al., 2021).
Only about 400 million people worldwide live in an area where enough varieties of the food groups are produced locally (within less than 100 km²) to sustain their existing dietary compositions. Even at a continental scale, the number of food self-sufficient people increases only to around 3.3 billion (Pradhan et al., 2014). This shows the importance of international trade and global products in meeting food demands and ensuring food security (Karg et al., 2016; Pradhan et al., 2014; Schmitt et al., 2017). For instance, in Europe, international trade helped safeguard food security during the heat wave in 2003 (Puma et al., 2015).
On the other hand, countries can have legitimate concerns about risks associated with excessive reliance on trade for their food supplies (Clapp, 2017). Food security can be threatened not only by regional climate-related shocks, but also by price volatility and changes in global markets (D’Odorico et al., 2014). Therefore, while trade allows mitigating the impact of local variability of supply, a balance has to be struck between relying on local food production and suitably diversified trade in food products in order to increase the resilience of the system overall and to ensure food security, including in times of crisis.
Similarly, regarding the affordability of food, there is little evidence that short supply chains improve consumers’ access to affordable healthy food (Galli & Brunori, 2015). For instance, in Europe, local food systems can increase prices of livestock products due to the shortening of feed supply chains and concomitant increases in production costs (Deppermann et al., 2018). In contrast, global food products present substantial advantages in terms of affordability, in particular for middle and low-income consumers (Schmitt et al., 2017).
Other social impacts
When it comes to other social impacts, the literature mentions several benefits of local food systems, particularly aspects of care and links to the territory (Schmitt et al., 2017). When production and processing occur locally, they are influenced by local heritage and consumption patterns (Galli et al., 2015), contribute to the revitalisation of rural areas, provide new job opportunities especially for young people, boost farmers’ self-esteem and help create relationships between city and countryside (Mancini et al., 2019; Mundler & Laughrea, 2016), which can promote community development (Karg et al., 2016). The stability of local food systems may be overestimated, though, as there can be substantial flux of actors and social networks can decay over time (Brinkley et al., 2021).
However, in the literature, social benefits are more often linked to short food supply chains rather than to local food systems as such. It is short supply chains that can favour the interaction and connection between farmers and consumers and thereby promote the development of trust and social capital that in turn can generate a sense of local identity and community and contribute to social inclusion (Galli & Brunori, 2015; Kiss et al., 2019; Kneafsey et al., 2013; Vittersø et al., 2019). Short supply chains can also promote the social and professional recognition of farmers (Mundler & Laughrea, 2016).
Economic sustainability
Benefits for producers
Economic aspects of food systems are also important in the context of sustainability. As the previous sections have already shown, local food systems do not necessarily provide consumers with more affordable food, but they may contribute to rural development and help create employment, which benefits rural populations. In this section, the focus is on whether local food systems help farmers increase their viability and profitability.
In the literature, economic benefits, too, are more often linked to short supply chains than to local food systems as such. While consumers can have a greater willingness to pay for local food (Printezis et al., 2019), there are indications that short supply chains can result in better prices for producers and that farmers can appropriate more added value and thereby improve their income, e.g. by selling part of their output in their own outlets to reduce costs while gaining reputation (Malak-Rawlikowska et al., 2019; Mancini et al., 2019; Schmitt et al., 2017). To the extent that they are successful and increase local financial flows, local food systems can also have positive multiplier effects on local economies and allow the exploitation of synergies with the tourism sector (Kneafsey et al., 2013; Mancini et al., 2019). The contribution of the food system to local economies of rural areas is limited, though, and other opportunities to drive rural change may be greater, such as the provision of better medical and transport services and of faster internet (OECD, 2020).
However, short supply chains usually rely on the commercialisation of high-quality agricultural products and on consumers’ readiness to pay more for products they know and trust because they understand the “real” costs of production (de Fazio, 2016; Galli & Brunori, 2015). Therefore, the demand for local products may be limited by the number of consumers who can afford to pay higher prices, or who are willing to do so, and eventually sales may stagnate due to plateauing consumer interest (Low et al., 2015). Due to the small scales of local systems and the sourcing of inputs through shorter supply chains, local producers may also be constrained in how much they can reduce their production costs (Deppermann et al., 2018; Kneafsey et al., 2013).
Trade benefits
For instance, in the European Union (EU), agricultural products account for about 8% of the Union’s total international trade, and over the last 10 years, its trade in agricultural products grew on average 5% per year, with exports growing faster than imports (EC, 2020b). In particular, agri-food products with protected “geographical indications” are profitable as their sales value is on average double that for similar but uncertified products—and more than 20% of their total worth of about €75 billion is generated through exports outside the Union (EC, 2020c). In contrast, little more than 50% of the products with geographical indications are sold within the country where they are produced (AND, 2021), and it is safe to assume that only a fraction of that is sold locally in the area where they are produced.
Serving local markets can benefit certain farmers, especially those who are located close to urban areas where there are enough consumers with the purchasing power and the willingness to pay for local premium products. However, other producers benefit from being able to sell quality products on regional and global markets, and they would suffer if they were limited to producing for their local area.