How facial aging affects perceived gender: Insights from maximum likelihood conjoint measurement. Daniel Fitousi. Journal of Vision, November 2021, Vol. 21, 12. https://doi.org/10.1167/jov.21.12.12
Abstract: Conjoint measurement was used to investigate the joint influence of facial gender and facial age on perceived gender (Experiment 1) and perceived age (Experiment 2). A set of 25 faces was created, covarying independently five levels of gender (from feminine to masculine) and five levels of age (from young to old). Two independent groups of observers were presented with all possible pairs of faces from this set and judged which member of the pair appeared more masculine (Experiment 1) or older (Experiment 2). Three nested models of the contribution of gender and age to judgment (i.e., independent, additive, and saturated) were fit to the data using maximum likelihood. The results showed that both gender and age contributed to the perceived gender and age of the faces according to a saturated observer model. In judgments of gender (Experiment 1), female faces were perceived as more masculine as they became older. In judgments of age (Experiment 2), young faces (age 20 and 30) were perceived as older as they became more masculine. Taken together, the results entail that: (a) observers integrate facial gender and age information when judging either of the dimensions, and (b) cues for femininity and cues for aging are negatively correlated. This correlation exerts a stronger influence on female faces, and can explain the success of cosmetics in concealing signs of aging and exaggerating sexually dimorphic features.
General discussion
I find that facial gender and age are not perceived independently of each other. For 14 of 16 observers, judgments of gender (or age) were contaminated by age (or gender), in accordance with a saturated observer model. Generally, the results suggest that female faces are perceived as more masculine as they become older, and young faces (age 20 and 30) are judged as older as they become more masculine. Why do aging and gender interact? The answer is rooted in the perceptual structure of the faces themselves. Perception of facial gender and age relies on shape and texture cues (Brown & Perrett, 1993; Bruce & Langton, 1994; Burton et al., 1993). The correlations between these phenotypic aspects can be readily demonstrated in the present set of synthetic face stimuli (Figure 1), and they are likely present in real faces.
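For readers less familiar with MLCM, the three nested observer models tested here can be stated compactly. The following is the standard formulation of the framework (after Knoblauch et al., 2014), with notation introduced for this summary rather than quoted from the Methods: on each trial the observer compares internal scale values of the two faces, perturbed by Gaussian noise.

```latex
% Standard MLCM decision model (notation introduced here for summary purposes).
% \psi(g, a): internal scale value of a face with gender level g and age level a.
\[
  \Delta = \psi(g_i, a_j) - \psi(g_k, a_l) + \epsilon,
  \qquad \epsilon \sim \mathcal{N}(0, \sigma^2),
\]
\[
  P(\text{first face judged more masculine}) =
  \Phi\!\left( \frac{\psi(g_i, a_j) - \psi(g_k, a_l)}{\sigma} \right),
\]
\[
  \psi(g, a) =
  \begin{cases}
    \psi_G(g) & \text{independent observer,} \\
    \psi_G(g) + \psi_A(a) & \text{additive observer,} \\
    \psi_{GA}(g, a) & \text{saturated observer (full interaction).}
  \end{cases}
\]
```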
Facial aging is conveyed by morphological cues (Berry & McArthur, 1986; Burt & Perrett, 1995; George & Hole, 2000; O’Toole et al., 1997) such as a) an increase in the size of the jaw, b) thinning of the lips, and c) an increase in the distance between the eyebrows and the eyes. Textural cues for aging particularly affect the skin: a) skin tone becomes darker, b) wrinkles become more numerous, c) luminance contrast decreases, and d) pigmentation becomes yellower. Many of these shape and texture cues also signal masculine features (Brown & Perrett, 1993; Bruce & Langton, 1994; Burton et al., 1993; Russell, 2009). Men have bigger jaws and thinner lips, and their eyebrows sit closer to their eyes than women’s; moreover, they tend to have darker skin with lower contrast (Russell, 2010; Tarr et al., 2001). The upshot is that many of the features that serve as cues for old age are also signs of masculinity. The current study shows that cues for age and for gender exert their strongest interactive influence when faces are either young or feminine.
A comment is in order regarding the relation between skin lightness and gender in the current experimental setting. Despite my efforts to eliminate the correlation between skin lightness and gender, the feminine faces created by FaceGen had slightly lighter skin than the masculine faces. One may argue that this undermines the current conclusions because observers could have used skin lightness as a cue for gender. However, such a confound may reflect an ecologically valid cue, because a) in real populations, female skin reflectance is 2 to 3 percentage points higher than that of male skin (Rahrovan et al., 2018), and b) FaceGen relies on a representative sample of real people (Inversions, 2008). Moreover, Macke and Wichmann (2010) also attempted to remove textural cues for gender (including lightness), but it seems that these authors could not prevent this built-in confound. In the caption to their Figure 1, they admit that: “For some men with strong beard growth, like the gentlemen in the rightmost column, this meant that there was a slightly darker region around the mouth – at least from an introspective point of view a reasonable cue to gender” (p. 6). The upshot is that it is difficult to equate experimentally the skin lightness of feminine and masculine faces due to natural differences. Future studies may be able to circumvent this confound, but then an issue may arise as to whether such faces reflect the statistical structure of real-world faces.
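For concreteness, a lightness confound of this kind can be quantified directly from the stimulus images. The following Python sketch compares mean CIELAB lightness (L*) between the extreme gender levels; the file names and the central-crop approximation of the skin region are hypothetical, while the scikit-image calls are standard.

```python
# Minimal sketch: quantify a skin-lightness confound across gender levels.
# File-naming scheme ("face_g{gender}_a{age}.png") and the central crop used to
# approximate the skin region are hypothetical, not taken from the study.
import numpy as np
from skimage import io, color

def mean_skin_lightness(path):
    """Mean CIELAB L* over a central crop of the face image."""
    rgb = io.imread(path)[..., :3] / 255.0          # drop alpha channel if present
    lab = color.rgb2lab(rgb)
    h, w = lab.shape[:2]
    return lab[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4, 0].mean()  # L* channel

# Pool over the five age levels at the most feminine and most masculine levels.
feminine = [mean_skin_lightness(f"face_g1_a{a}.png") for a in range(1, 6)]
masculine = [mean_skin_lightness(f"face_g5_a{a}.png") for a in range(1, 6)]
print(f"feminine L*: {np.mean(feminine):.1f}, masculine L*: {np.mean(masculine):.1f}")
```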
Evolution, cosmetics, and facial aging
From an evolutionary standpoint, the current findings make sense. Fertility in young females may be signaled by cues for femininity and cues for age. The correlation between the two types of cues leads to informational redundancy that increases the chances that information about fertility is transmitted efficiently and correctly to potential mates. This idea can also explain the success of cosmetics (Russell, 2010) and their higher prevalence among women (Etcoff et al., 2011; Russell, 2009). Sexual attractiveness and anti-aging are two main goals of the cosmetics industry, and the current study can explain why. Signals of femininity are positively correlated with attractiveness (O’Toole et al., 1998) and, as shown here, are also negatively correlated with age. This finding can explain the biological incentive for using cosmetics not only to highlight sexually dimorphic attributes of femininity, but also to conceal cues for old age: both serve as signals of fertility and are expressed through the same facial cues. For example, Russell (2009) demonstrated the existence of a sex difference in facial contrast that affects the perception of gender. Female faces have greater luminance contrast between the eyes, the lips, and the surrounding skin than male faces. Russell (2009) showed that cosmetics consistently increase facial contrast and thus function to exaggerate feminine features and, consequently, attractiveness. Notably, skin contrast also differs between young and old faces and serves as a cue for age (Berry & McArthur, 1986; Burt & Perrett, 1995; George & Hole, 2000; O’Toole et al., 1997): lower levels of contrast signal old age. Thus, cosmetics not only exaggerate sexually dimorphic attributes but also decrease perceived age. Etcoff et al. (2011) found that the influence of cosmetics goes even further, exerting dramatic positive effects on judgments of competence, likability, and trustworthiness.
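The facial-contrast measure at the heart of this argument is straightforward to compute. The sketch below illustrates a Michelson-style luminance contrast between a feature region (eyes or lips) and the surrounding skin; the region masks are placeholders for a separate segmentation step, and Russell's (2009) exact formula and procedure may differ.

```python
# Illustrative facial-contrast computation in the spirit of Russell (2009):
# luminance contrast between a facial feature and the surrounding skin.
# The boolean region masks are placeholders for a real segmentation step.
import numpy as np

def michelson_contrast(luminance, feature_mask, skin_mask):
    """Michelson contrast between mean feature and mean skin luminance."""
    lf = luminance[feature_mask].mean()
    ls = luminance[skin_mask].mean()
    return abs(ls - lf) / (ls + lf)

# Hypothetical stand-in data: higher values indicate greater feature/skin
# contrast, which Russell associates with femininity and which cosmetics raise.
luminance = np.random.rand(256, 256)
eye_mask = np.zeros((256, 256), dtype=bool); eye_mask[80:100, 70:110] = True
skin_mask = np.zeros((256, 256), dtype=bool); skin_mask[130:180, 60:200] = True
print(f"eye/skin contrast: {michelson_contrast(luminance, eye_mask, skin_mask):.3f}")
```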
Nonveridical perception of facial gender and age
The present study reveals that facial age and gender are not perceived veridically, but are subject to major influences of context. Context here refers to the contamination of each dimension by the other. In this sense, each face has a specific gender (age) level that sets a unique context for the perception of its age (gender). This finding is in accordance with the aforementioned effects of cosmetics on perceived gender (Etcoff et al., 2011; Russell, 2009), and also with several recent adaptation studies showing that the appearance of both age (O’Neil & Webster, 2011) and gender (Schweinberger et al., 2010) can be altered through adaptation to a previous face. For example, a gender-neutral face appears male after adaptation to a female face (Schweinberger et al., 2010). Similarly, adapting to an old face causes faces of intermediate age to appear younger (O’Neil & Webster, 2011). These context effects imply that the internal representations that govern facial age and gender are dynamic and are sensitive to previous experience and to correlational structures in the faces themselves. I have recently proposed a ‘face file’ approach to face recognition (Fitousi, 2017a, 2017b), which assumes that faces are stored as temporary episodic representations with detailed featural information about the face’s gender, age, identity, and emotion. These features are bound to each other (e.g., male + young) and can be updated momentarily. Face files can thus account for the context-dependent nature of facial age and gender (Fitousi, 2017a, 2017b).
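To make the face-file idea concrete, the toy sketch below represents a face as an episodic record that binds featural values and can be updated from moment to moment. The field names and update rule are illustrative inventions, not the formal machinery of Fitousi (2017a, 2017b).

```python
# Toy sketch of a "face file": a temporary episodic record binding featural
# information (gender, age, identity, emotion). Fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FaceFile:
    identity: str
    features: dict = field(default_factory=dict)

    def bind(self, dimension: str, value: str) -> None:
        """Bind (or momentarily update) a feature value within the episode."""
        self.features[dimension] = value

ff = FaceFile(identity="face_17")
ff.bind("gender", "male")
ff.bind("age", "young")        # bound conjunction: male + young
ff.bind("age", "middle-aged")  # momentarily updated as evidence changes
print(ff.features)             # {'gender': 'male', 'age': 'middle-aged'}
```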
Age and gender are essential for what social scientists call person ‘construal’ (Bodenhausen & Macrae, 1998; Fiske & Neuberg, 1990; Freeman & Ambady, 2011; Macrae & Bodenhausen, 2001), the process by which social agents construct coherent representations of themselves and others. These representations are used by observers to guide information processing and information generation toward others. According to the dynamic interactive model of Freeman and Ambady (2011), the initial presentation of a face launches simultaneous activation of several competing social categories (e.g., age, gender, race). As evidence accrues, the pattern of activation gradually sharpens into a clear interpretation (e.g., young female), while the alternatives are inhibited. According to this framework, a confluence of perceptual (bottom-up) and cognitive–social (top-down) factors can generate various types of interactions among social facial dimensions such as facial age and gender. The dynamic–interactive model can account for a large body of research documenting interactive patterns in face categorization, including the current findings (Cloutier et al., 2014; Freeman et al., 2012; Johnson et al., 2012). One crucial goal made explicit by the dynamic–interactive model is the need to distinguish between lower (perceptual) and higher (stereotypes, attitudes, expectations) sources of bias in face categorization (Becker et al., 2007). The former are yielded by correlated phenotypic traits in the sensory cues themselves (e.g., skin texture cues that signal both gender and age), whereas the latter are generated by learned associations or social expectations that reside in the ‘head’ of the observer.
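A deliberately simplified numerical caricature may clarify the dynamic–interactive idea: two category nodes accumulate bottom-up evidence while inhibiting each other until one interpretation dominates. The update rule and parameters below are invented for illustration and are not Freeman and Ambady's (2011) implementation.

```python
# Caricature of dynamic-interactive category competition: two gender nodes
# receive bottom-up evidence and mutually inhibit one another until the
# activation pattern sharpens. All parameters are invented for illustration.
import numpy as np

def compete(evidence, inhibition=0.3, rate=0.1, steps=200):
    act = np.zeros(2)                              # activations: [female, male]
    for _ in range(steps):
        net = evidence - inhibition * act[::-1]    # cross-inhibition from rival
        act = np.clip(act + rate * (net - act), 0.0, 1.0)
    return act

# Ambiguous but slightly feminine input; correlated age cues would enter the
# model by biasing this bottom-up evidence vector (a perceptual source of bias).
activations = compete(evidence=np.array([0.6, 0.5]))
print(dict(zip(["female", "male"], np.round(activations, 3))))
```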
The integral/separable distinction and MLCM
The application of the MLCM approach (Knoblauch et al., 2014) to psychological dimensions raises a caveat concerning a more general issue in psychology—the concept of perceptual independence. Garner proposed a fundamental distinction between integral and separable dimensions (Garner, 1962, 1970, 1974a, 1974b, 1976, 1991). This distinction is a pillar of modern cognitive science (for a review, see Algom & Fitousi, 2016). Objects made of integral dimensions, such as hue and saturation, are perceived in their totality and cannot be readily decomposed into their constituent dimensions. Objects made of separable dimensions, such as shape and color, can be readily so decomposed. The integral–separable distinction cannot be decided based on the verdict of only one procedure; there is a risk that a theoretical concept (e.g., separability) would be merely a restatement of the empirical result (Fitousi, 2015; Von Der Heide et al., 2018).
To avoid such circular reasoning, Garner noted the need for converging operations (Garner et al., 1956). Several methodologies have been used to support the integral–separable distinction: a) Garner’s speeded classification task (Garner, 1974b), b) similarity scaling (Attneave, 1950; Melara, 1992), c) information theory (Fitousi, 2013; Garner, 1962; Garner & Morton, 1969), d) general recognition theory (GRT; Ashby & Townsend, 1986; Fitousi, 2013; Maddox & Ashby, 1996; Townsend et al., 2012), and e) systems factorial technology (SFT; Townsend & Nozawa, 1995). Take method b), for example, in which observers are asked to rate the similarity of two objects (Hyman & Well, 1967). It has often been found that for integral objects similarity is computed according to a Euclidean distance metric, whereas for separable objects similarity is computed according to a city-block distance metric (Melara, 1992). It has also been shown that the outcome of the similarity procedure accords well with the results of the Garner task (Algom & Fitousi, 2016).
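The two metrics are the r = 1 and r = 2 cases of the Minkowski distance, as the short sketch below makes explicit (coordinates hypothetical):

```python
# Minkowski distance between two stimuli in a two-dimensional space:
# r = 1 yields the city-block metric (associated with separable dimensions),
# r = 2 yields the Euclidean metric (associated with integral dimensions).
def minkowski(x, y, r):
    return sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)

stim_a, stim_b = (1.0, 4.0), (4.0, 0.0)   # e.g., coordinates on hue, saturation
print(minkowski(stim_a, stim_b, 1))       # city-block: 7.0
print(minkowski(stim_a, stim_b, 2))       # Euclidean:  5.0
```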
Recently, Rogers et al. (2016) and Rogers et al. (2018) proposed that MLCM can be used as a converging operation on the notion of integrality–separability. A case in point in their studies is the pair of color dimensions chroma and lightness (Munsell, 1912). In the Garnerian tradition, these are considered classic integral dimensions: a) they produce Garner interference (Garner & Felfoldy, 1970), and b) they obey a Euclidean distance metric in similarity scaling (Burns & Shepp, 1988). If the dimensions are indeed dependent in processing, then an additive or saturated observer MLCM model should best describe the data. Rogers et al. (2016) found that the additive observer model best described the data: lightness contributed negatively to the perception of chroma for red, blue, and green hues (but not for yellow). These results are important because they demonstrate the utility of MLCM in providing converging evidence on the notion of integrality–separability and in identifying the internal representations that govern color dimensions. They are also highly informative in uncovering the specific pattern of dimensional interaction; one would have expected integral dimensions to be best fit by a saturated observer model rather than an additive one. Hence, applying multiple related methodologies to questions of perceptual independence is of great practical and theoretical importance in sharpening and explicating our concepts.
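In practice, the nested observer models are compared with likelihood-ratio tests. A generic sketch follows, assuming the maximized log-likelihoods have already been obtained from a fitting routine (the numerical values are hypothetical):

```python
# Generic likelihood-ratio test between nested observer models (e.g., additive
# vs. saturated). Log-likelihoods are assumed to come from a separate fitter.
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, df_diff):
    """LR statistic and p-value for rejecting the reduced model."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df_diff)

# Hypothetical values for illustration only.
stat, p = lr_test(loglik_reduced=-512.3, loglik_full=-498.7, df_diff=16)
print(f"LR statistic = {stat:.1f}, p = {p:.4f}")
```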
The Garnerian edifice is rich in theoretical insights that can illuminate issues in MLCM, and vice versa; this can lead to a cross-fertilization of the two methods. For example, an important caveat raised in the Garnerian tradition concerns the direction of interaction between a pair of dimensions. Integrality is not a symmetric concept: dimension A can be integral with dimension B, while dimension B is separable from dimension A (Fitousi & Algom, 2020). This notion can be readily applied to studies in MLCM. When judging dimension A and ignoring dimension B, observers may conform to a fully independent observer model; yet when judging dimension B and ignoring dimension A, the same observers may conform to an additive or saturated observer model. Moreover, Garnerian theorizing highlights the role of relative discriminability in determining the direction of asymmetry (Melara & Mounts, 1993). Often the more discriminable dimension intrudes on the less discriminable dimension (Fitousi & Algom, 2006), and it has been shown that relative discriminability can be manipulated by the researcher to determine the direction of interaction. Therefore, to provide a fair test of independence, the dimensions should be equally discriminable (Algom et al., 1996). These factors might also be important in MLCM modeling.
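Equating discriminability is typically handled by matching d′ across the two dimensions in a pilot study. A standard computation is sketched below (the hit and false-alarm rates are hypothetical):

```python
# Standard d' computation from hit and false-alarm rates, as one might use in
# a pilot study to equate the discriminability of two dimensions.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical pilot rates: gender here is more discriminable than age, so the
# stimulus levels of one dimension would be adjusted until the two d's match.
print(f"gender d' = {d_prime(0.90, 0.10):.2f}")   # ~2.56
print(f"age    d' = {d_prime(0.75, 0.25):.2f}")   # ~1.35
```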
Future work should test in detail the exact relations between the notions of integrality–separability in the Garner tradition and the notions of MLCM. It is not immediately clear, for example, that independence means the same thing in the two approaches. When the dimensions of facial age and gender were subjected to the Garner test (Garner, 1974b) by Fitousi (2020), they were found to be separable; yet the application of MLCM to the same dimensions here supports their dependency. Why can age and gender appear as separable dimensions in the Garner paradigm and as integral or interactive dimensions (Algom et al., 2017; Algom & Fitousi, 2016) in MLCM? The solution to this puzzle comes from assuming that perceptual independence is not a unitary concept, but rather a nomenclature pointing to various types of independence (Ashby & Townsend, 1986; Fitousi, 2013, 2015; Fitousi & Wenger, 2013). This idea was originally developed by Garner and Morton (1969) and Ashby and Townsend (1986). It seems that conjoint measurement gauges different types of independence than the Garner paradigm does. Future studies may clarify the relations between these two approaches.