Psicothema, 2004. Vol. 16 (nº 4), 587-591
Fernando Carvajal, Ruth Vidriales, Sandra Rubio and Pilar Martín
Universidad Autónoma de Madrid
The present study examined the cognitive evaluation of facial expressions. We analysed the time taken by 32 participants (from the general population) to find the discrepant face in a set of photographs in which 31 faces were identical and only one was different. The variations in the faces involved identity and/or facial expression (happiness, anger, neutral, or combinations of these). The results showed that the happiness expression was detected fastest and that this expression was processed in a gestaltic way. They also showed that recognition of facial identity and recognition of expression were carried out by independent processing routes.
Effect of variations in facial expression and/or model identity in a face discrimination task.
Studies carried out with babies, children and adults emphasise the accuracy of the human perceptual system in the discrimination of facial cues (Carvajal and Iglesias, 2002; Olivares and Iglesias, 2000). Neuropsychological and psychophysiological data also provide evidence of different cerebral areas selectively implicated in the processing of facial information. Even so, it has proved more difficult to pin down the anatomical circuits that support facial expression and identity information than those underlying other neuropsychological systems.
It has been suggested that the right cerebral hemisphere is more accurate than the left in the processing of facial information (Borod, Koff, Yecker, Santschi and Schmidt, 1998; Schmitt, Hartje and Willmes, 1997). Data from patients with cerebral damage also show that facial identity recognition can be relatively independent of facial expression recognition. Parieto-occipital circuits are thought to be relatively more implicated in identity recognition, and frontal-temporal areas relatively more implicated in facial expression identification (Braun, Denault, Cohen and Rouleau, 1994). However, patients with lesions of the inferior parietal cortex and mesial anterior infracalcarine region are susceptible to facial expression disruptions (Adolphs, Damasio, Tranel and Damasio, 1996), and some cases of prosopagnosia have been associated with occipital-temporal lesions, but also with damage to mesial anterior and temporal areas (Damasio, Damasio and van Hoesen, 1982; Tovee and Cohen-Tovee, 1993).
In accordance with this view, ERP studies find differences in timing and scalp distribution: while matching for identity was associated with an N400 electrophysiological component over fronto-central areas, matching for expression was associated with a centro-parietal P300 response (Münte, Brack, Grootheer, Wieringa, Matzke and Johannes, 1998). Taking into account the ERP responses and the structures implicated in identity and expression matching of faces, it can be concluded that the two functions are executed by different neuronal systems.
Specifically, in relation to expression and emotion recognition processes, it has been demonstrated that some nuclei of the amygdalar complex are essential for processing the affective significance of certain emotional expressions, especially fear (Adolphs, Tranel, Damasio and Damasio, 1994; Braun et al., 1994; Phillips et al., 2001). Interconnections between the amygdala and the prefrontal cortex have also been found, which could reflect an adaptive ability to control our primitive emotional responses through conscious evaluation (Hariri, Mattay, Tessitore, Fera and Weinberger, 2003; Nomura et al., 2004).
With regard to hemispheric asymmetries, while some studies suggest the predominance of the right hemisphere in the identification and expression of all emotions (Anderson, Spencer, Fulbright and Phelps, 2000; Bowers, Blonder, Feinberg and Heilman, 1991), others indicate that the asymmetries depend on emotional valence. According to this view, negative emotions would be processed in the right hemisphere, while positive emotions would depend on left-hemisphere activity (Adolphs, Tranel and Damasio, 2001; Burton and Labar, 1999; Davidson, 1984, 1993; Kinsbourne and Bernardo, 1994; Lee, Loring, Dahl and Meador, 1993; Mandal, Borod, Asthana, Mohanty, Mohanty and Koff, 1999; Sutton and Davidson, 1997).
Finally, it should be pointed out that the asymmetries described for emotions are more evident in facial expression and in the recognition of emotions through the face, and are less clear when emotions are evaluated through prosody (Adolphs and Tranel, 1999; Adolphs et al., 2001; Anderson and Phelps, 1998; Dronkers, Pinder and Damasio, 2001).
From an ethological point of view, the importance of facial information is also emphasised, especially its role in facial expression recognition and in social interactions. Specifically, in a task that consisted of detecting a discrepant face in photographs of crowds of faces (the face-in-the-crowd effect), Hansen and Hansen (1988) found that an «angry» face in a crowd of «happy» faces was detected faster than a «happy» face in a crowd of «angry» faces. These results led the authors to conclude that facial expressions carrying a potential threat are processed more efficiently than others.
On the other hand, the results obtained by Hampton, Purcell, Bersine, Hansen and Hansen (1989) contradict these findings, as do those of Byrne and Eysenck (1995). Using neutral faces as distractor stimuli, they found that processing times were shorter when participants had to identify a happiness expression than when they had to identify an anger expression. The authors concluded that the facial expression of happiness is the easiest to identify, which could be attributed to the higher prevalence of this expression in social contexts.
In their research, Öhman et al. (Öhman, Hamm and Hugdahl, 2000; Öhman, Lundqvist and Esteves, 2001) used schematic drawings of faces, instead of photographs, that differed in the angles of the mouth and the eyebrows. Their results supported those obtained by Hansen and Hansen (1988), since the anger expression was the fastest to be detected. This effect was independent of the familiarity or novelty of the emotion, because the advantage was observed both when anger expressions were compared with configurations that are very common in social contexts (happiness and neutral expressions) and when they were compared with less frequent ones (sadness, or mixed expressions such as the eyebrows of an anger expression with the mouth of a happiness expression).
In this study, we set out first (as in Öhman et al., 2000, 2001) to evaluate which facial expression is detected most quickly, and whether facial processing is carried out in a gestaltic way or by attending to isolated elements. To do this, we used photographs (instead of drawings) of a model showing happiness, anger and neutral expressions, in an attempt to improve ecological validity.
Secondly, we tried to evaluate the supposed independence of facial identity processing and facial expression processing. One way to establish the relationship between cerebral functioning and behaviour is to use the correlations present between them. The anatomo-clinical method has frequently followed this approach, trying to correlate structural and functional measures (for example, Chiarello, Kacinick, Manowitz, Otto and Leonard, 2004; Marshall and Gurd, 2004). It has also been considered that correlations of regional cerebral blood flow or regional cerebral metabolism among brain regions reveal their functional connections (e.g., Horwitz, Duara and Rapoport, 1984; Young et al., 2003). Lastly, another classical method is to infer the relationship between two cognitive processes from the correlation established between two different measures of them (for example, Bates, Salcedo, Saygin and Pizzamiglio, 2003; Moulin, James, Freeman and Jones, 2004).
Taking this view into account, we used different crowds in which the stimuli could vary in three ways: in facial expression, in model identity, or in both. It is assumed that the correlation between answer latencies across these crowds constitutes a measure of the relationship between facial identity processing and facial expression processing.
Method
Subjects
A total of 32 volunteers took part in the study (16 males, 16 females), aged between 18 and 26 years (M= 21.37, Sx= 1.82).
Stimuli
We used thirteen «crowds», each consisting of 32 photographs of a single face arranged in 4 rows and 8 columns, with only one photograph different from the others. In ten of the crowds, photographs of a female model were used; in the other three, the photographs were faces of two male models.
To compose the first ten crowds, three photographs of one woman with different facial expressions (happiness, anger and neutral) were used. To these three basic expressions, mixed ones were added: four combinations of the three basic expressions, showing different expressions on the lower and upper parts of the face. In all these cases, one of the two parts of the face showed a happiness expression and the other showed anger or a neutral expression. Two of the ten crowds were composed of happiness expressions, in which the discrepant photograph was either an anger or a neutral face; four were composed of anger expressions, in which the discrepant photograph could be a happy face, a neutral one, or a combination of happiness and anger; and the other four were composed of neutral faces, with only one photograph showing happiness, anger, or a combination of a neutral face and happiness.
The last three crowds were composed of photographs of two male models posing a happiness or an anger expression. In two of these crowds, 31 photographs belonged to the same model, showing the happiness expression in all of them, and the discrepant photograph was either the same model with an anger expression or the other model with a happiness expression. In the last crowd, the discrepant photograph showed the second model posing anger.
Procedure
All the crowd stimuli were presented on a computer screen. Participants had to indicate whether all the photographs in the crowd were the same or whether one of them was different, pressing the space bar when they had the answer. Response times were recorded by the computer.
To control for whether the position of the discrepant photograph could affect answering speed, each stimulus was presented three times. In one presentation, the position of the discrepant photograph was always the same (row 3, column 7); in the other two, it varied randomly.
To the original 39 crowds (13 stimuli in three different presentations), 16 more were added. In these 16 new presentations, all the photographs were identical: twelve were composed of photographs of the female model (with a happiness, anger or neutral facial expression), and the other four of photographs of the first male model, always showing a happiness expression. The 55 crowds were presented in two different orders; half of the participants were shown the first order and the other half the second.
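To make the arithmetic of the design concrete, the following sketch enumerates how the 55 crowds arise from 13 discrepant stimuli presented in 3 position variants plus 16 all-identical crowds, with two counterbalanced orders. It is illustrative only; the stimulus labels are ours, not taken from the study.

```python
# Illustrative reconstruction of the 55-crowd presentation list:
# 13 stimuli with one discrepant photograph x 3 position variants,
# plus 16 crowds in which all photographs are identical.
from itertools import product
import random

different_stimuli = [f"stim_{i:02d}" for i in range(1, 14)]  # 13 crowds with a discrepant photo
positions = ["fixed_row3_col7", "random_a", "random_b"]      # 3 presentations per stimulus

same_crowds = (
    [f"female_{e}_{k}" for e in ("happiness", "anger", "neutral") for k in range(4)]  # 12 crowds
    + [f"male1_happiness_{k}" for k in range(4)]                                      # 4 crowds
)

trials = [f"{s}/{p}" for s, p in product(different_stimuli, positions)] + same_crowds
assert len(trials) == 55  # 13 x 3 + 16

# Two fixed presentation orders, counterbalanced across participants.
order_a = random.Random(1).sample(trials, len(trials))
order_b = random.Random(2).sample(trials, len(trials))
```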
Each participant was evaluated individually and, after the instructions, 12 training stimuli were presented. Each training crowd contained 32 drawings; in six of them all the drawings were the same, and in the other six one drawing was different from the others.
Results
When all the photographs were the same, participants did not commit any errors, and they answered correctly on 97.8% of the trials in which the crowd contained a photograph different from the others.
Using the Friedman test, and taking into consideration only the correct answers, we analysed possible variations in answering speed due to changes in the position of the discrepant photograph. No differences between the three types of presentation were found (the only significant result was for the anger-expression crowd containing one discrepant neutral-expression photograph, χ2(2, N= 26)= 6.87, p<.05, although subsequent comparisons did not find any difference). Since photograph position did not affect answer latency, the following analyses were carried out with the mean times for each stimulus (see Table 1).
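A position check of this kind can be reproduced with standard statistical tooling. The sketch below uses made-up latencies, not the study's data, and assumes one latency per participant for each of the three presentation positions.

```python
# Friedman test for the effect of the discrepant photograph's position:
# three related samples, one latency per participant per presentation.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
n = 26                                # participants with valid answers for this crowd
fixed    = rng.normal(9.0, 1.5, n)    # discrepant photo at row 3, column 7 (simulated, in s)
random_a = rng.normal(9.2, 1.5, n)    # first random position
random_b = rng.normal(9.1, 1.5, n)    # second random position

chi2, p = friedmanchisquare(fixed, random_a, random_b)
print(f"chi2(2, N={n}) = {chi2:.2f}, p = {p:.3f}")
```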
Two MANOVAs were then carried out, with answer latency as the dependent variable. The first compared the 10 stimuli that differed in the facial expression shown by the model; the second contrasted the answer latencies for the three stimuli that varied in facial expression and/or model identity.
The first analysis was significant (F(1,31)= 1058.28, p<.0001), and subsequent comparisons (Tukey's a) showed that participants identified more quickly: (1) a happiness expression in an anger crowd than an anger expression in a happiness crowd; (2) a happiness expression in a neutral-expression crowd than a neutral expression in a happiness crowd; and (3) a complete happiness expression than mixed expressions of happiness, when the distractor stimuli were angry or neutral faces.
The second analysis compared answer latencies for the three stimuli that varied in facial expression, in model identity, or in both. The results were also significant (F(1,31)= 738.79, p<.0001): longer reaction times were found when the model changed and the facial expression remained the same, and there were no significant differences between the other two stimuli (Tukey's a).
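A comparable analysis could be sketched as follows, using simulated data. It is an illustration only: a univariate repeated-measures ANOVA and SciPy's Tukey HSD stand in for the MANOVA and Tukey's a test reported above, and the variable names and latencies are ours.

```python
# Simulated repeated-measures analysis of answer latency across the three
# stimulus types (expression change, identity change, or both), followed by
# Tukey-style pairwise comparisons. Note: AnovaRM is univariate, and SciPy's
# tukey_hsd assumes independent samples, unlike the original analyses.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import tukey_hsd

rng = np.random.default_rng(1)
n = 32
latencies = {
    "expression":          rng.normal(8.0, 1.0, n),   # only the expression differs
    "identity":            rng.normal(10.0, 1.0, n),  # only the model differs (slower)
    "expression_identity": rng.normal(8.2, 1.0, n),   # both differ
}

df = pd.DataFrame(
    [{"subject": s, "stimulus": stim, "latency": lat[s]}
     for stim, lat in latencies.items() for s in range(n)]
)
print(AnovaRM(df, depvar="latency", subject="subject", within=["stimulus"]).fit())
print(tukey_hsd(*latencies.values()))
```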
Finally, the answer latencies for these last stimuli were correlated. The correlation between the stimulus that varied in facial expression and the one that varied in facial expression and model identity simultaneously was significant (r= .58, p<.01), but the correlation between the stimulus that varied in model identity and the one that varied in facial expression was not (r= .27, p= .13). Likewise, no correlation was found between the stimulus that varied in model identity and the one that varied in model identity and facial expression simultaneously (r= -.11, p= .52). When the answer latencies for these three stimuli were correlated with those for the first ten stimuli (those showing the same female model with different facial expressions), the mean correlations were lower for the stimulus that varied in identity (M= .23, Sx= .18) than for the stimulus that varied in both identity and facial expression (M= .48, Sx= .11) or for the stimulus that varied only in facial expression (M= .61, Sx= .12).
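The correlational logic can be illustrated with a short sketch. The simulated latencies below are modelled on, but do not reproduce, the pattern of values above: the expression-change and both-change latencies are built to covary, while the identity-change latencies are independent.

```python
# Correlations between answer latencies, used to gauge the (in)dependence
# of expression processing and identity processing (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 32
expression = rng.normal(8.0, 1.0, n)               # only the expression differs
identity = rng.normal(10.0, 1.0, n)                # only the model differs
both = 0.7 * expression + rng.normal(2.4, 0.7, n)  # both differ; shares variance with `expression`

pairs = [
    ("expression vs expression+identity", expression, both),
    ("identity vs expression", identity, expression),
    ("identity vs expression+identity", identity, both),
]
for label, a, b in pairs:
    r, p = pearsonr(a, b)
    print(f"{label}: r = {r:+.2f}, p = {p:.3f}")
```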
Discussion
The main objectives of this study, set out in the introduction, focused on emotional facial expression. In particular, our aim was to evaluate which facial expression is detected most quickly and which facial elements are most relevant in its processing and, finally, to explore the relationship between facial expression processing and facial identity processing.
The main results of the study were: (1) the happiness facial expression was the fastest to be detected; (2) detection of happiness expressions was carried out in a gestaltic way, and not by processing single elements of the face; and (3) facial identity recognition and facial expression identification were carried out by two independent processes.
In contrast to the argument described in the introduction, the first result does not replicate those obtained by Öhman et al. (2000, 2001). On the contrary, it confirms studies emphasising that happiness expressions are the easiest to identify (Byrne and Eysenck, 1995; Hampton et al., 1989; Harrison, Gorelczenko and Cook, 1990; Wagner, MacDonald and Manstead, 1986). It would seem that the threatening features of a facial expression do not determine processing speed. It is possible that the role this expression plays in the social context is the key element (taking into account its frequency and its affiliative function). It is also possible that, unlike expressions such as happiness (in which the facial component is predominant), in expressions such as anger other features, such as vocal or bodily behaviours, carry more weight; alternatively, this importance could be attributed to contextual variables (Caballero, Carrera, Sánchez, Muñoz and Blanco, 2003; Carrera and Fernández-Dols, 1994; Galati and Lavelli, 1997).
The second result showed that participants took longer to reach a decision when the discrepant photograph was a mixed configuration (happiness in only one half of the face) than when it was a complete happiness expression. Although attentional mechanisms can act on physical properties independently, or on categorical information, depending on task requirements amongst other factors (Funes and Lupiáñez, 2003; Roselló and Munar, 2004), on the basis of these results, and in agreement with other authors (Donnelly and Davidoff, 1999; Seitz, 2002), we conclude that differentiation of the happiness facial expression is carried out in a gestaltic way and not primarily by processing isolated facial elements. If the decision had been based only on the information shown in one half of the face, answer latencies should have been the same regardless of the information shown in the other half.
With respect to the role of model identity in relation to facial expression, the results showed that the correlations between reaction times were significant for the tasks in which participants had to differentiate between facial expressions, but not for the task in which they had to discriminate between model identities. Furthermore, identification of the discrepant photograph was faster when the facial expression varied than when the model changed. When participants had to differentiate between two different emotions, answer latencies were similar regardless of who showed the expression (the same person or two different people); however, participants spent more time discriminating between two models showing the same expression. It would seem, then, that facial expression recognition and model identity recognition use two different procedures, and that facial expression processing is the faster. Beyond emphasising the independence of the two processes, we think this result may have implications for other tasks in which differentiation of identity is required, because participants can base their final answer on features such as facial expression rather than on the identity of the person. On the other hand, we cannot conclude that facial identity processing is carried out in a gestaltic way (as happens in expression processing), because, in line with the main objectives of the study, the procedure only allowed us to analyse which facial elements were relevant in facial expression processing, not whether these elements were also relevant in facial identity processing.
In conclusion, we want to point out that this study is the first step in an investigation of cerebral specialisation in facial information processing. Our next objective is to validate these results against data obtained from neurological patients with focal damage and from studies of cerebral activity using functional neuroimaging techniques.
Acknowledgements
This research was supported by Universidad Autónoma de Madrid (PD 15-541-9-640 project).
References

Adolphs, R., Damasio, H., Tranel, D. and Damasio, A. (1996). Cortical systems for the recognition of emotion in facial expressions. Journal of Neuroscience, 16, 7678-7687.
Adolphs, R. and Tranel, D. (1999). Intact recognition of emotional prosody following amygdala damage. Neuropsychologia, 37, 1285-1292.
Adolphs, R., Tranel, D. and Damasio, H. (2001). Emotion recognition from faces and prosody following temporal lobectomy. Neuropsychology, 15, 396-404.
Adolphs, R., Tranel, D., Damasio, H. and Damasio, A.R. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372, 669-672.
Anderson, A.K. and Phelps, E.A. (1998). Intact recognition of vocal expressions of fear following bilateral lesions of the human amygdala. NeuroReport, 9, 3607-3613.
Anderson, A.K., Spencer, D.D., Fulbright, R.K. and Phelps, E.A. (2000). Contribution of the anteromedial temporal lobes to the evaluation of facial emotion. Neuropsychology, 14, 526-536.
Bates, E., Salcedo, J., Saygin, A.P. and Pizzamiglio, L. (2003). Quantifying dissociations in neuropsychological research. Journal of Clinical and Experimental Neuropsychology, 25, 1128-1153.
Borod, J.C., Koff, E., Yecker, S., Santschi, C. and Schmidt, J.M. (1998). Facial asymmetry during emotional expression: gender, valence and measurement technique. Neuropsychologia, 36, 1209-1215.
Bowers, D., Blonder, L.X., Feinberg, T. and Heilman, K.M. (1991). Differential impact of right and left hemisphere lesions on facial emotion and object imagery. Brain, 114, 2593-2609.
Braun, C.M., Denault, C., Cohen, H. and Rouleau, I. (1994). Discrimination of facial identity and facial affect by temporal and frontal lobectomy patients. Brain and Cognition, 24, 198-212.
Burton, L.A. and Labar, D. (1999). Emotional status after right vs left temporal lobectomy. Seizure, 8, 116-119.
Byrne, A. and Eysenck, M.W. (1995). Trait anxiety, anxious mood, and threat detection. Cognition and Emotion, 9, 549-562.
Caballero, A., Carrera, P., Sánchez, F., Muñoz, D. and Blanco, A. (2003). La experiencia emocional como predictor de los comportamientos de riesgo. Psicothema, 15, 427-432.
Carrera, P. and Fernández-Dols, J.M. (1994). Neutral faces in context: their emotional meaning and their function. Journal of Nonverbal Behavior, 18, 281-299.
Carvajal, F. and Iglesias, J. (2002). Face-to-face emotion interaction studies in Down Syndrome infants. International Journal of Behavioral Development, 26, 104-112.
Chiarello, C., Kacinick, N., Manowitz, B., Otto, R. and Leonard, C. (2004). Cerebral asymmetries for Language: Evidence for structural-behavioral correlations. Neuropsychology, 18, 219-231.
Damasio, A.R., Damasio, H. and van Hoesen, G.W. (1982). Prosopagnosia: anatomical basis and neurobehavioral mechanisms. Neurology, 32, 331-341.
Davidson, R.J. (1984). Affect, cognition and hemispheric specialization. In C.E. Izard, J. Kagan and R. Zajonc (Eds.), Emotions, cognition and behavior (pp. 320-365). Cambridge: Cambridge University Press.
Davidson, R.J. (1993). Parsing affective space: perspectives from Neuropsychology and Psychophysiology. Neuropsychology, 7, 464-475.
Donnelly, N. and Davidoff, J. (1999). The mental representations of faces and houses: Issues concerning parts and wholes. Visual Cognition, 6, 319-343.
Dronkers, N.F., Pinder, S. and Damasio, A. (2001). Lenguaje y afasias. In E.R. Kandel, J.H. Schwartz and T.M. Jessell (Eds.), Principios de Neurociencia (4th ed., pp. 1169-1187). Madrid: McGraw-Hill.
Funes, M.J. and Lupiáñez, J. (2003). La teoría atencional de Posner: una tarea para medir las funciones atencionales de orientación, alerta y control cognitivo y la interacción entre ellas. Psicothema, 15, 260-266.
Galati, D. and Lavelli, M. (1997). Neonate and infant emotion expression perceived by adults. Journal of Nonverbal Behavior, 21, 57-83.
Hampton, C., Purcell, D.G., Bersine, L., Hansen, C.H. and Hansen, R.D. (1989). Probing «pop-out»: Another look at the face-in-the-crowd effect. Bulletin of the Psychonomic Society, 27, 563-566.
Hansen, C.H. and Hansen, R.D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54 (6), 917-924.
Hariri, A.R., Mattay, V.S., Tessitore, A., Fera, F. and Weinberger, D.R. (2003). Neocortical modulation of the amygdala response to fearful stimuli. Biological Psychiatry, 53, 494-501.
Harrison, D.W., Gorelczenko, P.M. and Cook, J. (1990). Sex differences in the functional asymmetry for facial affect perception. International Journal of Neuroscience, 52, 11-16.
Horwitz, B., Duara, R. and Rapoport, S.I. (1984). Intercorrelations of glucose metabolic rates between brain regions: application to healthy males in a state of reduced sensory input. Journal of Cerebral Blood Flow Metabolism, 4, 484-499.
Kinsbourne, M. and Bernardo, G. (1994). Bases neurológicas de los trastornos de atención. In N. Fejerman et al. (Eds.), Autismo infantil y otros trastornos del desarrollo (pp. 133-148). Buenos Aires: Paidós.
Lee, G., Loring, D., Dahl, J. and Meador, K. (1993). Hemispheric specialization for emotional expression. Neuropsychiatry, Neuropsychology and Behavioural Neurology, 6, 143-148.
Mandal, M.K., Borod, J.C., Asthana, H.S., Mohanty, A., Mohanty, S. and Koff, E. (1999). Effects of lesion variables and emotion type on the perception of facial emotion. The Journal of Nervous and Mental Disease, 187, 603-609.
Marshall, J.C. and Gurd, J.M. (2004). On the anatomo-clinical method. Cortex, 40, 230-231.
Moulin, C.J.A., James, N., Freeman, J.E. and Jones, R.W. (2004). Deficient acquisition and consolidation: intertrial free recall performance in Alzheimer’s disease and mild cognitive impairment. Journal of Clinical and Experimental Neuropsychology, 26, 1-10.
Münte, T.F., Brack, M., Grootheer, O., Wieringa, B.M., Matzke, M. and Johannes, S. (1998). Brain potentials reveal the timing of face identity and expression judgements. Neuroscience Research, 30, 25-43.
Nomura, M., Ohira, H., Haneda, K., Iidaka, T., Sadato, N., Okada, T. and Yonekura, Y. (2004). Functional association of the amygdala and ventral prefrontal cortex during cognitive evaluation of facial expressions primed by masked angry faces: an event-related fMRI study. Neuroimage, 21, 352-363.
Öhman, A., Hamm, A.O. and Hugdahl, K. (2000). Cognition and the autonomic nervous system: orienting, anticipation and conditioning. In J.T. Cacioppo, L.G. Tassinary and G.G. Berntson (Eds.), Handbook of psychophysiology (pp. 533-575). New York: Cambridge University Press.
Öhman, A., Lundqvist, D. and Esteves, F. (2001). The face in the crowd revisited: a threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80 (3), 381-396.
Olivares, E. and Iglesias, J. (2000). Bases neurales de la percepción y el reconocimiento de caras. Revista de Neurología, 30, 946-952.
Phillips, M.L., Medford, N., Young, A.W., Williams, L., Williams, S.C., Bullmore, E.T. et al. (2001). Time courses of left and right amygdalar responses to fearful facial expressions. Human Brain Mapping, 12, 193-202.
Roselló, J. and Munar, E. (2004). Resolviendo el puzzle de la atención visual: ¿Hacia la desintegración del «homúnculo»? Psicothema, 16, 64-69.
Schmitt, J.J., Hartje, W. and Willmes, K. (1997). Hemispheric asymmetry in the recognition of emotional attitude conveyed by facial expression, prosody, and propositional speech. Cortex, 33, 65-81.
Seitz, K. (2002). Parts and wholes in person recognition: developmental trends. Journal of Experimental Child Psychology, 82, 367-381.
Sutton, S.K. and Davidson, R.J. (1997). Prefrontal brain asymmetry: A biological substrate of the behavioural approach and inhibition systems. Psychological Science, 8, 204-210.
Tovee, M. and Cohen-Tovee, E.M. (1993). The neural substrates of face-processing models: a review. Cognitive Neuropsychology, 10, 505-528.
Wagner, H.L., MacDonald, C.J. and Manstead, A.S.R. (1986). Communication of individual emotions by spontaneous facial expressions. Journal of Personality and Social Psychology, 50, 737-743.
Young, J.P., Geyer, S., Grefkes, C., Amunts, K., Morosan, P., Zilles, K. and Roland, P.E. (2003). Regional cerebral blood flow correlations of somatosensory areas 3a, 3b, 1 and 2 in humans during rest: a PET and cytoarchitectural study. Human Brain Mapping, 19, 183-196.