Emotion concepts shape experience
Chapter 3 endnote 9, from Lisa Feldman Barrett.
For context, from the chapter:
[After semantic satiation], we immediately showed subjects two wordless faces side by side as before. Their performance dropped to a dismal 36 percent: nearly two-thirds of their yes/no decisions were incorrect!
In addition to the experiments described in chapter 3, we ran a final set of experiments to show that emotion concepts allow people to simulate, and therefore to perceive, the “expression” they actually see in a face. (More precisely, they construct their perception of emotion in a face.) These experiments were led by a former member of my lab, Maria Gendron, for her master’s thesis. The methods are more intricate than those of the other experiments discussed in the chapter, so I omitted these experiments from the book, but here are the details.
On each experimental trial, we selected a basic-emotion-style, posed stereotype (e.g., a photograph of a scowling face) and showed it to the subject twice in a row. On some trials, subjects performed semantic satiation with an emotion word (repeating it 30 times) before seeing the face for the first time; on other trials, they repeated a control word. After the first viewing, a few seconds elapsed, and then subjects viewed the same face a second time (without semantic satiation).
When subjects see the same image several times in a row in an experiment, they are very fast to react to it, much faster than when they view two different images. Subjects who satiated a control word (“trust, trust, trust…”) were very quick to respond to the second showing of the face, indicating that they had constructed the same perception of the image both times, exactly as we would expect. Subjects who satiated an emotion word (“anger, anger, anger…”) were slow to respond to the second showing of the face, indicating that they constructed two different perceptions of the face (one perception when the emotion word was satiated, and a different perception when the word’s meaning was accessible). Without access to the relevant emotion concept, test subjects could not launch the relevant simulation to make meaning of the photo, and therefore they did not see emotion in the face.
This evidence suggests that emotion concepts don’t merely shape our judgments but actually shape our experience. Brain imaging studies now support this observation: a recent meta-analysis shows that when emotion words are not provided within an experiment (compared to when they are), there is more amygdala activity. Amygdala response is usually associated with unexpected or novel input, such as input that has not been simulated. Other experiments have likewise interfered with words in the basic emotion method and thereby disrupted emotion perception.
Notes on the Notes
- Gendron, Maria, Kristen A. Lindquist, Lawrence Barsalou, and Lisa Feldman Barrett. 2012. "Emotion words shape emotion percepts." Emotion 12 (2): 314–325.
- Fox, Christopher J., So Young Moon, Giuseppe Iaria, and Jason J. S. Barton. 2009. "The correlates of subjective perception of identity and expression in the face network: An fMRI adaptation study." NeuroImage 44 (2): 569–580.
- Thielscher, Axel, and Luiz Pessoa. 2007. "Neural correlates of perceptual choice and decision making in fear–disgust discrimination." Journal of Neuroscience 27 (11): 2908–2917.
- Brooks, Jeffrey A., Holly Shablack, Maria Gendron, Ajay B. Satpute, Michael H. Parrish, and Kristen A. Lindquist. 2017. "The role of language in the experience and perception of emotion: A neuroimaging meta-analysis." Social Cognitive and Affective Neuroscience 12 (2): 169–183.
- McNally, Gavan P., Joshua P. Johansen, and Hugh T. Blair. 2011. "Placing prediction into the fear circuit." Trends in Neurosciences 34 (6): 283–292.
- Li, Susan Shi Yuan, and Gavan P. McNally. 2014. "The conditions that promote fear learning: Prediction error and Pavlovian fear conditioning." Neurobiology of Learning and Memory 108: 14–21.
- Roberson, Debi, and Jules Davidoff. 2000. "The categorical perception of colors and facial expressions: The effect of verbal interference." Memory & Cognition 28 (6): 977–986.