Chapter 6 endnote 6, from Lisa Feldman Barrett.
Some context is:
As a baby nurses one morning, groups of neurons fire in her various sensory systems, in statistically related patterns, to represent the mother’s visual image, the sound of her voice, her scent, the tactile sensations of being held, an increase in energy from being fed, the sensations of a full tummy, plus the pleasure of feeding and being cuddled. All of these representations are interrelated, and their summary is represented elsewhere, in the pattern of firing within a smaller group of neurons, as a rudimentary, multisensory instance of “Mother.” During nursing again later in the day, other summaries of the concept “Mother” will be similarly created, using similar, but not identical, groupings of neurons.
As far as the brain is concerned, a feature can be something elemental, like a wavelength of light, or a change in air pressure, or the presence of a chemical, or it can be a summary of other co-occurring features (two lines make an angle, a bunch of angles and other visual features makes an object, a bunch of visual objects and other co-occurring sensory features make an event). Features are usually represented by groupings or populations of neurons.
Elemental features are usually coded in a map that corresponds to their spatial location (called a topographic map). For example:
- The neurons in primary visual cortex make up a retinotopic map, a one-to-one point-mapping between a spot in the retina and certain neurons in V1
- The neurons in primary auditory cortex make up a tonotopic map, a one-to-one point-mapping between a spot in the cochlea and certain neurons in A1
- ...and so on.
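The one-to-one point-mapping in these topographic maps can be sketched as a tiny toy model (the grid size and neuron indices below are invented for illustration, not real anatomy):

```python
# Toy sketch of a topographic (retinotopic) map: each spot on a tiny
# patch of "retina" maps one-to-one onto a "V1 neuron," and neighboring
# retinal spots land on neighboring neurons, preserving spatial layout.
# All numbers here are invented for illustration.

RETINA_SIZE = 4  # a hypothetical 4x4 patch of retina


def v1_neuron_for(retina_x: int, retina_y: int) -> int:
    """Return the index of the V1 neuron coding this retinal spot."""
    # Row-major indexing keeps the spatial layout: adjacent retinal
    # points map to adjacent neuron indices.
    return retina_y * RETINA_SIZE + retina_x


print(v1_neuron_for(0, 0))  # 0
print(v1_neuron_for(1, 0))  # 1  (adjacent on the retina, adjacent in V1)
print(v1_neuron_for(0, 1))  # 4  (next retinal row, next row of neurons)
```

A tonotopic map works the same way, with position along the cochlea in place of retinal coordinates.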
But summary features (i.e., groupings of lower features) are coded in a map that corresponds to their conceptual similarity to one another. The more conceptually similar these summaries are, the closer they will be to one another in neural space.
So, patterns of neural firing in early sensory cortices code for low-level sensory details, but in the summaries upstream, it is conceptual similarity, not physical similarity, that determines how similar the distributed response patterns are for two instances of the same concept, or even of different concepts. This means that, in a given instance, the neural patterns for two human faces will be closer than the patterns for a human face and an animal face; two faces (human or animal) will be closer to one another than a face and a body; and the patterns for two animate creatures will be closer than the patterns for an animate creature and an inanimate object... and this is just in the visual system.
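The idea that "closer in neural space" means "more similar distributed response patterns" can be made concrete with toy population vectors; the firing rates below are invented purely to illustrate the geometry, not measured data:

```python
# Toy sketch of "conceptual similarity = closeness in neural space":
# each instance of a concept is a distributed pattern, i.e., a vector
# of firing rates across a neuron population. All rates are invented.
import math


def cosine_similarity(a, b):
    """Similarity of two population firing-rate patterns (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


face_1 = [0.9, 0.8, 0.1, 0.2, 0.1]  # one instance of a human face
face_2 = [0.8, 0.9, 0.2, 0.1, 0.1]  # another human face instance
body   = [0.2, 0.1, 0.9, 0.8, 0.1]  # an instance of a human body

# Two faces are conceptually closer than a face and a body, so their
# population patterns are more similar:
assert cosine_similarity(face_1, face_2) > cosine_similarity(face_1, body)
```

In real research this comparison is done over measured response patterns (representational similarity analysis, as in the Grill-Spector and Weiner reference below), but the geometry is the same.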
Some terminology: when scientists say neurons are "closer," they mean the neurons are physically closer together (and the same neurons might even take part in different representations, because representations need only be separable, not physically separate). And by the phrase "in a given instance," I mean that a feature is not necessarily represented by exactly the same neurons each time (i.e., there is degeneracy).
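Degeneracy can be sketched in the same toy style: different, overlapping subsets of a neuron population carry the same representation on different occasions. The neuron indices and the crude threshold "decoder" below are invented for the sketch:

```python
# Toy sketch of degeneracy: the same feature ("Mother") is carried by
# similar, but not identical, groupings of neurons on different
# occasions. Neuron indices and the decoder rule are invented.

# On two nursing episodes, overlapping but different neuron sets fire:
instance_morning = {0, 1, 2, 3, 5}
instance_evening = {1, 2, 3, 4, 7}


def decodes_mother(active_neurons: set) -> bool:
    """A crude decoder: enough of a core ensemble is active."""
    core = {1, 2, 3}  # hypothetical core neurons for this feature
    return len(core & active_neurons) >= 2


# Different neuron sets, same represented feature:
assert instance_morning != instance_evening
assert decodes_mother(instance_morning)
assert decodes_mother(instance_evening)
```

The point of the sketch is only that identity of representation does not require identity of the participating neurons, which is what "in a given instance" is flagging.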
Notes on the Notes
- Marder, Eve, and Jean-Marc Goaillard. 2006. "Variability, compensation and homeostasis in neuron and network function." Nature Reviews Neuroscience 7 (7): 563-574.
- Gjorgjieva, Julijana, Guillaume Drion, and Eve Marder. 2016. "Computational implications of biophysical diversity and multiple timescales in neurons and synapses for circuit performance." Current Opinion in Neurobiology 37: 44-52.
- Denève, Sophie, and Christian K. Machens. 2016. "Efficient codes and balanced networks." Nature Neuroscience 19 (3): 375-382.
- Grill-Spector, Kalanit, and Kevin S. Weiner. 2014. "The functional architecture of the ventral temporal cortex and its role in categorization." Nature Reviews Neuroscience 15 (8): 536-548.
- Edelman, Gerald M., and Joseph A. Gally. 2001. "Degeneracy and Complexity in Biological Systems." Proceedings of the National Academy of Sciences 98 (24): 13763-13768.