April 19, 2018
Written by: Sarah Reitz
It happens to everyone: you’re at a crowded event and somehow get separated from the people you arrived with. But after quickly scanning the crowd, you’re able to pick them out instantly from a sea of faces. Or, while walking down the street, you recognize someone you were introduced to weeks ago but can’t remember his or her name no matter how hard you try.
As a species that relies on social connections for survival, humans have evolved brains adapted to recognize, remember, and respond to faces. These abilities give us huge social advantages, allowing us to read emotion and predict how others will respond in certain situations. Interestingly, this recognition of faces seems to be present from birth. Infants spend more time looking at patterns that resemble faces than at patterns that do not [1], and show a preference for their mother’s face within a few days of being born [2]. But how is our brain able to look at dozens, sometimes hundreds, of faces and recognize the ones we’ve seen before (even when we can’t remember their names), all in a fraction of a second?

One of the major face-processing regions of the brain, known as the fusiform face area, is located in the fusiform gyrus (Figure 1). Researchers believe this region detects and recognizes faces by integrating information from other visual processing areas of the brain. When the fusiform face area is damaged, the result is a condition called prosopagnosia. People with this condition have an impaired ability to recognize faces, including their own! The fusiform face area seems to be specific to faces, because people with prosopagnosia can still accurately discriminate other objects from one another.
Within this face-processing cortex are areas called “face patches.” These regions contain “face cells,” neurons that are activated more strongly when a person or monkey is looking at faces than at any other object. For years, scientists used electrophysiology (a technique in which a probe inserted into the brain records the electrical activity of neurons, revealing how they respond to particular stimuli or behaviors) to examine the activity of these face cells while different objects were viewed. When researchers recorded from face cells in rhesus macaques, they found that a single cell not only responded more strongly to faces than to other objects, but could also respond more strongly to one specific face than to other faces! In the most famous of these studies, researchers recording from the brains of human patients found a neuron that was activated by pictures of Jennifer Aniston, but not by other faces [3]. This “Jennifer Aniston neuron” revived the grandmother cell hypothesis, the idea that individual neurons are responsible for recognizing individual faces. However, this seemed unlikely: there are millions of faces in the world, and dedicating a neuron to each person you encounter would be a highly inefficient way to process and recognize faces.
For years, exactly how face cells recognize a seemingly infinite array of faces remained unknown. However, work by Drs. Doris Tsao and Le Chang published in 2017 unlocked an entirely new understanding of how these cells identify and process faces, showing that the brain handles faces in a way that is far more elegant and efficient than previously imagined [4]. By recording the activity of face cells in rhesus macaques while the animals viewed 2,000 images of faces with different characteristics, Tsao and Chang determined that face cells don’t respond to a specific person, as previously thought, but rather to specific facial features. For example, some neurons responded to features like eyebrow length or the distance between the eyes. The researchers identified 50 dimensions of “face space” that tend to vary between people, and categorized them as either shape-related features, like those just mentioned, or features not dependent on face shape, like eye color or skin tone.
When they recorded from these face cells, they found that each cell responded in a predictable way to specific dimensions, or features, of this face space. For instance, one cell might respond to the distance between the eyes, firing most when that distance is large and least when it is small. This also means that for faces with identical inter-eye distance, the neuron’s firing rate was the same, regardless of the faces’ other features. Many neurons were active in response to combinations of these features rather than a single feature. Based on this encoding scheme, Tsao and Chang proposed that the combined activity of cells tuned to all of these different dimensions tells the brain the identity of a face. You can think of this system as analogous to color vision, in which every color we perceive can be produced by mixing just a few colors of light. But instead of the three primary components of light (red, green, and blue), this study shows that faces can be broken down into roughly 50 components, which together can specify an immense array of faces.
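To make this axis-like code concrete, here is a minimal Python/NumPy sketch of the idea described above: each face is a point in a 50-dimensional face space, and each simulated face cell fires in proportion to the face’s projection onto that cell’s preferred axis. This is a toy model, not the authors’ analysis; the cell count, axes, and variable names are all illustrative assumptions.

```python
# Toy model (not the published code) of the "axis" face code described above.
# Assumption: each face cell's firing rate is a linear ramp along one preferred
# direction in a 50-dimensional face space.
import numpy as np

rng = np.random.default_rng(0)
n_dims = 50          # dimensions of face space (shape + appearance features)
n_cells = 205        # number of face cells, matching the count in the study

# Each simulated cell gets a random preferred axis (weights over the 50 features).
axes = rng.standard_normal((n_cells, n_dims))

def cell_responses(face):
    """Firing rates: each cell projects the face onto its preferred axis."""
    return axes @ face

# Two faces that differ in many features, then adjust face_b so its projection
# onto cell 0's axis matches face_a's projection exactly.
face_a = rng.standard_normal(n_dims)
face_b = rng.standard_normal(n_dims)
w0 = axes[0]
face_b += w0 * (w0 @ (face_a - face_b)) / (w0 @ w0)

print(cell_responses(face_a)[0], cell_responses(face_b)[0])  # cell 0: same rate
print(cell_responses(face_a)[1], cell_responses(face_b)[1])  # cell 1: different rates
```

In this sketch, cell 0 cannot tell the two faces apart because they share the same value along its preferred dimension, mirroring the finding that a face cell’s firing depends only on the features it is tuned to.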
Then, in an experiment that sounds straight out of an episode of Black Mirror, Tsao and Chang were able to recreate a face a macaque had just seen, based only on the electrical recordings from these face cells! The scientists showed the macaques a brand-new face while recording from just 205 face cells in the brain. They then used a computer algorithm to decode the electrical activity of those 205 cells and recreate an image of the face the macaque had seen (check out the comparisons in the original paper!). When a group of people were shown this recreated face and asked to guess which face the macaque had actually been shown from a group of 40 faces, they chose correctly nearly 80% of the time!
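Continuing the toy model above, the sketch below illustrates the decoding idea in spirit: fit a linear map from the firing rates of 205 simulated cells back to face-space coordinates, then use it to reconstruct a face the model has never seen. This is a hedged illustration of linear decoding in general, not the published reconstruction algorithm, and every number in it is assumed for the example.

```python
# Toy linear decoder: recover a face's 50 face-space coordinates from the
# firing rates of 205 simulated face cells (illustration only).
import numpy as np

rng = np.random.default_rng(1)
n_dims, n_cells = 50, 205
axes = rng.standard_normal((n_cells, n_dims))   # each cell's preferred axis

# "Training" faces and the (slightly noisy) responses they evoke.
train_faces = rng.standard_normal((500, n_dims))
train_rates = train_faces @ axes.T + 0.1 * rng.standard_normal((500, n_cells))

# Fit a linear decoder mapping firing rates -> face-space coordinates.
decoder, *_ = np.linalg.lstsq(train_rates, train_faces, rcond=None)

# Show the model a brand-new face and reconstruct it from its 205 responses.
new_face = rng.standard_normal(n_dims)
new_rates = new_face @ axes.T
reconstruction = new_rates @ decoder

print(np.corrcoef(new_face, reconstruction)[0, 1])  # close to 1: good recovery
```

Because the toy code is linear, a simple least-squares decoder recovers the face coordinates almost perfectly, which is the intuition behind reconstructing a viewed face from a few hundred cells.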
This work is an exciting leap forward in the field of face recognition, providing some of the first evidence against the long-held idea that individual face cells respond to a specific person and proposing a system by which our brains can efficiently recognize an immense array of faces. In light of these new data, it is likely that in earlier studies researchers happened to record from a cell that responded to a few dimensions of Jennifer Aniston’s face that none of the other faces shown happened to share. Though humans likely need far more than 200 neurons to reliably and accurately encode all of the faces we see, this work offers new insight into how the brain accomplishes face recognition so efficiently. These results may also help researchers understand how our brains encode and recognize the other complex objects we encounter.
References:
1. Goren CC, Sarty M, Wu PY. Visual following and pattern discrimination of face-like stimuli by newborn infants. Pediatrics. 1975; 56(4):544-549.
2. Bushnell IW, Sai F, Mullin JT. Neonatal recognition of the mother’s face. British Journal of Developmental Psychology. 1989; 7(1):3-15.
3. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature. 2005; 435(7045):1102-1107.
4. Chang L, Tsao DY. The code for facial identity in the primate brain. Cell. 2017; 169(6):1013-1028.
Images:
Cover image: Image by Geralt, CC0 Creative Commons, via Pixabay
Figure 1: Image by Henry Gray, vectorized by Mysid, colored by was_a_bee. [Public domain], via Wikimedia Commons