That’s Too Real

August 20, 2019

Written by: Greer Prettyman

 

Does the movie The Polar Express or the recent Cats trailer make you feel a little…unsettled? Why is that? As an animation or robot becomes more human-like in appearance, people typically find it cute and react to it with empathy. Up until a point, that is. Past that point, the agent becomes so human-like as to create an uncomfortable sense of uncanniness that leads to feelings of revulsion and eeriness. This strange aspect of humans’ reaction to anthropomorphic agents is known as the “Uncanny Valley” effect.

Scientists have started to investigate the Uncanny Valley effect to determine what happens in the brain when someone reaches that tipping point where an artificial agent goes from cute to creepy. As robots and AI become more ingrained in our social lives, from serving fast food to helping to care for our elderly, it’s important for tech developers to ensure that humans trust the robots they are interacting with, part of which involves avoiding slipping into the Uncanny Valley.

One study recently published in the Journal of Neuroscience looked at how the brain responds to artificial agents that produce the Uncanny Valley effect [1]. To address this question, the researchers showed participants pictures of beings spanning the spectrum from artificial to human: mechanoid robots, humanoid robots with a human body shape but no facial features, android robots with human-like facial features, humans who had undergone very extreme plastic surgery, and unaltered humans.

While an MRI scanner recorded brain activity, participants completed two tasks: a rating task and a choice task. In the rating task, participants saw a series of pictures from the categories described above. They then had to rate each picture’s likeability, familiarity, and human-likeness on scales from 1 to 5. With this data, the researchers could look into how much people liked the spectrum of agents and whether any brain regions responded in a way consistent with an Uncanny Valley effect.

As expected, people rated mechanoid robots as not very human-like and humans as very human-like. The other agents fell in between. The researchers observed patterns of liking ratings consistent with an Uncanny Valley effect (Figure 1). Overall, ratings of likeability increased with human-likeness. However, some specific androids and humans who had been altered by plastic surgery were rated as not very likeable and very human-like, providing evidence that these almost-but-not-quite-human agents fell into the Uncanny Valley.

 

Figure 1. Graphical representation of an Uncanny Valley effect. Researchers found that participants’ ratings reflected a UV effect, with androids and synthetic humans rated lower on likeability than would be predicted by a continuous rise in likeability as human-likeness increases.
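To make the shape of this curve concrete, here is a toy sketch in Python. The numbers are purely illustrative assumptions, not data from the study: likeability rises with human-likeness overall, but dips sharply for agents that are almost, but not quite, human.

```python
import math

# Toy model of an Uncanny Valley-shaped likeability curve.
# All constants are hypothetical, chosen only to reproduce the
# qualitative shape in Figure 1 -- not values from the study.

def likeability(human_likeness):
    """Return a toy likeability score for a human-likeness
    value between 0.0 (mechanoid) and 1.0 (human)."""
    # Baseline: likeability rises roughly linearly with human-likeness.
    base = 1.0 + 3.5 * human_likeness
    # "Valley": a Gaussian dip centered near (but not at) full
    # human-likeness, where agents look almost-but-not-quite human.
    dip = 2.5 * math.exp(-((human_likeness - 0.85) ** 2) / 0.005)
    return base - dip

# The curve rises, dips around ~0.85 (androids, synthetic humans),
# then recovers for fully human agents.
scores = [likeability(x / 100) for x in range(101)]
```

The Gaussian dip centered at 0.85 simply stands in for the valley; where the dip sits and how deep it is varied across the study's participants.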

 

Next, the researchers wanted to know whether patterns of brain activity reflected the Uncanny Valley effect. They found that the ventromedial prefrontal cortex (vmPFC), part of the reward network that represents value, activated in proportion to likeability and human-likeness, except for highly human-like artificial agents, suggesting that the Uncanny Valley effect is reflected in vmPFC activation patterns. Other parts of the brain involved in social cognition, including the temporo-parietal junction (TPJ), the dorsomedial prefrontal cortex (dmPFC), and the fusiform gyrus, also encoded information relevant to the Uncanny Valley effect, such as distinguishing humans from non-humans.

The researchers then used statistical modeling to better understand the relationship between activation in these areas. They found that the vmPFC integrates information from the other regions to represent the Uncanny Valley effect. This type of multiplicative integration is similar to the way the brain integrates sensory information, suggesting a common computational mechanism for these distinct cognitive processes.

 

After first finding evidence for an Uncanny Valley effect in the brain, the researchers also wanted to know how much people trust decisions made by artificial agents. In a choice task, participants had to decide whether they would rather receive a gift chosen by an agent from one category or another, for example, from an android robot or a human. Participants were told that each agent had already chosen a gift among four options that would be given to the participant at the end of the research study. Some gifts were appealing (a movie theater voucher and a bottle of wine), while others (dishwasher tabs and toilet deodorizer blocks) were not. While you might trust a human to choose one of the good gifts from these options, would you trust a robot to do the same?

It turns out that Uncanny Valley reactions also guide decision-making. Unsurprisingly, people were more likely to accept gifts from agents they considered more likeable, familiar, and human-like. Participants with especially strong Uncanny Valley reactions showed a pronounced preference for gifts from humans over the human-like agents that fell into the Uncanny Valley. The vmPFC also tracked preferences during the choice task, with selectively low activation for agents in the Uncanny Valley. Participants with greater activation in the amygdala, a region of the brain related to fear and emotion, were less likely to choose gifts from artificial agents, suggesting that people with strong emotional responses to these creepy agents were least likely to trust their choices.

 

This study points to a neural mechanism underlying the Uncanny Valley effect, clarifying how we characterize and respond to artificial agents. It also demonstrates the role of individual variation in the way people respond emotionally and neurally to these not-quite-right artificial agents, helping to explain why you might enjoy watching The Polar Express every Christmas but your mom finds it too disturbing and refuses to watch. In the future, hopefully this research will guide tech development so we can interact with robots that don’t give us the creeps.


References:

  1. Rosenthal-von der Pütten, A. M., Krämer, N. C., Maderwald, S., Brand, M., & Grabenhorst, F. (2019). Neural mechanisms for accepting and rejecting artificial social partners in the Uncanny Valley. Journal of Neuroscience, 39(33), 6555-6570.

 

 

Images:

Figure 1 created with BioRender
