Get a Grip

July 6, 2021

Written by: Greer Prettyman

One of the skills that most reliably separate humans from other mammals is our unique ability to use tools. From spoons to pens to screwdrivers, the tools we use allow us to accomplish tasks that no other animal can. Despite the ease with which we can learn to use a huge variety of tools, the mechanisms of performing the right actions with any given tool are quite complex. Recent research demonstrates that the human brain accomplishes these feats through our ability to visualize one very important feature: our hands.

For years, scientists have been studying the many processes in the brain that are required to use tools and interact with objects. Although we often take the complexity of these actions for granted, a fair amount of coordination is required, engaging multiple senses and brain regions. We rely on our visual system to determine what and where the object is, and where our body is in relation to it. The motor system must plan how to grab and use the object and then execute that motion, updating in real time as the body begins to move. We also have to use our knowledge and memory of what a tool is and how to use it, such as grabbing a hammer by the handle and not the head. 

Neuroimaging technology has allowed researchers to study how the brain carries out each of these processes. Typically, this involves showing research participants 2-dimensional pictures of tools, or of people using tools, and observing which parts of the brain are activated in response to these visual cues. Viewing 2-D tools reliably activates a variety of sensorimotor brain regions, mostly in the brain’s left hemisphere1, but these experiments can’t tell us what happens in the brain when people actually grasp a tool.

When you think about using a tool, you probably visualize a 3-dimensional motion, like twisting a screwdriver or tilting a coffee mug to your lips. A recent study2 took advantage of advances in neuroimaging and computational modeling to get a better look at what the brain is doing to visualize and execute these specific motions. 

In this experiment, adult participants performed real actions during fMRI scanning. They were lying down with their heads in the MRI scanner, a big magnet that can be used to measure blood flow in the brain, but their arms were free to reach out and grab tools placed in front of them. They received verbal instructions to grab one of three kitchen tools (a knife, a spoon, or a pizza cutter) on either its left or right side. This corresponded to either a “typical” grasp, the way the tool would actually be held for use, or a “non-typical” grasp, say on the blade side of the knife (in this case, the tools were plastic, so no one got hurt!). With this paradigm, the experimenters could look for brain activity associated with typical tool use, rather than non-specific motions like reaching. They also included a control condition in which participants were instructed to grab the left or right side of non-tool objects, plastic blocks that were similar in size and shape to the tools but didn’t have any recognizable function. Brain activity was recorded while the participants reached out and grabbed these tools and non-tools on the left or right sides.

After completing the grasp task, participants were instructed to look at a series of 2-D images of tools, objects, hands, and bodies. By comparing activation in different parts of the brain in response to each of these image types, the researchers could identify specific regions that were selectively activated by each category. For example, two regions called the intraparietal sulcus (IPS) and lateral occipitotemporal cortex (LOTC) each had subregions that selectively responded to pictures of hands or pictures of tools. Additional regions, including the premotor cortex, were selectively engaged by tool images.

The researchers then used a computational analysis technique called multi-voxel pattern analysis (MVPA) to identify the patterns of brain activity associated with each type of grasp (typical/non-typical and tool/non-tool). The cool thing about MVPA is that it can be used to “decode” what a person is seeing or doing. Once a classifier has learned the neural patterns associated with each condition, it can be used to predict which action a participant performed. In this case, the researchers wanted to see whether the activity patterns in the brain regions identified in the 2-D image task could be used to determine whether a participant had performed an action consistent with typical tool use with the 3-D objects (e.g., grabbing the handle end of the pizza cutter).
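For readers curious about what this kind of “decoding” looks like in practice, here is a minimal sketch in Python using scikit-learn. This is not the authors’ analysis pipeline; the voxel counts, trial numbers, and classifier choice are all illustrative assumptions, and the data are simulated. The logic, however, is the same: train a classifier on labeled patterns of activity, then test whether it can predict the condition of held-out trials better than chance.

```python
# A minimal sketch of MVPA-style decoding with synthetic data.
# In the real experiment, each row would be the pattern of fMRI activity across
# voxels in a region of interest (e.g., a hand-selective subregion of LOTC) on a
# single trial, labeled by the type of grasp the participant performed.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

n_trials, n_voxels = 80, 200                 # hypothetical trial and voxel counts
labels = np.repeat([0, 1], n_trials // 2)    # 0 = typical grasp, 1 = non-typical grasp

# Synthetic voxel patterns: mostly noise, plus a small label-dependent signal so
# the two conditions are weakly separable, as real data might be.
signal = 0.3 * rng.normal(size=n_voxels)
patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, signal)

# Linear classifier with cross-validation: train on some trials, test on
# held-out trials, and average accuracy across folds.
clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(clf, patterns, labels, cv=cv).mean()

# With two equally likely conditions, chance accuracy is 50%; decoding is only
# meaningful if accuracy is reliably above that level.
print(f"Cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.50)")
```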

Surprisingly, the patterns of activity in the tool-selective regions of the brain weren’t very useful for decoding actual tool grasping. The decoding accuracy in these regions was no better than chance, meaning the model couldn’t tell whether the person had grasped the tool in the typical or non-typical way. This suggests that the regions of the brain that help us recognize tools might not actually store much information about how to hold or use them.

In contrast, the researchers found that the subregions of the IPS and LOTC that selectively responded to hands could decode proper tool grasps. This was only true for the real tools and not for the “non-tools”, indicating that these activity patterns represented grasping of tools specifically, not just the general act of holding any object with the hands. The researchers also confirmed that the decoding accuracy was not merely due to lower-level sensory features of the objects, such as their size or texture. The patterns observed, therefore, reflected something special about the act of grabbing a tool in a way that it could be used.

What can we learn from this research? It makes sense that humans’ impressive ability to use tools relies on our highly capable hands. The complex motions needed to properly hold a tool may rely more on our brains’ ability to visualize where our hands are than previously believed. These same neural processes likely underlie a range of other actions that also require hand-eye coordination. Additionally, as we learn more about the fine-grained patterns of brain activity that underlie goal-directed grasping motions, we can enhance brain-computer interface technologies that help paralyzed patients use their brain activity to grasp objects with robotic arms and hands. 

References:

  1. Lewis, J.W. Cortical networks related to human use of tools. The Neuroscientist. 12(3): 211-231. 2006.
  2. Knights, E., et al. Hand-selective visual regions represent how to grasp 3D tools: brain decoding during real actions. The Journal of Neuroscience. 41(24): 5263. 2021.

Cover Photo by Jacek Dylag on Unsplash
