[Image: Actress Jennifer Aniston in the film Horrible Bosses -- this should pump up my blog stats.]
"My task which I am trying to achieve is, by the power of the written word, to make you hear, to make you feel -- it is, before all, to make you see." -- Joseph Conrad
At this very moment, as your eyes are scanning across the words in front of you, you're performing a feat of mental gymnastics that no other species on earth can approach.
Mere milliseconds after the photons leaping from the screen hit your retinas, you not only recognize the words and letters, but you extract meaning from them. Before a second has passed, you've assembled an idea of what the sentence as a whole means, and as a result, you can make inferences that are unstated; you can prepare an appropriate response; and you can even predict what word is going to come potato.
I mean, "next." How you do this -- how you make meaning out of photons or sound waves -- is one of the great, persistent mysteries of the human mind. And until recently, we had no idea how our brains make meaning. And worse, we didn't even know how to figure it out. But that's all changing.
Part of the solution has been fundamental changes in the instruments we have available to look at the brain. Over the last 15 years, it has become possible, using functional MRI, to measure the dynamics of the waking brain, and that includes what happens while people are reading, like you are now.
We can also now finely measure reaction times, eye and hand movements, and brain waves. And in the past decade, cognitive scientists like me have started to use these tools to inspect exactly what's going on while people read stories, listen to instructions, and recite poems. What we've found is as unexpected as it is revealing.
The traditional view is that our capacity for language is housed in certain centers in the brain -- specialized regions like "Broca's area" and "Wernicke's area" that are purportedly in charge of grammar or meaning, respectively. But the new science tells us that the mind makes meaning using a much broader swath of the brain -- including parts that are typically used for seeing and for moving.
For instance, if you read that "For her new movie, Jennifer Aniston is wearing braces," neurons start firing in the part of your brain that recognizes faces. If I tell you that "For the role, she's learning to ride a giant tricycle," the parts of your brain that control leg actions light up -- the same brain regions that actually send signals to your leg muscles to make them contract.
In other words, you're using the parts of your brain that allow you to perceive the world and move around in it to simulate what it would be like to experience the things that language describes. Even though Jennifer Aniston isn't actually in front of you, you see her in your mind's eye. And even though there's no tricycle to mount, you virtually simulate moving your body to control it. In short, you make meaning by simulating what it would be like to be there.
This finding might seem obvious to some people... of course you see the things you read about in your mind's eye. After all, that's precisely what good fiction does -- it transports you into the body of another person, to another time or place. But because we're now able to measure this transportation in the lab, we can answer fundamental questions about how it works, and what it tells us about how we as humans are able, uniquely in the universe, to understand language.
And this is where it starts to get interesting. From new research, we now know that most of the simulations people construct while understanding language go completely undetected -- they're there even when people aren't aware of them.
For example, you might not think that you activate the mouth-controlling parts of your motor cortex when you read "The dog is feasting on that juicy morsel." But you do. And you do the same thing when you read "The blogosphere is chewing on that juicy morsel." Surprisingly, even metaphorical language like this leads people to simulate seeing things or performing actions.
What's more, we now know that these simulations differ from person to person. Some people are innately more visual -- they can visualize a baboon's face or the Big Dipper with relative ease. Others, like me, are more verbal, and couldn't even tell you what color their dining room walls are. (Maybe they're taupe? Hold on, what is taupe?)
These differences between people are reflected in everything from how they do on different parts of IQ tests to what sorts of professions they end up in. And they also show up in language. When more visual people read about Jennifer Aniston and her giant tricycle, they're more likely to see that scene in their mind's eye, while a less visual person, like me, is more likely to feel what it would be like, as a full-grown adult, to push on the pedals.
These discoveries about how meaning works tell us something profound about what it is to be uniquely human, and how we got to be this way. Evolution, as it turns out, is a persistent tinkerer. Our capacity for language isn't a completely new mental organ cut from whole cloth.
Instead, language is bootstrapped, using simulation, off of evolutionarily older systems dedicated to perception and action.
As the cognitive scientist Elizabeth Bates was fond of saying, language is a new machine built from old parts. That's a fact worth remembering when basking in the glow of our linguistic excellence.
Benjamin K. Bergen is the author of Louder Than Words: The New Science of How the Mind Makes Meaning (Basic Books, 2012).
[Image: More gratuitous Jennifer Aniston.]