UC Berkeley neuroscience researchers have discovered that the human brain is similarly stimulated whether a person is reading a book or listening to an audiobook.
In a study published Monday in the Journal of Neuroscience, researchers used functional magnetic resonance imaging, or fMRI, brain scans taken while subjects read or listened to words to identify the regions of the brain activated by each form of media.
After reviewing the scans, researchers determined that the subjects processed semantic information, or the meaning of language, in a comparable way whether the words were heard or read, according to Fatma Deniz, the study’s lead author and a postdoctoral neuroscience researcher in the Gallant Lab at UC Berkeley.
“As humans, we can comprehend the meaning of words from spoken and written text, but the exact relationship between the meaning of words and how they are represented in the brain (when) spoken versus written was not understood to a full extent,” Deniz said.
The study’s subjects listened to stories from the Moth Radio Hour podcast series and also read them, with words appearing one at a time on a screen to match the pacing of the listening portion of the study. The researchers drew on years of linguistic research on word meanings to quantify how much each word’s meaning affects brain activity, Deniz said.
The researchers were then able to code each word’s meaning and map where that coded meaning activates thousands of small areas across the brain.
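This kind of analysis is often called a voxelwise encoding model. The sketch below is not the study’s code; it uses made-up placeholder data and hypothetical variable names to show the general idea: each word is represented as a vector of semantic features, and a regularized regression learns how strongly those features drive activity in each small brain area, or voxel.

```python
# Illustrative sketch only (not the authors' pipeline): a voxelwise encoding
# model fits a regularized linear regression from semantic word features to
# the fMRI response of each small brain area. All arrays below are random
# placeholders standing in for real stimulus features and brain recordings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 500   # fMRI samples collected while a subject reads or listens
n_features = 300     # dimensions of the semantic feature vector for each word
n_voxels = 1000      # small brain areas whose activity the model predicts

# Hypothetical stimulus matrix: one semantic feature vector per timepoint,
# built from the words presented at that moment in the story.
semantic_features = rng.standard_normal((n_timepoints, n_features))

# Hypothetical fMRI responses: one activity value per voxel per timepoint.
voxel_responses = rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression learns, for every voxel at once, how strongly each
# semantic feature drives that voxel's activity.
encoding_model = Ridge(alpha=10.0)
encoding_model.fit(semantic_features, voxel_responses)

# The fitted weights can then predict brain activity for new story passages.
predicted_responses = encoding_model.predict(semantic_features)
print(predicted_responses.shape)  # (500, 1000): one prediction per voxel
```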
Although there has been evidence that some cortical regions of the brain are activated both while listening to and while reading the same words, common activation does not necessarily mean that those regions share a “word meaning representation” or that the words are comprehended in the same way, according to Deniz.
“We show this representation is extremely similar not only in the few regions that have been reported in the literature thus far but (we) also show it in a large network of brain regions,” Deniz said.
Researchers also created maps of related words to predict which areas of the brain would be activated by certain words. According to Deniz, the two data sets are so similar that the listening data could be used to predict which brain areas would be activated during reading, and the reading data could likewise predict activation during listening.
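A rough illustration of that cross-prediction test, again with placeholder data rather than anything from the study, is sketched below: a model fit on a listening session predicts a reading session, and each brain area is scored by how well the prediction matches the measured activity.

```python
# Illustrative sketch only: test whether a model fit on listening data can
# predict brain activity recorded during reading. Voxels with a high
# correlation between predicted and measured activity are ones whose word
# meaning representation looks similar across the two modalities.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_time, n_feat, n_vox = 400, 300, 1000

# Placeholder data standing in for real recordings from each modality.
features_listening = rng.standard_normal((n_time, n_feat))
responses_listening = rng.standard_normal((n_time, n_vox))
features_reading = rng.standard_normal((n_time, n_feat))
responses_reading = rng.standard_normal((n_time, n_vox))

# Fit the encoding model on the listening session only.
model = Ridge(alpha=10.0).fit(features_listening, responses_listening)

# Predict the reading session and score each voxel by correlation.
predicted = model.predict(features_reading)
per_voxel_r = np.array([
    np.corrcoef(predicted[:, v], responses_reading[:, v])[0, 1]
    for v in range(n_vox)
])
print(per_voxel_r.shape)  # one cross-modality prediction score per voxel
```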
While the study did not propose any future clinical applications, Deniz said that with more research into how the human brain processes language, an area the study has furthered, interventions for language disorders such as dyslexia or aphasia could be developed.
“Understanding how the brain processes semantic information across modalities, including speaking, reading and writing, can help us build language decoders that can help with language disorders,” Deniz said.