Japanese, Italian, Ukrainian, Swahili, Tagalog, and dozens of other spoken languages cause the same "universal language network" to light up in the brains of native speakers. This hub of language processing has been studied extensively in English speakers, but now neuroscientists have confirmed that the exact same network is activated in speakers of 45 different languages representing 12 different language families.
“This study is very fundamental, extending some findings from English to a wide range of languages,” senior author Evelina Fedorenko, an associate professor of neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research, said in a statement.
“The hope is that now that we see that the basic properties appear to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English,” said Fedorenko. For example, speakers of “tonal” languages, such as Mandarin, convey different word meanings through intonation, or pitch; English is not a tonal language, so pitch may be processed somewhat differently in the brains of its speakers.
The study, published Monday (July 18) in the journal Nature Neuroscience, included two native speakers of each language who underwent brain scans as they performed various cognitive tasks. Specifically, the team scanned the participants’ brains using a technique called functional magnetic resonance imaging (fMRI), which tracks the flow of oxygenated blood through the brain. Active brain cells require more energy and oxygen, so fMRI provides an indirect measure of brain cell activity.
During the fMRI scans, participants listened to passages from Lewis Carroll’s “Alice’s Adventures in Wonderland” (better known as “Alice in Wonderland”) read in their native language. The researchers hypothesized that all listeners would use the same language network to process stories read in their native language.
Participants also listened to several recordings that theoretically would not activate this language network. For example, they listened to recordings in which the words of the native language were distorted beyond recognition and to passages read by a speaker of an unknown language. In addition to completing these language-related tests, participants were asked to do math problems and perform memory tasks; like the incoherent recordings, neither the math nor the memory tests should activate the language network, the team theorized.
“Language areas [of the brain] are selective,” first author Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology program at Harvard University, said in the statement. “They should not respond during other tasks, such as a spatial working memory task, and that’s what we found across the speakers in 45 languages that we tested.”
In native English speakers, the brain areas that are activated during language processing appear mainly in the left hemisphere, primarily in the frontal lobe, which is located behind the forehead, and in the temporal lobe, which is located behind the ear. By constructing “maps” of brain activity from all of their subjects, the researchers revealed that the same brain areas were activated regardless of the language being heard.
The team observed small differences in brain activity among the individual speakers of different languages. However, a similarly small degree of variation has also been seen among native English speakers.
These results are not necessarily surprising, but they lay a critical foundation for future studies, the team wrote in their report. “Although we expected this to be the case, this demonstration is an important basis for future systematic, in-depth and more detailed cross-linguistic comparisons,” they wrote.
Originally published on Live Science.