UC Berkeley researchers explore search mechanism of the human brain

Helen Wills Neuroscience Institute/Courtesy

The inner workings of the brain remain largely a mystery to neuroscientists, even with significant advances in the field over the last decade. However, a new discovery from UC Berkeley may help scientists understand one of the brain’s most important functions: its search mechanism.

A recent study published April 21 in Nature Neuroscience found that when humans search for objects, different regions of the brain devoted to both visual and nonvisual tasks are mobilized to perform a targeted search, similar to the way a search engine works.

Researchers at the campus's Helen Wills Neuroscience Institute placed five subjects in an MRI scanner and showed them a series of Hollywood movie trailers posted on YouTube.

The subjects watched the hourlong reel of clips three times while the researchers, led by postdoctoral researcher Tolga Cukur, tracked the participants’ brain activity using functional MRI, which measures changes in blood oxygenation across the brain.

Participants were not asked to do anything during the first viewing, giving researchers a neural baseline against which later results could be compared. The second time, participants were asked to search for people in the videos and press a button each time they spotted one. For the final viewing, subjects were asked to locate vehicles instead of people.

According to Cukur, the results of these experiments may explain why humans find it difficult to multitask. When an individual searches for a specific item, many brain regions shift away from their usual functions and are recruited into a single search network, leaving little capacity for extraneous thought.

“We found that when attention switches between humans and vehicles, the representation of the relevant category is expanded across many brain areas, and the representation of the irrelevant category is suppressed,” said Jack Gallant, a campus professor of psychology and head of the lab that conducted the study.

Gallant said that this research could have enormous implications for how humans understand basic thought and mental disorders.

Although it is too early to say what further research in this field might reveal, the researchers say they are excited by the possibilities.

“As far as the practical implications, the models we’ve developed could help to build brain machine interfaces that could reconstruct what people are looking at or paying attention to,” said Alex Huth, a co-author of the paper. “This could be really useful for something like Google Glass — imagine a system that could figure out what you’re looking for and help you find it.”

Contact Eoghan Hughes at [email protected].