Research reconstructs brain activity

Scientists could recreate dreams and memories with new brain imaging technology developed by UC Berkeley researchers.

Using functional magnetic resonance imaging, the researchers reconstructed brain activity recorded while subjects watched YouTube videos, allowing scientists to view approximations of the images the subjects saw. The research was published Sept. 22 in the journal Current Biology.

The fMRI scans, which measure changes in blood flow driven by neural activity, let researchers use those changes to interpret what a subject saw on the screen.

The applications of the technology could extend further than reconstructing YouTube videos, potentially helping scientists understand parts of the brain that remain a mystery, including dreams and memories.

“The technology will definitely get there, the question is just when,” said Jack Gallant, professor of psychology at UC Berkeley and co-author of the study.

The research focuses on visual activity, which accounts for roughly a third of the brain’s processing.

“One way to think about the brain is to build a dictionary that translates between the world and the parts of the brain,” Gallant said.

In the research, each subject’s brain activity was sampled every second while watching videos, and each recorded second was reconstructed separately by drawing on a library of 18 million seconds of random YouTube video.

“Each person reconstructs a different image, and you have to work with what you manage to get,” said Yuval Benjamini, a graduate student in the UC Berkeley Department of Statistics who helped work on the statistical decoding aspect of the research.

“First you need to build individual dictionaries for individual brains,” Gallant said.

Reconstruction of the images was attained by averaging the hundred YouTube clips whose predicted brain activity came closest to the activity actually measured. However, the researchers faced a problematic limitation with the data once all of it was collected.
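The averaging step described above can be sketched roughly as follows. This is an illustrative simplification, not the lab's actual code: given the measured fMRI response to one second of video, rank a library of clips by how closely their model-predicted responses match, then average the frames of the best matches. All names, shapes, and the use of a single frame per clip are assumptions made for the sketch.

```python
import numpy as np

def reconstruct_second(measured, predicted_library, clip_frames, k=100):
    """Hypothetical reconstruction of one second of viewing.

    measured: (n_voxels,) observed fMRI response.
    predicted_library: (n_clips, n_voxels) model-predicted responses
        for each clip in the library.
    clip_frames: (n_clips, H, W) one representative frame per clip.
    """
    # Correlate the measured response with each clip's predicted response.
    m = measured - measured.mean()
    p = predicted_library - predicted_library.mean(axis=1, keepdims=True)
    scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)
    # Average the frames of the k best-matching clips into one image.
    top = np.argsort(scores)[-k:]
    return clip_frames[top].mean(axis=0)
```

Averaging many near-matches is why the published reconstructions look blurry: the output is a blend of similar clips rather than any single video.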

Even with 18 million seconds’ worth of video, the data captured only a small portion of the brain activity that occurred during the research, Benjamini said.

“It’s a very small subset of the videos that people see and understand,” he said.

To address this limitation, the lab created models to predict brain activity and compared the predictions to the actual brain activity the subjects displayed, Gallant said.
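The encoding-model idea Gallant describes can be sketched in a few lines. This is a minimal illustration under assumed simplifications (simple linear ridge regression from generic video features to voxel responses), not the lab's actual model or features:

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression mapping video features to voxel responses.

    features: (n_seconds, n_features) description of each second of video.
    responses: (n_seconds, n_voxels) measured fMRI activity.
    Returns a (n_features, n_voxels) weight matrix.
    """
    n_feat = features.shape[1]
    gram = features.T @ features + alpha * np.eye(n_feat)
    return np.linalg.solve(gram, features.T @ responses)

def prediction_accuracy(weights, features, responses):
    """Per-voxel correlation between predicted and measured activity."""
    pred = features @ weights
    pc = pred - pred.mean(axis=0)
    rc = responses - responses.mean(axis=0)
    denom = np.linalg.norm(pc, axis=0) * np.linalg.norm(rc, axis=0) + 1e-12
    return (pc * rc).sum(axis=0) / denom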

In addition to decoding dreams and memories, this breakthrough also has the potential to improve the lives of people who have degenerative neurological diseases — an internal speech decoder would “allow people with no motor skills to go into the MRI for two hours a day and communicate with their families,” Gallant said.

Two factors are limiting the advancement of the brain imaging technology — the limitations of MRIs and the question of what kind of decoding models are necessary for translating specific brain activity, Gallant said.

In the future, the technology could have many theoretical applications in areas of creativity and artistic production.

“You could build a brain decoder that composes music and you could just think of music and then it would be composed for you,” Gallant said.