Ever dreamed of recording your dreams and turning them into a video clip? The technology that would allow you to do that is near: UC Berkeley scientists have worked out a way to turn the way our brains interpret visual stimuli into video, and the results can be amazing.
To do this, the researchers used functional Magnetic Resonance Imaging (fMRI) to measure blood flow through the brain's visual cortex. That part of the brain was then divided into volumetric pixels, or voxels (the term may be familiar to people who remember early 3D games, which were based on voxels rather than the polygons more widely used today). Finally, the scientists built a computational model describing how visual details are mapped onto brain activity.
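The core idea of such an encoding model can be sketched in a few lines. This is a toy illustration only, with made-up array shapes and random data standing in for real visual features and fMRI recordings; the actual model in the study is far more sophisticated.

```python
import numpy as np

# Toy encoding model: learn a linear map from the visual features of
# each stimulus clip to the measured activity of each voxel.
# All shapes and data here are hypothetical stand-ins.
rng = np.random.default_rng(1)
n_stimuli, n_features, n_voxels = 200, 40, 50
features = rng.standard_normal((n_stimuli, n_features))      # visual features per clip
true_weights = rng.standard_normal((n_features, n_voxels))
activity = features @ true_weights + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# Least-squares fit: one weight vector per voxel.
weights, *_ = np.linalg.lstsq(features, activity, rcond=None)

# Once fitted, the model predicts voxel activity for a new stimulus.
new_features = rng.standard_normal(n_features)
predicted = new_features @ weights
print(predicted.shape)  # (50,)
```

In other words, once you know how each voxel responds to visual features, you can predict the brain activity any new clip should evoke, which is exactly what makes the reconstruction step possible.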
Then, test subjects viewed an extra set of clips. The film reconstruction algorithm was fed 18 million seconds of random YouTube videos, which were used to teach the program to predict the brain activity evoked by film clips. Finally, the program chose the 100 clips most similar to the movie the subject had seen and merged them to create a reconstruction of the original movie.
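The select-and-average step described above can be sketched roughly as follows. Again, this is a simplified illustration with random placeholder data, not the study's actual pipeline: score every candidate clip by how well its predicted brain response matches the observed one, keep the top 100, and average their frames.

```python
import numpy as np

# Hypothetical stand-ins: a library of candidate clips, the model's
# predicted fMRI response for each, and the response actually
# recorded while the subject watched the target movie.
rng = np.random.default_rng(0)
n_clips, n_voxels, n_frames = 1000, 50, 30
library = rng.standard_normal((n_clips, n_frames, 16, 16))   # candidate clip frames
predicted = rng.standard_normal((n_clips, n_voxels))         # predicted response per clip
observed = rng.standard_normal(n_voxels)                     # measured response

# Score each candidate by the correlation between its predicted
# response and the observed response.
scores = np.array([np.corrcoef(p, observed)[0, 1] for p in predicted])

# Keep the 100 best-matching clips and average their frames into a
# single, blurry "reconstruction" of the movie.
top = np.argsort(scores)[-100:]
reconstruction = library[top].mean(axis=0)
print(reconstruction.shape)  # (30, 16, 16)
```

Averaging many roughly matching clips is why the published reconstructions look dreamlike and smeared rather than sharp: they are a blend, not a single retrieved video.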
The result is a video that shows how our brain sees things, and at moments it's eerily similar to the original imagery.
“This is a major leap toward reconstructing internal imagery. We are opening a window into the movies in our minds,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published in the journal Current Biology.
Recording our dreams and “reading” the minds of coma patients is still a long way off, as current technology only lets scientists interpret brain activity while the test subject is actually watching a movie. Ultimately, though, the technique could be used to decode how our brains process visual events in everyday life or, perhaps, our dreams.