Scientists used AI to recreate clip of Pink Floyd song from recordings of brain activity

Figure (caption from the study): Anatomical location of song-responsive electrodes. (A) Electrode coverage across all 29 patients shown on the MNI template (N = 2,379). All presented electrodes are free of any artifactual or epileptic activity. The left hemisphere is plotted on the left. (B) Location of electrodes significantly encoding the song’s acoustics (Nsig = 347). Significance was determined by the STRF prediction accuracy bootstrapped over 250 resamples of the training, validation, and test sets. Marker color indicates the anatomical label as determined using the FreeSurfer atlas, and marker size indicates the STRF’s prediction accuracy (Pearson’s r between actual and predicted HFA). (C) Number of significant electrodes per anatomical region. Darker hue indicates a right-hemisphere location. (D) Average STRF prediction accuracy per anatomical region. Electrodes labeled as supramarginal, other temporal (i.e., other than STG), and other frontal (i.e., other than SMC or IFG) are pooled together, labeled as “other,” and represented in white/gray. Error bars indicate SEM. Abbreviations: HFA, high-frequency activity; IFG, inferior frontal gyrus; MNI, Montreal Neurological Institute; SEM, standard error of the mean; SMC, sensorimotor cortex; STG, superior temporal gyrus; STRF, spectrotemporal receptive field.

Scientists used artificial intelligence (AI) to reconstruct what Pink Floyd’s ‘Another Brick in the Wall’ sounds like based on patterns of brain activity recorded while people were listening to it.

This is extremely cool and well worth a listen!

Scientists from the University of California, Berkeley studied recordings from electrodes that had been surgically implanted onto the surface of 29 people’s brains to treat epilepsy.

According to the scientists: “We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain.”

They then trained a model that successfully reconstructed a clearly recognisable version of the song from the direct neural recordings alone. The findings go a long way in furthering our understanding of how we perceive sound, and they could eventually improve assistive devices for people with speech impairments.

The study results were published in PLOS Biology.



I am a Chartered Environmentalist from the Royal Society for the Environment, UK and co-owner of DoLocal Digital Marketing Agency Ltd, with a Master of Environmental Management from Yale University, an MBA in Finance, and a Bachelor of Science in Physics and Mathematics. I am passionate about science, history and environment and love to create content on these topics.
