Making brain simulation fun with real-time neurofeedback

The general public knows next to nothing about modern neuroscience, let alone the capabilities, limits and goals of brain simulation. People outside the scientific community mainly encounter neuroscience when they read yet another sloppily written news article about a dubious MRI study that supposedly explains why men like horror movies more than women do.

Most people only take a serious interest in neuroscience when their own life or that of a family member is disrupted by a brain disease.

To rectify this unfortunate framing, the Brain Simulation Section has been working for years to actually let people see and feel how exciting, enlightening and fun brain simulation can be. By bringing together artists and neuroscientists, it’s even possible to advance the field of computational neuroscience while an audience is enjoying a great experience.

Four key measures are making this possible:

  1. Visualizing the brain simulation in exciting and innovative ways
  2. Decoupling the brain scanning process from a clinical setting
  3. Simplifying and accelerating the actual brain scanning phase
  4. Closing the loop from scan to simulation and vice versa, providing real-time neurofeedback

To realize these measures, our team developed new algorithms that dynamically adapt the parameters of a simple brain model to real-time signals from a lightweight, battery-powered, dry-sensor EEG headband. This live brain model was coupled with novel analysis and visualization processors.
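
The Python sketch below illustrates the general idea of such a closed adaptation loop under simplifying assumptions: a toy oscillator network stands in for the brain model, and read_eeg_chunk() is a hypothetical placeholder for the headband's streaming interface. Neither the model nor the update rule is the Section's actual algorithm.

```python
# Illustrative sketch only: nudge a global parameter of a toy oscillator
# model toward a target derived from live EEG alpha power.
# read_eeg_chunk() is a hypothetical stand-in for the headband's streaming API.
import numpy as np

FS = 256                      # assumed EEG sampling rate in Hz
N_NODES = 8                   # toy network size

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, N_NODES)        # oscillator phases
omega = 2 * np.pi * rng.uniform(8, 12, N_NODES)    # natural frequencies in the alpha band
coupling = 0.5                                      # parameter adapted from EEG

def read_eeg_chunk(n_samples=FS):
    """Hypothetical placeholder: one second of single-channel EEG."""
    t = np.arange(n_samples) / FS
    return np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n_samples)

def alpha_power(signal):
    """Relative power in the 8-12 Hz band via a plain FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / FS)
    band = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
    return band / spectrum.sum()

def step_model(phases, coupling, dt=1.0 / FS):
    """One Euler step of a Kuramoto-style toy model."""
    mean_field = np.angle(np.mean(np.exp(1j * phases)))
    return phases + dt * (omega + coupling * np.sin(mean_field - phases))

for second in range(10):                      # ten seconds of "live" operation
    chunk = read_eeg_chunk()
    target = alpha_power(chunk)               # map measured alpha power ...
    coupling += 0.2 * (target - coupling)     # ... onto the model parameter (leaky update)
    for _ in range(FS):                       # advance the model in real time
        phases = step_model(phases, coupling)
    print(f"t={second + 1:2d}s  alpha={target:.2f}  coupling={coupling:.2f}")
```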

In effect, we developed a self-contained brain-computer interface (BCI), ready to be deployed in all sorts of entertainment and art applications:

Connecting multiple participants: MyVirtualDream

MyVirtualDream is an immersive audio-visual art installation staged inside an 18-meter geodesic dome. Up to 20 participants at a time wear EEG headbands, letting their brain waves interact live with the brain simulation core of The Virtual Brain (TVB).

A central server controls the real-time visualization of the brain simulation, projected onto the dome’s hemisphere. Up to 50 spectators can watch the resulting “dream projections” as the simulation processes the EEG signals and mixes snippets of pre-recorded audio/video sequences. A group of musicians accompanies the experience with live improvisations inspired by the dream sequences.
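
As a rough illustration of this mixing step, the snippet below aggregates per-participant band power into a pair of scene weights. The data structures and weightings are invented for the example and are not taken from the actual MyVirtualDream server.

```python
# Hypothetical sketch of the "mixing" step: combine per-participant EEG band
# power into a handful of parameters that a visualization layer could consume.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ParticipantState:
    alpha: float   # relative alpha power (relaxation proxy)
    beta: float    # relative beta power (concentration proxy)

def dream_parameters(states):
    """Map the group's average band powers to scene-mixing weights in [0, 1]."""
    relaxation = mean(s.alpha for s in states)
    concentration = mean(s.beta for s in states)
    total = relaxation + concentration
    if total == 0:
        total = 1.0
    return {
        "calm_scene_weight": relaxation / total,
        "active_scene_weight": concentration / total,
    }

if __name__ == "__main__":
    group = [ParticipantState(alpha=0.6, beta=0.3),
             ParticipantState(alpha=0.4, beta=0.5)]
    print(dream_parameters(group))
```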

Since its premiere at Toronto’s Nuit Blanche art festival in 2013, MyVirtualDream has been performed at locations ranging from Irvine, California, to Berlin’s Long Night of Science in 2016, proving very popular with festival attendees.

On the science side, MyVirtualDream achieves what clinical studies can hardly ever do: in just a single night, participants willingly provide 500 recorded EEG time series in the alpha and beta frequency ranges, covering basic cognitive activities such as relaxation and concentration.

After controlling for variables such as time-of-night effects and gender differences, analysis of this unusually large sample revealed unprecedentedly fast learning-related changes in the power spectrum (within roughly one minute). The participants’ baseline brain activity predicted the outcome of subsequent neurofeedback beta training, indicating state-dependent learning – a highly relevant finding for BCI applications.
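
To make the kind of analysis described above concrete, the following sketch correlates a baseline measure with a training gain and fits a linear trend. It runs entirely on synthetic stand-in data; the numbers and variable names are illustrative, not the MyVirtualDream recordings or results.

```python
# Illustrative sketch with synthetic data: does baseline beta power predict
# the gain achieved during a short neurofeedback training block
# (i.e. state-dependent learning)?
import numpy as np

rng = np.random.default_rng(42)
n_participants = 500

# Synthetic stand-ins: baseline relative beta power and training gain,
# constructed so that higher baselines go with larger gains plus noise.
baseline_beta = rng.uniform(0.1, 0.4, n_participants)
training_gain = 0.5 * baseline_beta + 0.05 * rng.standard_normal(n_participants)

# Pearson correlation between baseline state and learning outcome.
r = np.corrcoef(baseline_beta, training_gain)[0, 1]

# Simple linear fit: gain ≈ slope * baseline + intercept.
slope, intercept = np.polyfit(baseline_beta, training_gain, deg=1)

print(f"correlation r = {r:.2f}")
print(f"gain ≈ {slope:.2f} * baseline + {intercept:.2f}")
```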

Interacting with yourself: BrainModes smartphone app

The BrainModes smartphone app (currently in a private beta test and demonstrated at public events) builds on the experience with BCI-based, real-time brain simulations gained at MyVirtualDream events. The goal of this app is to provide personal, mobile neurofeedback to improve concentration, reduce stress, and strengthen learning and problem-solving skills in people of any age.

Running a real-time brain simulation on a smartphone obviously poses two key challenges:

  1. Missing structural data:

    A useful brain simulation has to provide both temporal and spatial signals. While battery-powered Bluetooth EEG headsets are available to stream measurements to a mobile app, acquiring individual structural and spatial brain images on the go is simply impossible – it would require a bulky, slow and hugely expensive MRI scanner.

  2. Limited computing power:

    Modern smartphones come with impressive CPU and GPU capacities, but these are still far below what is typically required to run the TVB brain simulator code, which is written in Python, the lingua franca of neuroscience.

To overcome these challenges, the team from the Brain Simulation Section developed a minimal version of the TVB brain simulation core, written in C to ensure optimal performance. The central algorithm was extended to work with precomputed averages of structural brain data, predicting simulated fMRI signals driven solely by the live EEG signals provided by the headset. The result was published as the Hybrid Virtual Brain on GitHub.
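
The released code is a C implementation, but the underlying idea can be sketched in a few lines of Python: a group-averaged structural connectivity matrix replaces subject-specific imaging, a live EEG-derived input drives coupled regional activity, and a crude hemodynamic smoothing step turns that activity into a slow, fMRI-like signal. The matrix, kernel and update rule below are simplified placeholders, not the published algorithm.

```python
# Simplified Python sketch of the hybrid idea: averaged structure + live EEG
# drive -> simulated fMRI-like signals. All quantities are placeholders.
import numpy as np

N = 16                                  # number of brain regions in the toy model
DT = 0.1                                # integration step in seconds
rng = np.random.default_rng(1)

# Placeholder for a group-averaged structural connectome (row-normalized).
W = rng.random((N, N))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

def eeg_drive(t):
    """Hypothetical stand-in for a live EEG-derived regional input at time t."""
    return 0.1 * (1.0 + np.sin(2 * np.pi * 0.05 * t)) * np.ones(N)

# Leaky, linearly coupled regional activity driven by the EEG input.
x = np.zeros(N)
activity = []
for step in range(600):                 # one minute of simulated activity
    t = step * DT
    x += DT * (-x + W @ x + eeg_drive(t))
    activity.append(x.copy())
activity = np.array(activity)           # shape: (time, regions)

# Crude hemodynamic smoothing: a leaky integrator turns fast activity into a
# slow, fMRI-like signal per region.
bold = np.zeros(N)
bold_like = []
for row in activity:
    bold += DT * (-bold / 4.0 + row)
    bold_like.append(bold.copy())
bold_like = np.array(bold_like)

print("simulated fMRI-like signal shape:", bold_like.shape)
```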