
BCI Kickstarter #05 : Signal Processing in Python: Shaping EEG Data for BCI Applications

Welcome back to our BCI crash course! We've covered the fundamentals of BCIs, explored the brain's electrical activity, and equipped ourselves with the essential Python libraries for BCI development. Now, it's time to roll up our sleeves and dive into the practical world of signal processing. In this blog, we will transform raw EEG data into a format primed for BCI applications using MNE-Python. We will implement basic filters, create epochs around events, explore time-frequency representations, and learn techniques for removing artifacts. To make this a hands-on experience, we will work with the MNE sample dataset, a combined EEG and MEG recording from an auditory and visual experiment.

Getting Ready to Process: Load the Sample Dataset

First, let's load the sample dataset. If you haven't already, make sure you have MNE-Python installed (using conda install -c conda-forge mne).  Then, run the following code:

import mne

# Load the sample dataset

data_path = mne.datasets.sample.data_path()

raw_fname = data_path / 'MEG' / 'sample' / 'sample_audvis_filt-0-40_raw.fif'

raw = mne.io.read_raw_fif(raw_fname, preload=True)

# Set the EEG reference to the average

raw.set_eeg_reference('average')

This code snippet loads the EEG data from the sample dataset into a raw object, ready for our signal processing adventures.

Implementing Basic Filters: Refining the EEG Signal

Raw EEG data is often contaminated by noise and artifacts from various sources, obscuring the true brain signals we're interested in. Filtering is a fundamental signal processing technique that allows us to selectively remove unwanted frequencies from our EEG signal.

Applying Filters with MNE: Sculpting the Frequency Landscape

MNE-Python provides a simple yet powerful interface for applying different types of filters to our EEG data using the raw.filter() function. Let's explore the most common filter types:

  • High-Pass Filtering: Removes slow drifts and DC offsets, often caused by electrode movement or skin potentials. These low-frequency components can distort our analysis and make it difficult to identify event-related brain activity. Apply a high-pass filter with a cutoff frequency of 0.1 Hz to our sample data using:

raw_highpass = raw.copy().filter(l_freq=0.1, h_freq=None) 

  • Low-Pass Filtering:  Removes high-frequency noise, which can originate from muscle activity or electrical interference. This noise can obscure the slower brain rhythms we're often interested in, such as alpha or beta waves.  Apply a low-pass filter with a cutoff frequency of 30 Hz using:

raw_lowpass = raw.copy().filter(l_freq=None, h_freq=30)

  • Band-Pass Filtering: Combines high-pass and low-pass filtering to isolate a specific frequency band. This is useful when we're interested in analyzing activity within a particular frequency range, such as the alpha band (8-12 Hz), which is associated with relaxed wakefulness. Apply a band-pass filter to isolate the alpha band using:

raw_bandpass = raw.copy().filter(l_freq=8, h_freq=12)

  • Notch Filtering: Removes a narrow band of frequencies, typically used to eliminate power line noise (50/60 Hz) or other specific interference. This noise can create rhythmic artifacts in our data that can interfere with our analysis. Apply a notch filter at 50 Hz using:

raw_notch = raw.copy().notch_filter(freqs=50)

Visualizing Filtered Data: Observing the Effects

To see how filtering shapes our EEG signal, let's visualize the results using MNE-Python's plotting functions:

  • Time-Domain Plots: Plot the raw and filtered EEG traces in the time domain using raw.plot(), raw_highpass.plot(), etc. Observe how the different filters affect the appearance of the signal.
  • PSD Plots: Visualize the power spectral density (PSD) of the raw and filtered data using raw.compute_psd().plot(), raw_highpass.compute_psd().plot(), etc. (older MNE versions use raw.plot_psd()).  Notice how filtering modifies the frequency content of the signal, attenuating power in the filtered bands.
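To see the same effect without the sample dataset, here is a numpy/scipy sketch (not MNE) of what low-pass filtering does to a power spectrum: a 10 Hz rhythm plus 50 Hz "line noise", filtered at 30 Hz. The sampling rate and signal amplitudes are made up for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250.0                        # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)      # 10 s of data
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# 4th-order Butterworth low-pass at 30 Hz, applied forwards and backwards
# (filtfilt) so the filter introduces no phase shift
b, a = butter(4, 30 / (fs / 2), btype='low')
filtered = filtfilt(b, a, sig)

# Compare power at 10 Hz (passband) and 50 Hz (stopband)
freqs, psd_raw = welch(sig, fs, nperseg=1024)
_, psd_filt = welch(filtered, fs, nperseg=1024)
i10 = np.argmin(np.abs(freqs - 10))
i50 = np.argmin(np.abs(freqs - 50))
print(psd_filt[i10] / psd_raw[i10])   # close to 1: the 10 Hz rhythm survives
print(psd_filt[i50] / psd_raw[i50])   # far below 1: the 50 Hz noise is attenuated
```

This is exactly the comparison the PSD plots above make visually: power in the passband is preserved, power beyond the cutoff is suppressed.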

Experiment and Explore: Shaping Your EEG Soundscape

Now it's your turn! Experiment with applying different filter settings to the sample dataset.  Change the cutoff frequencies, try different filter types, and observe how the resulting EEG signal is transformed.  This hands-on exploration will give you a better understanding of how filtering can be used to refine EEG data for BCI applications.

Epoching and Averaging: Extracting Event-Related Brain Activity

Filtering helps us refine the overall EEG signal, but for many BCI applications, we're interested in how the brain responds to specific events, such as the presentation of a stimulus or a user action.  Epoching and averaging are powerful techniques that allow us to isolate and analyze event-related brain activity.

What are Epochs? Time-Locked Windows into Brain Activity

An epoch is a time-locked segment of EEG data centered around a specific event. By extracting epochs, we can focus our analysis on the brain's response to that event, effectively separating it from ongoing background activity.

Finding Events: Marking Moments of Interest

The sample dataset includes dedicated event markers, indicating the precise timing of each stimulus presentation and button press.  We can extract these events using the mne.find_events() function:

events = mne.find_events(raw, stim_channel='STI 014')

This code snippet reads the event markers from the STI 014 channel, the stimulus channel that stores trigger information in this Neuromag-format recording.

Creating Epochs with MNE: Isolating Event-Related Activity

Now, let's create epochs around the events using the mne.Epochs() function:

# Define event IDs for the auditory stimuli

event_id = {'left/auditory': 1, 'right/auditory': 2}

# Set the epoch time window

tmin = -0.2  # 200 ms before the stimulus

tmax = 0.5   # 500 ms after the stimulus

# Create epochs

epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(-0.2, 0))

This code creates epochs for the left and right auditory stimuli, spanning a time window from 200 ms before to 500 ms after each stimulus onset.  The baseline argument applies baseline correction, subtracting the average activity during the pre-stimulus period (-200 ms to 0 ms) to remove any pre-existing bias.
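Under the hood, epoching is just array slicing plus baseline subtraction. A numpy-only sketch makes this concrete (the sampling rate, signal, and event positions here are made up, not taken from the sample dataset):

```python
import numpy as np

fs = 100                             # assumed sampling rate in Hz
rng = np.random.default_rng(0)
signal = rng.standard_normal(3000)   # 30 s of fake single-channel EEG
event_samples = [500, 1500, 2500]    # made-up stimulus onsets, in samples

tmin, tmax = -0.2, 0.5               # same window as above
pre, post = int(-tmin * fs), int(tmax * fs)

epochs_arr = []
for onset in event_samples:
    epoch = signal[onset - pre : onset + post].copy()
    epoch -= epoch[:pre].mean()      # baseline correction: subtract pre-stimulus mean
    epochs_arr.append(epoch)
epochs_arr = np.array(epochs_arr)

print(epochs_arr.shape)              # (3, 70): 3 epochs, 70 samples each
```

mne.Epochs does the same slicing and baseline correction across all channels at once, while also tracking event IDs and metadata.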

Visualizing Epochs: Exploring Individual Responses

The epochs.plot() function allows us to explore individual epochs and visually inspect the data for artifacts:

epochs.plot()

This interactive visualization displays each epoch as a separate trace, allowing us to see how the EEG signal changes in response to the stimulus. We can scroll through epochs, zoom in on specific time windows, and identify any trials that contain excessive noise or artifacts.

Averaging Epochs: Revealing Event-Related Potentials

To reveal the consistent brain response to a specific event type, we can average the epochs for that event.  This averaging process reduces random noise and highlights the event-related potential (ERP), a characteristic waveform reflecting the brain's processing of the event.

# Average the epochs for the left auditory stimulus

evoked_left = epochs['left/auditory'].average()

# Average the epochs for the right auditory stimulus

evoked_right = epochs['right/auditory'].average() 
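Why does averaging work? The ERP is (roughly) the same on every trial, while the noise is not, so the noise shrinks with the square root of the number of trials. A quick numpy demonstration with a synthetic "ERP" (the peak shape and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_samples = 100, 70
t = np.linspace(-0.2, 0.5, n_samples)
erp = 5e-6 * np.exp(-((t - 0.1) ** 2) / 0.002)   # a fake 5 µV peak near 100 ms

# Every trial = same ERP + independent noise
trials = erp + 2e-6 * rng.standard_normal((n_trials, n_samples))
average = trials.mean(axis=0)

# The averaged waveform tracks the true ERP far better than any single trial
single_trial_error = np.abs(trials[0] - erp).mean()
averaged_error = np.abs(average - erp).mean()
print(averaged_error < single_trial_error / 5)   # True
```

With 100 trials the residual noise is roughly 10 times smaller than on a single trial, which is why ERPs invisible in raw traces emerge cleanly after averaging.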

Plotting Evoked Responses: Visualizing the Average Brain Response

MNE-Python provides a convenient function for plotting the average evoked response:

evoked_left.plot()

evoked_right.plot()

This visualization displays the average ERP waveform for each auditory stimulus condition, showing how the brain's electrical activity changes over time in response to the sounds.

Analyze and Interpret: Unveiling the Brain's Auditory Processing

Now it's your turn! Analyze the evoked responses for the left and right auditory stimuli.  Compare the waveforms, looking for differences in amplitude, latency, or morphology.  Can you identify any characteristic ERP components, such as the N100 or P300?  What do these differences tell you about how the brain processes sounds from different spatial locations?

Time-Frequency Analysis: Unveiling Dynamic Brain Rhythms

Epoching and averaging allow us to analyze the brain's response to events in the time domain. However, EEG signals are often non-stationary, meaning their frequency content changes over time. To capture these dynamic shifts in brain activity, we turn to time-frequency analysis.

Time-frequency analysis provides a powerful lens for understanding how brain rhythms evolve in response to events or cognitive tasks. It allows us to see not just when brain activity changes but also how the frequency content of the signal shifts over time.

Wavelet Transform with MNE: A Window into Time and Frequency

The wavelet transform is a versatile technique for time-frequency analysis. It decomposes the EEG signal into a set of wavelets, functions that vary in both frequency and time duration, providing a detailed representation of how different frequencies contribute to the signal over time.

MNE-Python offers the mne.time_frequency.tfr_morlet() function for computing the wavelet transform:

import numpy as np

from mne.time_frequency import tfr_morlet

# Define the frequencies of interest

freqs = np.arange(7, 30, 1)  # From 7 Hz to 29 Hz in 1 Hz steps

# Set the number of cycles for the wavelets

n_cycles = freqs / 2.  # Increase the number of cycles with frequency

# Compute the wavelet transform for the left auditory epochs

power_left, itc_left = tfr_morlet(epochs['left/auditory'], freqs=freqs, n_cycles=n_cycles, use_fft=True, return_itc=True)

# Compute the wavelet transform for the right auditory epochs

power_right, itc_right = tfr_morlet(epochs['right/auditory'], freqs=freqs, n_cycles=n_cycles, use_fft=True, return_itc=True)

This code computes the wavelet transform for the left and right auditory epochs, focusing on frequencies from 7 to 29 Hz. The n_cycles parameter controls the trade-off between time resolution and frequency smoothing: more cycles give sharper frequency resolution but blur the timing.

Visualizing Time-Frequency Representations: Spectrograms of Brain Activity

To visualize the time-frequency representations, we can use the mne.time_frequency.AverageTFR.plot() function:

power_left.plot([0], baseline=(-0.2, 0), mode='logratio', title="Left Auditory Stimulus")

power_right.plot([0], baseline=(-0.2, 0), mode='logratio', title="Right Auditory Stimulus")

This code displays spectrograms, plots that show the power distribution across frequencies over time. The baseline argument normalizes the power values to the pre-stimulus period, highlighting event-related changes.

Interpreting Time-Frequency Results

Time-frequency representations reveal how the brain's rhythmic activity evolves over time. Increased power in specific frequency bands after the stimulus can indicate the engagement of different cognitive processes.  For example, we might observe increased alpha power during sensory processing or enhanced beta power during attentional engagement.
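To build intuition for what tfr_morlet computes, here is a numpy-only sketch of Morlet wavelet power: convolve the signal with a complex Morlet wavelet at each frequency of interest and take the squared magnitude. (The normalization is simplified relative to MNE's implementation, and the test signal is synthetic.)

```python
import numpy as np

fs = 250.0
t = np.arange(0, 2, 1 / fs)
# A signal whose rhythm changes halfway through: 10 Hz, then 20 Hz
sig = np.where(t < 1, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))

def morlet_power(sig, freq, fs, n_cycles=7):
    """Power over time at one frequency via complex Morlet convolution."""
    sigma = n_cycles / (2 * np.pi * freq)           # wavelet width in seconds
    wt = np.arange(-5 * sigma, 5 * sigma, 1 / fs)   # wavelet support
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                # crude normalization
    return np.abs(np.convolve(sig, wavelet, mode='same')) ** 2

p10 = morlet_power(sig, 10, fs)
p20 = morlet_power(sig, 20, fs)
# 10 Hz power dominates the first half, 20 Hz power the second
print(p10[:250].mean() > p20[:250].mean())
print(p20[250:].mean() > p10[250:].mean())
```

Stacking such power traces for every frequency in freqs produces exactly the kind of spectrogram plotted above: a map of when each rhythm is active.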

Discovering Dynamic Brain Patterns

Now, explore the time-frequency representations for the left and right auditory stimuli. Look for changes in power across different frequency bands following the stimulus onset.  Do you observe any differences between the two conditions? What insights can you gain about the dynamic nature of auditory processing in the brain?

Artifact Removal Techniques: Cleaning Up Noisy Data

Even after careful preprocessing, EEG data can still contain artifacts that distort our analysis and hinder BCI performance.  This section explores techniques for identifying and removing these unwanted signals, ensuring cleaner and more reliable data for our BCI applications.

Identifying Artifacts: Spotting the Unwanted Guests

  • Visual Inspection:  We can visually inspect raw EEG traces (raw.plot()) and epochs (epochs.plot()) to identify obvious artifacts, such as eye blinks, muscle activity, or electrode movement.
  • Automated Methods: Algorithms can automatically detect specific artifact patterns based on their characteristic features, such as the high amplitude and slow frequency of eye blinks.
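As a toy illustration of the automated approach, a simple amplitude threshold on a synthetic frontal-channel trace already finds blink-like deflections. (Real pipelines use dedicated tools such as MNE's EOG event detection; the blink shape, noise level, and threshold here are all made up.)

```python
import numpy as np

fs = 100
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = 5e-6 * rng.standard_normal(t.size)        # ~5 µV background noise
for onset in (2.0, 7.0):                        # two fake ~100 µV blinks
    eeg += 100e-6 * np.exp(-((t - onset) ** 2) / 0.01)

threshold = 50e-6
above = eeg > threshold
crossings = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # upward threshold crossings

# Merge crossings closer than 0.5 s (chatter around the same blink)
blinks = [crossings[0]]
for c in crossings[1:]:
    if c - blinks[-1] > 0.5 * fs:
        blinks.append(c)
print([b / fs for b in blinks])   # two detections, just before 2.0 s and 7.0 s
```

The blinks stand out because they are an order of magnitude larger than the background activity, which is exactly the feature automated detectors exploit.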

Rejecting Noisy Epochs: Discarding the Troublemakers

One approach to artifact removal is to simply discard noisy epochs.  We can set rejection thresholds based on signal amplitude using the reject parameter in the mne.Epochs() function:

# Set a rejection threshold for EEG channels

reject = dict(eeg=150e-6)  # Reject epochs with EEG peak-to-peak amplitude exceeding 150 µV

# Create epochs with rejection criteria

epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(-0.2, 0), reject=reject) 

This code rejects epochs where the peak-to-peak amplitude of the EEG signal exceeds 150 µV, helping to eliminate trials contaminated by high-amplitude artifacts.
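The check behind the reject parameter is easy to spell out in plain numpy on a made-up (epochs × channels × samples) array:

```python
import numpy as np

rng = np.random.default_rng(7)
epochs_data = 10e-6 * rng.standard_normal((10, 4, 70))   # 10 epochs, 4 channels
epochs_data[3, 0, 30:40] += 300e-6                       # artifact burst in epoch 3

threshold = 150e-6
ptp = epochs_data.max(axis=2) - epochs_data.min(axis=2)  # peak-to-peak per epoch/channel
keep = (ptp < threshold).all(axis=1)                     # keep only fully clean epochs

clean = epochs_data[keep]
print(np.flatnonzero(~keep))   # [3]: only the contaminated epoch is dropped
print(clean.shape)             # (9, 4, 70)
```

Note that the criterion is peak-to-peak amplitude within the epoch, so a single large transient on any channel is enough to reject the whole trial.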

Independent Component Analysis (ICA): Unmixing the Signal Cocktail

Independent component analysis (ICA) is a powerful technique for separating independent sources of activity within EEG data.  It assumes that the recorded EEG signal is a mixture of independent signals originating from different brain regions and artifact sources.

MNE-Python provides the mne.preprocessing.ICA() function for performing ICA:

from mne.preprocessing import ICA

# Create an ICA object

ica = ICA(n_components=20, random_state=97)

# Fit the ICA to the EEG data (in practice, ICA decompositions are more
# stable on data high-pass filtered at around 1 Hz before fitting)

ica.fit(raw)

We can then visualize the independent components using ica.plot_components() and identify components that correspond to artifacts based on their characteristic time courses and scalp topographies. Once identified, these artifact components can be removed from the data, leaving behind cleaner EEG signals.
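The unmixing logic can be sketched in a few lines of numpy. Real ICA estimates the unmixing matrix blindly from the data; to keep this sketch self-contained and deterministic, we cheat and use the known mixing matrix (the sources, mixing weights, and blink shape are all invented):

```python
import numpy as np

fs = 100
t = np.arange(0, 5, 1 / fs)
brain = np.sin(2 * np.pi * 10 * t)                  # a 10 Hz "brain" source
blink = (np.abs(t - 2.5) < 0.1).astype(float)       # a blink-like artifact source
sources = np.vstack([brain, blink])

A = np.array([[1.0, 0.8],                           # mixing matrix: each channel
              [0.6, 1.0]])                          # records both sources
eeg = A @ sources                                   # what the electrodes see

W = np.linalg.inv(A)                                # the "unmixing" matrix
components = W @ eeg                                # recover the independent sources
components[1] = 0                                   # zero out the blink component
cleaned = A @ components                            # project back to channel space

# The blink is gone, the brain rhythm survives on both channels
print(np.allclose(cleaned[0], brain))
print(np.allclose(cleaned[1], 0.6 * brain))
```

This zero-out-and-reproject step is exactly what MNE performs when you add components to ica.exclude and call ica.apply().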

Experiment and Explore: Finding the Right Cleaning Strategy

Artifact removal is an art as much as a science. Experiment with different artifact removal techniques and settings to find the best strategy for your specific dataset and BCI application.  Visual inspection, rejection thresholds, and ICA can be combined to achieve optimal results.

Mastering the Art of Signal Processing

We've journeyed through the essential steps of signal processing in Python, transforming raw EEG data into a form ready for BCI applications. We've implemented basic filters, extracted epochs, explored time-frequency representations, and tackled artifact removal, building a powerful toolkit for shaping and refining brainwave data.

Remember, careful signal processing is the foundation for reliable and accurate BCI development. By mastering these techniques, you're well on your way to creating innovative applications that translate brain activity into action.


From Processed Signals to Intelligent Algorithms: The Next Level

This concludes our deep dive into signal processing techniques using Python and MNE-Python. You've gained valuable hands-on experience in cleaning up, analyzing, and extracting meaningful information from EEG data, setting the stage for the next exciting phase of our BCI journey.

In the next post, we'll explore the world of machine learning for BCI, where we'll train algorithms to decode user intent, predict mental states, and control external devices directly from brain signals. Get ready to witness the magic of intelligent algorithms transforming processed brainwaves into real-world BCI applications!
