
BCI Kickstarter #03: EEG Signal Acquisition and Processing

Welcome back to our BCI crash course! In the previous blog, we explored the basic concepts of BCIs and delved into the fundamentals of neuroscience. Now it's time to get our hands dirty with the practical aspects of EEG signal acquisition and processing. This blog will guide you through the journey of transforming raw EEG data into a format suitable for meaningful analysis and BCI applications. We will cover signal preprocessing techniques and feature extraction methods, providing you with the essential tools for decoding the brain's electrical secrets.

Signal Preprocessing Techniques: Cleaning Up the Data

Raw EEG data, fresh from the electrodes, is often a noisy and complex landscape. To extract meaningful insights and develop reliable BCIs, we need to apply various signal preprocessing techniques to clean up the data, remove artifacts, and enhance the true brain signals.

Why Preprocessing is Necessary: Navigating a Sea of Noise

The journey from raw EEG recordings to usable data is fraught with challenges:

  • Noise and Artifacts Contamination: EEG signals are susceptible to various sources of interference, both biological (e.g., muscle activity, eye blinks, heartbeats) and environmental (e.g., power line noise, electrode movement). These artifacts can obscure the true brain signals we are interested in.
  • Separating True Brain Signals:  Even in the absence of obvious artifacts, raw EEG data contains a mix of neural activity related to various cognitive processes.  Preprocessing helps us isolate the specific signals relevant to our research or BCI application.

Importing Data: Laying the Foundation

Before we can begin preprocessing, we need to import our EEG data into a suitable software environment. Common EEG data formats include:

  • FIF (Functional Imaging File Format): A widely used format developed for MEG and EEG data, supported by the MNE library in Python.
  • EDF (European Data Format): Another standard format, often used for clinical EEG recordings.

Libraries like MNE provide functions for reading and manipulating these formats, enabling us to work with EEG data in a programmatic way.

Removing Bad Channels and Interpolation: Dealing with Faulty Sensors

Sometimes, EEG recordings contain bad channels — electrodes that are malfunctioning, poorly placed, or picking up excessive noise. We need to identify and address these bad channels before proceeding with further analysis.

Identifying Bad Channels:

  • Visual Inspection: Plotting the raw EEG data and visually identifying channels with unusually high noise levels, flat lines, or other anomalies.
  • Automated Methods: Using algorithms that detect statistically significant deviations from expected signal characteristics.

Interpolation:

If a bad channel cannot be salvaged, we can use interpolation to estimate its missing data based on the surrounding good channels. Spherical spline interpolation is a common technique that projects electrode locations onto a sphere and uses a mathematical model to estimate the missing values.
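To make the automated route concrete, here is a minimal sketch (one possible heuristic, not a standard algorithm) that flags channels whose log-variance is a robust outlier relative to the rest of the montage, using synthetic data with one noisy and one flat channel:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, (8, 5000))  # 8 channels x 5000 samples
data[3] *= 20.0  # channel 3: excessive noise
data[6] *= 0.0   # channel 6: flat line (dead electrode)

def find_bad_channels(data, z_thresh=5.0):
    """Flag channels whose log-variance is a robust (MAD-based) outlier."""
    log_var = np.log(np.var(data, axis=1) + 1e-12)
    med = np.median(log_var)
    mad = np.median(np.abs(log_var - med)) + 1e-12
    z = 0.6745 * (log_var - med) / mad  # robust z-score
    return np.where(np.abs(z) > z_thresh)[0]

print(find_bad_channels(data))  # flags the noisy and the flat channel
```

In MNE, channels listed in raw.info["bads"] can then be repaired with raw.interpolate_bads(), which applies spherical spline interpolation for EEG channels.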

Filtering: Tuning into the Right Frequencies

Filtering is a fundamental preprocessing step that allows us to remove unwanted frequencies from our EEG signal. Different types of filters serve distinct purposes:

  • High-Pass Filtering: Removes slow drifts and DC offsets, which are often caused by electrode movement or skin potentials. A typical cutoff frequency for high-pass filtering is around 0.1 Hz.
  • Low-Pass Filtering: Removes high-frequency noise, which can originate from muscle activity or electrical interference. A common cutoff frequency for low-pass filtering is around 30 Hz for most cognitive tasks, though some applications may use higher cutoffs for studying gamma activity.
  • Band-Pass Filtering: Combines high-pass and low-pass filtering to isolate a specific frequency band of interest, such as the alpha (8-12 Hz) or beta (12-30 Hz) band.
  • Notch Filtering: Removes a narrow band of frequencies, typically used to eliminate power line noise (50/60 Hz) or other specific interference.

Choosing the appropriate filter settings is crucial for isolating the relevant brain signals and minimizing the impact of noise on our analysis.
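As a sketch of how these filters look in code, the example below band-passes and notch-filters a synthetic signal with SciPy (the cutoffs, filter order, and Q factor are illustrative choices):

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250.0
t = np.arange(0, 10, 1 / fs)
# Synthetic EEG: 10 Hz "alpha" + 50 Hz line noise + slow drift
x = np.sin(2 * np.pi * 10 * t) + 0.8 * np.sin(2 * np.pi * 50 * t) + 0.5 * t

# 1-30 Hz band-pass (4th-order Butterworth, applied zero-phase)
b, a = butter(4, [1.0, 30.0], btype="bandpass", fs=fs)
x_bp = filtfilt(b, a, x)

# 50 Hz notch for any residual line noise
bn, an = iirnotch(50.0, Q=30.0, fs=fs)
x_clean = filtfilt(bn, an, x_bp)

def amplitude_at(sig, freq):
    """Amplitude of the FFT bin closest to `freq`."""
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

print(amplitude_at(x, 50.0), amplitude_at(x_clean, 50.0))  # line noise strongly attenuated
```

Zero-phase filtering with filtfilt avoids shifting component latencies in time, which matters whenever timing is part of the analysis.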

Downsampling: Reducing the Data Load

Downsampling refers to reducing the sampling rate of our EEG signal, which can be beneficial for:

  • Reducing data storage requirements: Lower sampling rates result in smaller file sizes.
  • Improving computational efficiency:  Processing lower-resolution data requires less computing power.

However, we need to be cautious when downsampling to avoid losing important information.  The Nyquist-Shannon sampling theorem dictates that we must sample at a rate at least twice the highest frequency of interest in our signal to avoid aliasing, where high frequencies are incorrectly represented as lower frequencies.

Decimation is a common downsampling technique that combines low-pass filtering with sample rate reduction to ensure that we don't introduce aliasing artifacts into our data.
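A short sketch of decimation with SciPy (the rates and the 12 Hz test component are arbitrary): scipy.signal.decimate low-pass filters before discarding samples, so content below the new Nyquist limit survives intact.

```python
import numpy as np
from scipy.signal import decimate

fs = 1000.0
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 12 * t)  # 12 Hz, well below the new Nyquist limit

# Decimate by 4: anti-aliasing low-pass filter, then keep every 4th sample
x_ds = decimate(x, q=4, zero_phase=True)
fs_ds = fs / 4  # new rate: 250 Hz, so the new Nyquist limit is 125 Hz

print(len(x), len(x_ds))  # 4000 1000
```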

Re-Referencing: Choosing Your Point of View

In EEG recording, each electrode's voltage is measured relative to a reference electrode.  The choice of reference can significantly influence the interpretation of our signals, as it affects the baseline against which brain activity is measured.

Common reference choices include:

  • Linked Mastoids: Averaging the signals from the mastoid electrodes behind each ear.
  • Average Reference: Averaging the signals from all electrodes.
  • Other References: Specific electrodes (e.g., Cz) or combinations of electrodes can be chosen based on the research question or BCI application.

Re-referencing allows us to change the reference of our EEG data after it's been recorded.  This can be useful for comparing data recorded with different reference schemes or for exploring the impact of different references on signal interpretation. Libraries like MNE provide functions for easily re-referencing data.
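In code, re-referencing is just a subtraction. The sketch below applies an average reference and a linked-mastoid-style reference to synthetic data (which channels stand in for the mastoids is an arbitrary choice here):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(0.0, 10.0, (4, 1000))  # 4 channels x 1000 samples, microvolts

# Average reference: subtract the instantaneous mean across all channels
data_avg = data - data.mean(axis=0, keepdims=True)

# Linked-mastoid-style reference: subtract the mean of two reference channels
# (channels 2 and 3 play the role of the mastoid electrodes)
data_mastoid = data - data[[2, 3]].mean(axis=0)

print(np.abs(data_avg.mean(axis=0)).max())  # ~0: channels now sum to zero
```

In MNE the equivalent is a one-liner, e.g. raw.set_eeg_reference("average").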

Feature Extraction Methods: Finding the Signal in the Noise

Once we've preprocessed our EEG data, it's time to extract meaningful information that can be used for analysis or to train BCI systems. Feature extraction is the process of transforming the preprocessed EEG signal into a set of representative features that capture the essential patterns and characteristics of the underlying brain activity.

What is Feature Extraction? Simplifying the Data Landscape

Raw EEG data, even after preprocessing, is often high-dimensional and complex.  Feature extraction serves several important purposes:

  • Reducing Data Dimensionality: By extracting a smaller set of representative features, we simplify the data, making it more manageable for analysis and machine learning algorithms.
  • Highlighting Relevant Patterns: Feature extraction methods focus on specific aspects of the EEG signal that are most relevant to the research question or BCI application, enhancing the signal-to-noise ratio and improving the accuracy of our analyses.

Time-Domain Features: Analyzing Signal Fluctuations

Time-domain features capture the temporal characteristics of the EEG signal, focusing on how the voltage changes over time. Some common time-domain features include:

  • Amplitude:
    • Peak-to-Peak Amplitude: The difference between the highest and lowest voltage values within a specific time window.
    • Mean Amplitude: The average voltage value over a given time period.
    • Variance: A measure of how much the signal fluctuates around its mean value.
  • Latency:
    • Onset Latency: The time it takes for a specific event-related potential (ERP) component to appear after a stimulus.
    • Peak Latency: The time point at which an ERP component reaches its maximum amplitude.
  • Time-Series Analysis:
    • Autoregressive Models: Statistical models that predict future values of the signal based on its past values, capturing temporal dependencies in the data.
    • Moving Averages:  Smoothing techniques that calculate the average of the signal over a sliding window, reducing noise and highlighting trends.
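Most of these time-domain features are one-liners with NumPy. A minimal sketch on a synthetic one-second epoch (the window length and signal values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250.0
epoch = rng.normal(0.0, 1.0, int(fs))  # one 1-second single-channel epoch

features = {
    "peak_to_peak": np.ptp(epoch),   # max minus min voltage
    "mean_amplitude": epoch.mean(),  # average voltage
    "variance": epoch.var(),         # fluctuation around the mean
}

# Moving average: smooth with a 10-sample sliding window
window = 10
smoothed = np.convolve(epoch, np.ones(window) / window, mode="valid")

print(features)
```

Latency features, by contrast, require locating ERP components relative to stimulus onset, so they are computed on epoched, stimulus-locked data rather than on a free-running signal.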

Frequency-Domain Features: Unveiling the Brain's Rhythms

Frequency-domain features analyze the EEG signal in the frequency domain, revealing the power distribution across different frequency bands. Key frequency-domain features include:

  • Power Spectral Density (PSD): A measure of the signal's power at different frequencies. PSD is typically calculated using the Fast Fourier Transform (FFT), which decomposes the signal into its constituent frequencies.
  • Band Power: The total power within a specific frequency band, such as delta, theta, alpha, beta, or gamma. Band power features are often used in BCI systems to decode mental states or user intent.
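The sketch below estimates a PSD with Welch's method and integrates band power over the alpha and beta bands, on a synthetic signal with a strong 10 Hz rhythm (the signal composition and segment length are illustrative):

```python
import numpy as np
from scipy.signal import welch

fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(7)
# Synthetic "eyes-closed" EEG: strong 10 Hz alpha rhythm in broadband noise
x = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, t.size)

# Welch PSD with 2-second segments -> 0.5 Hz frequency resolution
freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))

def band_power(freqs, psd, fmin, fmax):
    """Integrate the PSD over a frequency band (rectangle rule)."""
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum() * (freqs[1] - freqs[0])

alpha = band_power(freqs, psd, 8.0, 12.0)
beta = band_power(freqs, psd, 12.0, 30.0)
print(alpha > beta)  # True: the alpha band dominates
```

Feature vectors of per-band, per-channel power like this are a common input to the classifiers used in BCI systems.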

Time-Frequency Features: Bridging the Time and Frequency Divide

Time-frequency features provide a combined view of the EEG signal in both time and frequency domains, capturing dynamic changes in frequency content over time.  Important time-frequency features include:

  • Wavelet Transform:  A powerful technique that decomposes the signal into a set of wavelets, functions that vary in both frequency and time duration. Wavelet transforms excel at capturing transient events and analyzing signals with non-stationary frequency content.
  • Short-Time Fourier Transform (STFT):  Divides the signal into short segments and calculates the FFT for each segment, providing a time-varying spectrum. STFT is useful for analyzing how the frequency content of the signal changes over time.
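To see why time-frequency analysis matters, the sketch below applies SciPy's STFT to a non-stationary signal that switches from 10 Hz to 20 Hz halfway through (window length and inspected columns are illustrative choices):

```python
import numpy as np
from scipy.signal import stft

fs = 250.0
t = np.arange(0, 8, 1 / fs)
# Non-stationary signal: 10 Hz for the first 4 s, then 20 Hz
x = np.where(t < 4, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))

# STFT with 1-second windows (1 Hz frequency resolution)
freqs, times, Z = stft(x, fs=fs, nperseg=int(fs))

# Dominant frequency in an early window vs. a late window
early = freqs[np.argmax(np.abs(Z[:, 2]))]   # window around t = 1 s
late = freqs[np.argmax(np.abs(Z[:, -3]))]   # window around t = 7 s
print(early, late)  # 10.0 20.0
```

The fixed window size is the STFT's main limitation: wavelets instead trade time resolution against frequency resolution adaptively, which suits transient EEG events.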

From Raw Signals to Actionable Insights

The journey from raw EEG data to meaningful insights and BCI control involves a carefully orchestrated sequence of signal acquisition, preprocessing, and feature extraction. Each step plays a crucial role in revealing the hidden patterns within the brain's electrical symphony, allowing us to decode mental states, control external devices, and unlock new possibilities for human-computer interaction.

By mastering these techniques, we can transform the complex and noisy world of EEG recordings into a rich source of information, paving the way for innovative BCI applications that can improve lives and expand our understanding of the human brain.


What's Next: Real-World BCIs using Signal Processing

This concludes our exploration of EEG signal acquisition and processing. Now that we've learned how to clean up and extract meaningful features from raw EEG data, we are ready to explore how these techniques are used to build real-world BCI applications.

In the next post, we'll dive into the fascinating world of BCI paradigms and applications, discovering the diverse ways BCIs are being used to translate brain signals into actions. Stay tuned!
