Hybrid Time-Frequency Deep Attention Network for EEG-Based Cognitive State Classification
Keywords:
Convolutional neural networks, machine learning classifiers, natural language processing

Abstract
Electroencephalogram (EEG)-based cognitive state classification remains challenging due to signal non-stationarity and noise. We present a compact hybrid model that integrates residual convolutional blocks for spectral–spatial feature extraction, a bidirectional Long Short-Term Memory (LSTM) network for temporal fusion, and multi-head self-attention to weight time–frequency representations.
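The hybrid architecture described above can be sketched as follows. This is a minimal illustration only: the layer widths, kernel sizes, and class count are hypothetical placeholders, not the paper's actual configuration.

```python
# Hedged sketch of a CNN + BiLSTM + multi-head-attention hybrid, loosely
# following the abstract's description. All hyperparameters are illustrative.
import torch
import torch.nn as nn

class HybridEEGNet(nn.Module):
    def __init__(self, n_channels=64, n_classes=4, hidden=64, heads=4):
        super().__init__()
        # Residual convolutional block for spectral-spatial features
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )
        self.res = nn.Conv1d(hidden, hidden, kernel_size=3, padding=1)
        # Bidirectional LSTM for temporal fusion
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Multi-head self-attention to weight time steps
        self.attn = nn.MultiheadAttention(2 * hidden, heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                # x: (batch, channels, time)
        h = self.conv(x)
        h = torch.relu(h + self.res(h))  # residual connection
        h = h.transpose(1, 2)            # -> (batch, time, features)
        h, _ = self.lstm(h)              # -> (batch, time, 2*hidden)
        a, _ = self.attn(h, h, h)        # self-attention over time
        return self.head(a.mean(dim=1))  # pool over time and classify

model = HybridEEGNet()
logits = model(torch.randn(2, 64, 128))  # 2 trials, 64 channels, 128 samples
print(logits.shape)
```

The residual connection and attention-weighted pooling are the two components the ablation study singles out as jointly necessary.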
On the PhysioNet Motor Imagery dataset (109 subjects, 64 channels), our approach attains 95.2% test accuracy, surpassing standalone Convolutional Neural Network (CNN), LSTM, and Transformer baselines by 5–15%. An ablation study confirms that jointly leveraging convolutional and attention mechanisms is critical for robust performance. Statistical comparison using McNemar's test further supports the reliability of the proposed model, which shows no significant difference from a CNN+LSTM+Fusion baseline (p = 0.19) and a highly significant improvement over the Transformer-based model (p < 0.0001). These results highlight the importance of tailoring model components to the unique properties of EEG data, and the power of attention-driven fusion, for reliable cognitive-state decoding.
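The McNemar comparison used above can be computed from the discordant-pair counts of two classifiers evaluated on the same test set. A minimal sketch of the exact (binomial) form follows; the counts here are invented for illustration and are not the paper's data.

```python
# Hedged sketch: exact two-sided McNemar test from discordant-pair counts.
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value.

    b = items model A classified correctly and model B incorrectly;
    c = the reverse. Under H0 the split of the b + c discordant items
    follows Binomial(b + c, 0.5).
    """
    n, k = b + c, min(b, c)
    p = 2 * sum(comb(n, i) * 0.5 ** n for i in range(k + 1))
    return min(p, 1.0)

# Roughly balanced disagreements -> no significant difference
print(round(mcnemar_exact(12, 8), 3))   # well above 0.05
# Lopsided disagreements -> highly significant difference
print(mcnemar_exact(60, 10) < 0.0001)   # True
```

Because both models are scored on the same trials, this paired test is more appropriate than comparing raw accuracies with an unpaired test.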