An Explainable 3D-Deep Learning Model for EEG Decoding in Brain-Computer Interface Applications.
Decoding electroencephalographic (EEG) signals is of key importance in the development of brain-computer interface (BCI) systems. However, high inter-subject variability in EEG signals requires user-specific calibration, which can be time-consuming and can limit the applicability of deep learning approaches, given the large amounts of data generally needed to train these models properly. In this context, this paper proposes a multidimensional and explainable deep learning framework for fast and interpretable EEG decoding. In particular, EEG signals are projected into the spatial-spectral-temporal domain and processed by a custom three-dimensional (3D) convolutional neural network, here referred to as EEGCubeNet. The method has been validated on EEGs recorded during motor BCI experiments: hand-open (HO) and hand-close (HC) movement planning was investigated by discriminating each from the absence of movement preparation (resting state, RE). The proposed method is based on global-to-subject-specific fine-tuning: the model is first trained globally on a population of subjects and then fine-tuned on the final user, significantly reducing adaptation time. Experimental results demonstrate that EEGCubeNet achieves state-of-the-art performance (accuracies of [Formula: see text] and [Formula: see text] on the HC-versus-RE and HO-versus-RE binary classification tasks, respectively) with reduced model complexity and training time. In addition, to enhance transparency, a 3D occlusion-sensitivity-analysis-based explainability method (here named 3D xAI-OSA) is introduced that generates relevance maps revealing the features most relevant to each prediction. The data and source code are available at https://github.com/AI-Lab-UniRC/EEGCubeNet.
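
The abstract does not spell out the network layout, but a minimal PyTorch sketch of a 3D CNN operating on spatial-spectral-temporal EEG cubes might look as follows. All layer sizes, the class name Simple3DEEGNet, and the input dimensions are illustrative assumptions, not the published EEGCubeNet configuration; see the repository for the actual model.

    import torch
    import torch.nn as nn

    class Simple3DEEGNet(nn.Module):
        """Minimal 3D CNN over (spatial x spectral x temporal) EEG cubes.

        Input shape: (batch, 1, S, F, T), where S is a spatial axis over
        electrodes, F is frequency bins, and T is time samples.
        Sizes are illustrative only, not the published EEGCubeNet config.
        """

        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1),
                nn.BatchNorm3d(8),
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1),
                nn.BatchNorm3d(16),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),  # collapse to one value per channel
            )
            self.classifier = nn.Linear(16, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.features(x).flatten(1)
            return self.classifier(z)

    # Example: a batch of 4 cubes (8 spatial rows, 16 frequency bins, 128 samples).
    model = Simple3DEEGNet(n_classes=2)
    logits = model(torch.randn(4, 1, 8, 16, 128))
    print(logits.shape)  # torch.Size([4, 2])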
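The global-to-subject-specific fine-tuning could be sketched as below, under the common assumption that the feature extractor trained on the subject population is frozen and only the classifier head is adapted to the new user; the paper may fine-tune a different subset of layers. The filename global_weights.pt and the dummy data loader are hypothetical placeholders.

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for the new user's calibration trials (replace with real data).
    cubes = torch.randn(32, 1, 8, 16, 128)   # 32 trials, cube-shaped input
    labels = torch.randint(0, 2, (32,))      # binary labels (e.g., HC vs. RE)
    subject_loader = DataLoader(TensorDataset(cubes, labels), batch_size=8)

    model = Simple3DEEGNet(n_classes=2)      # class from the sketch above
    # model.load_state_dict(torch.load("global_weights.pt"))  # hypothetical
    # weights obtained from the global, multi-subject training stage

    # Freeze the shared feature extractor; adapt only the classifier head,
    # which keeps per-subject calibration fast.
    for p in model.features.parameters():
        p.requires_grad = False

    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()

    for epoch in range(5):                   # a few epochs on the user's data
        for x, y in subject_loader:
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()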
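Finally, 3D xAI-OSA is described only at a high level; a generic 3D occlusion sensitivity analysis, which it appears to build on, slides an occluding patch over the input cube and records the drop in the predicted class probability. The function name occlusion_relevance, the patch size, and the zero baseline are assumptions for illustration.

    import torch

    @torch.no_grad()
    def occlusion_relevance(model, cube, target, patch=(2, 4, 16), baseline=0.0):
        """Slide an occluding patch over a (1, S, F, T) cube and record how
        much the target-class probability drops: a larger drop marks a more
        relevant region. Generic sketch, not the paper's exact 3D xAI-OSA.
        """
        model.eval()
        base = torch.softmax(model(cube.unsqueeze(0)), dim=1)[0, target]
        _, S, F, T = cube.shape
        relevance = torch.zeros(S, F, T)
        for s in range(0, S, patch[0]):
            for f in range(0, F, patch[1]):
                for t in range(0, T, patch[2]):
                    occluded = cube.clone()
                    occluded[:, s:s+patch[0], f:f+patch[1], t:t+patch[2]] = baseline
                    p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target]
                    relevance[s:s+patch[0], f:f+patch[1], t:t+patch[2]] = base - p
        return relevance  # 3D relevance map aligned with the input cube

    # Usage: relevance = occlusion_relevance(model, cubes[0], target=1)

The returned map has the same spatial-spectral-temporal layout as the input, so it can be visualized slice by slice to inspect which electrodes, frequency bands, and time windows drive each prediction.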