
Title:
An Explainable 3D-Deep Learning Model for EEG Decoding in Brain-Computer Interface Applications.
Authors:
Suffian M; DIIES, University Mediterranea of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria 89122, Italy., Ieracitano C; DICMAPI, University of Naples 'Federico II', Piazzale Vincenzo Tecchio, 80, Napoli 80125, Italy., Morabito FC; DICEAM, Mediterranea University of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria 89122, Italy., Mammone N; DICEAM, Mediterranea University of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria 89122, Italy.
Source:
International journal of neural systems [Int J Neural Syst] 2025 Dec 30; Vol. 35 (13), pp. 2550073. Date of Electronic Publication: 2025 Oct 18.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: World Scientific Pub. Co Country of Publication: Singapore NLM ID: 9100527 Publication Model: Print-Electronic Cited Medium: Internet ISSN: 1793-6462 (Electronic) Linking ISSN: 01290657 NLM ISO Abbreviation: Int J Neural Syst Subsets: MEDLINE
Imprint Name(s):
Original Publication: Singapore ; Teaneck, N.J. : World Scientific Pub. Co., c1989-
Contributed Indexing:
Keywords: 3D convolutional neural networks; Electroencephalography; brain–computer interfaces; explainable artificial intelligence
Entry Date(s):
Date Created: 20251019 Date Completed: 20251215 Latest Revision: 20251215
Update Code:
20251216
DOI:
10.1142/S012906572550073X
PMID:
41109958
Database:
MEDLINE

Abstract:

Decoding electroencephalographic (EEG) signals is of key importance in the development of brain-computer interface (BCI) systems. However, high inter-subject variability in EEG signals requires user-specific calibration, which can be time-consuming and can limit the applicability of deep learning approaches, given the large amounts of data generally needed to train such models properly. In this context, this paper proposes a multidimensional and explainable deep learning framework for fast and interpretable EEG decoding. In particular, EEG signals are projected into the spatial-spectral-temporal domain and processed by a custom three-dimensional (3D) convolutional neural network, here referred to as EEGCubeNet. The method has been validated on EEGs recorded during motor BCI experiments: hand open (HO) and hand close (HC) movement planning was investigated by discriminating each from the absence of movement preparation (resting state, RE). The proposed method is based on global-to-subject-specific fine-tuning: the model is first trained globally on a population of subjects and then fine-tuned on the final user, significantly reducing adaptation time. Experimental results demonstrate that EEGCubeNet achieves state-of-the-art performance (accuracies of [Formula: see text] and [Formula: see text] for the HC versus RE and HO versus RE binary classification tasks, respectively) with reduced framework complexity and reduced training time. In addition, to enhance transparency, a 3D occlusion-sensitivity-analysis-based explainability method (here named 3D xAI-OSA) is introduced; it generates relevance maps revealing the features most relevant to each prediction. The data and source code are available at the following link: https://github.com/AI-Lab-UniRC/EEGCubeNet.
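The 3D xAI-OSA method described above is based on occlusion sensitivity analysis applied to the 3D spatial-spectral-temporal input. The snippet below is a minimal, generic sketch of that idea, not the authors' implementation: a cube-shaped patch is slid across a 3D volume, each region is masked in turn, and the drop in the model's score for a target class is accumulated into a relevance map. The function name, patch size, stride, and the toy predictor are all illustrative assumptions.

```python
import numpy as np

def occlusion_sensitivity_3d(volume, predict_fn, target_class,
                             patch=(4, 4, 4), stride=(4, 4, 4), fill=0.0):
    """Slide a 3D occluding patch over `volume`; record, per voxel, the drop in
    the model's score for `target_class`. Larger drops mark more relevant regions."""
    base = predict_fn(volume)[target_class]          # unoccluded reference score
    relevance = np.zeros(volume.shape, dtype=float)
    counts = np.zeros(volume.shape, dtype=float)
    depth, height, width = volume.shape
    for z in range(0, depth - patch[0] + 1, stride[0]):
        for y in range(0, height - patch[1] + 1, stride[1]):
            for x in range(0, width - patch[2] + 1, stride[2]):
                occluded = volume.copy()
                occluded[z:z + patch[0], y:y + patch[1], x:x + patch[2]] = fill
                drop = base - predict_fn(occluded)[target_class]
                relevance[z:z + patch[0], y:y + patch[1], x:x + patch[2]] += drop
                counts[z:z + patch[0], y:y + patch[1], x:x + patch[2]] += 1
    # Average over overlapping patches (counts can be zero at uncovered edges)
    return relevance / np.maximum(counts, 1.0)

# Toy usage: a volume whose informative content sits in one corner, and a
# stand-in "classifier" (hypothetical, for illustration only) that scores
# class 0 by the mean of that corner region.
vol = np.zeros((8, 8, 8))
vol[0:4, 0:4, 0:4] = 1.0
predict = lambda v: np.array([v[0:4, 0:4, 0:4].mean(), 1.0 - v[0:4, 0:4, 0:4].mean()])
relevance_map = occlusion_sensitivity_3d(vol, predict, target_class=0)
# Occluding the informative corner yields the largest score drop, so the
# relevance map peaks there, mirroring how 3D xAI-OSA highlights the
# spatial-spectral-temporal regions driving a prediction.
```

In the paper's setting the three axes would correspond to electrodes, frequency bands, and time windows, and `predict_fn` would wrap the trained EEGCubeNet model.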