Result: Transformer-based hybrid systems to combat BCI illiteracy.
Original Publication: New York, Pergamon Press.
Further information
This study addresses the challenge of enhancing Brain-Computer Interfaces (BCIs), focusing on low signal-to-noise ratios and "BCI illiteracy", a phenomenon affecting up to 20% of users. Transformer-based models show promise but remain underexplored. Three experiments were conducted. Experiment A assessed the performance of architectures combining convolutional and Transformer blocks for binary motor imagery (MI) classification. Experiment B introduced a hybrid system that refines both block types and adds a Noise Focus Block to inject stochastic noise, enhancing the robustness of multi-class classification. Experiment C evaluated the resulting architectures on 106 subjects, focusing on robustness across weak and strong learners. In Experiment A, the best networks achieved a validation accuracy of 0.914 and a loss of 0.146 (p = 0.000967, F = 12.675). In Experiment B, the proposed architecture improved multi-class MI classification to 84.5% on Dataset II, significantly improving performance for BCI-illiterate users. Experiment C showed a kappa above 83%, a reduced standard deviation, and a highest validation accuracy of 88.69% across all individuals. The hybrid integration of Transformers, CNNs, and noise-resonance-based layers significantly enhances classification performance, particularly for weak BCI learners. Further research is recommended to optimize hybrid architectures and hyperparameter settings to overcome current limitations in BCI performance.
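The abstract does not specify how the Noise Focus Block operates internally. As a minimal sketch, assuming it injects zero-mean Gaussian noise into intermediate features during training (a stochastic-resonance-style regularizer) and passes features through unchanged at inference, the idea could look like the following; the function name, `sigma` default, and overall interface are illustrative assumptions, not details from the paper:

```python
import random

def noise_focus(features, sigma=0.1, training=True, rng=None):
    """Hypothetical sketch of a 'Noise Focus' layer.

    During training, add zero-mean Gaussian noise (std = sigma) to each
    feature value; at inference time, return the features unchanged.
    All names and defaults here are assumptions for illustration only.
    """
    if not training:
        return list(features)
    rng = rng or random.Random()
    return [x + rng.gauss(0.0, sigma) for x in features]

# Example usage with a seeded generator for reproducibility:
rng = random.Random(42)
clean = [0.5, -0.2, 1.0]
noisy = noise_focus(clean, sigma=0.1, training=True, rng=rng)
```

In a full hybrid network such a layer would sit between the convolutional and Transformer stages, perturbing the learned feature maps only while training so that the classifier becomes less sensitive to the low signal-to-noise ratios typical of EEG recordings.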
(Copyright © 2025. Published by Elsevier Ltd.)
Declaration of competing interest: The authors declare that there are no conflicts of interest.