Motor imagery (MI) decoding is an important component of brain-computer interface (BCI) research: it translates a subject's intentions into commands that external devices can execute. Traditional methods for discriminative feature extraction, such as common spatial pattern (CSP) and filter bank common spatial pattern (FBCSP), focus only on the energy features of the electroencephalography (EEG) signal and leave its temporal information unexplored. However, the temporal information of spatially filtered EEG may be critical to improving MI decoding performance. In this paper, we propose a deep learning approach termed filter-bank spatial filtering and temporal-spatial convolutional neural network (FBSF-TSCNN) for MI decoding, where the FBSF block transforms the raw EEG signals into an appropriate intermediate EEG representation, and the TSCNN block then decodes the intermediate EEG signals. Moreover, a novel stage-wise training strategy is proposed to mitigate the optimization difficulties of the TSCNN block when training samples are insufficient. First, the feature extraction layers are trained by optimizing a triplet loss. Then, the classification layers are trained by optimizing the cross-entropy loss. Finally, the entire network (TSCNN) is fine-tuned by the back-propagation (BP) algorithm. Experimental evaluations on the BCI IV 2a and SMR-BCI datasets reveal that the proposed stage-wise training strategy yields significant performance improvement compared with the conventional end-to-end training strategy, and the proposed approach is comparable with state-of-the-art methods.
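To make the three training stages concrete, below is a minimal PyTorch sketch of the stage-wise strategy. The `TSCNN` architecture shown here, the helper names `make_triplets` and `stage_wise_train`, and all hyperparameters (filter counts, kernel sizes, epochs, learning rates) are illustrative assumptions rather than the paper's specification; only the staging itself (triplet loss for the feature layers, cross-entropy for the classifier, then end-to-end fine-tuning) follows the abstract.

```python
import random
import torch
import torch.nn as nn

class TSCNN(nn.Module):
    """Illustrative temporal-spatial CNN; the real TSCNN layers are defined
    in the paper, not in this sketch. Defaults loosely mirror BCI IV 2a
    (22 channels, 4 classes) with an assumed 9-band filter bank."""
    def __init__(self, n_bands=9, n_channels=22, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=(1, 25)),     # temporal convolution
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),  # spatial convolution
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(32 * 16, n_classes)

    def forward(self, x):  # x: (batch, bands, channels, time)
        return self.classifier(self.features(x))

def make_triplets(loader):
    # Naive in-batch triplet sampling (illustrative): pair each sample with a
    # random same-class positive and a random different-class negative.
    for x, y in loader:
        a, p, n = [], [], []
        for i in range(len(y)):
            pos = [j for j in range(len(y)) if y[j] == y[i] and j != i]
            neg = [j for j in range(len(y)) if y[j] != y[i]]
            if pos and neg:
                a.append(i); p.append(random.choice(pos)); n.append(random.choice(neg))
        if a:
            yield x[a], x[p], x[n]

def stage_wise_train(model, loader, epochs=(50, 30, 20), lr=1e-3):
    triplet, ce = nn.TripletMarginLoss(margin=1.0), nn.CrossEntropyLoss()

    # Stage 1: train the feature extraction layers with the triplet loss.
    opt = torch.optim.Adam(model.features.parameters(), lr=lr)
    for _ in range(epochs[0]):
        for anchor, positive, negative in make_triplets(loader):
            loss = triplet(model.features(anchor),
                           model.features(positive),
                           model.features(negative))
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: keep the features fixed, train the classification layers
    # with the cross-entropy loss.
    opt = torch.optim.Adam(model.classifier.parameters(), lr=lr)
    for _ in range(epochs[1]):
        for x, y in loader:
            loss = ce(model.classifier(model.features(x).detach()), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 3: fine-tune the entire network end-to-end via back-propagation.
    opt = torch.optim.Adam(model.parameters(), lr=lr * 0.1)
    for _ in range(epochs[2]):
        for x, y in loader:
            loss = ce(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
```

Shaping the embedding with the triplet loss first means the classifier in stage 2 starts from features that already cluster by class, which is what makes the stage-wise strategy attractive when training samples are scarce.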