Enhancing EEG-Based Emotion Classification by Refining the Spatial Precision of Brain Activity
Advancements in neuroscience and deep learning have significantly enhanced bio-signal-based emotion recognition, a critical component of Brain-Machine Interface (BMI) applications in healthcare, human-computer interaction, and human-AI assistant communication. Previous studies have proposed manually mapping EEG electrodes onto matrix grids and employing Convolutional Neural Networks (CNNs) to recognize spatial patterns of EEG activity. However, such manual mapping limits spatial precision and introduces inefficiencies. To address these challenges, this study proposes automated channel mapping methods based on Orthographic Projection and Stereographic Projection, using Differential Entropy and Power Spectral Density features smoothed with Linear Dynamical Systems. A 3-branch multiscale CNN was trained on an open-source dataset using a 5-fold cross-validation approach. Experimental results demonstrate that higher-resolution grids (16×16, 24×24) with automated projections significantly outperform manual mappings, achieving up to a 4.06% improvement in classification accuracy (p < 0.05). These results indicate that enhancing the spatial precision of EEG data improves emotion classification, establishing automated spatial mapping as an advancement in EEG-based emotion recognition.

Clinical Relevance - Improvements in emotion classification accuracy can facilitate more reliable diagnostic tools and personalized therapeutic interventions for mental health disorders such as depression and anxiety.
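
The following is a minimal sketch of the channel-projection idea described above, assuming unit-sphere 3D electrode coordinates and one Differential Entropy value per channel; the function names, the random placeholder coordinates, and the min-max grid normalization are illustrative assumptions rather than the paper's exact pipeline.

    import numpy as np

    def orthographic_projection(xyz):
        # Orthographic mapping: drop the z-axis, so (x, y, z) on the scalp sphere becomes (x, y).
        return xyz[:, :2]

    def stereographic_projection(xyz):
        # Stereographic mapping from the sphere's lowest point onto the equatorial plane:
        # (x, y, z) -> (x / (1 + z), y / (1 + z)).
        x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
        return np.stack([x / (1.0 + z), y / (1.0 + z)], axis=1)

    def features_to_grid(coords_2d, feature_values, grid_size=16):
        # Scatter per-channel feature values onto a grid_size x grid_size image;
        # cells with no electrode stay zero (interpolation would be one possible refinement).
        grid = np.zeros((grid_size, grid_size), dtype=np.float32)
        mins, maxs = coords_2d.min(axis=0), coords_2d.max(axis=0)
        norm = (coords_2d - mins) / (maxs - mins + 1e-12)
        idx = np.clip(np.round(norm * (grid_size - 1)).astype(int), 0, grid_size - 1)
        for (col, row), value in zip(idx, feature_values):
            grid[row, col] = value
        return grid

    # Toy usage: 62 electrodes with random upper-hemisphere positions and
    # Gaussian differential-entropy features DE = 0.5 * ln(2 * pi * e * variance).
    rng = np.random.default_rng(0)
    xyz = rng.normal(size=(62, 3))
    xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
    xyz[:, 2] = np.abs(xyz[:, 2])
    de = 0.5 * np.log(2 * np.pi * np.e * rng.uniform(0.5, 2.0, size=62))
    image = features_to_grid(stereographic_projection(xyz), de, grid_size=16)
    print(image.shape)  # (16, 16) input map, e.g., one channel fed to the multiscale CNN

Raising grid_size (e.g., to 24) spreads the projected electrodes over more cells, which is the sense in which the higher-resolution grids preserve finer spatial distinctions between neighboring channels.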