Feature and Decision Levels Fusion for the Synergistic Analysis of Facial Expressions and EEG Signals in the Context of Discrete Emotion Recognition


Amit S. Ufade, V. J. Gond, Madhuri D. Kawade

Abstract

Emotion recognition for brain-computer interfaces is a challenging task that requires careful design if machines are to discern human emotions and respond appropriately. This work aims to improve on the efficacy of individual modalities by combining two distinct modalities for emotion recognition. Electroencephalogram (EEG) data and facial expressions are used as independent modalities for emotion identification, with each modality evaluated separately. To combine the modalities, decision-level and feature-level fusion algorithms are used. While EEG-derived emotions are primarily classified in the continuous valence-arousal domain, facial emotions are primarily classified as discrete emotions, which poses a considerable hindrance to the fusion process. The dataset presented by Ashford and Bird at UKCI-2019 [10] is used for the EEG experiments, and the CFEE dataset serves as the foundation for facial emotion recognition in the proposed work [9]. EEG signals are analyzed for statistical properties such as mean, standard deviation, skewness, and kurtosis, while the facial emotion dataset is used to identify various action units. Given the large number of features, redundant-feature removal approaches are applied to determine feature efficacy. Using the product rule, decision-level fusion achieves an accuracy of 80%. Feature-level fusion achieves 94.48% accuracy with KNN and 98.66% with SVM classification, which is higher than the individual average accuracies of 88.04% for facial expressions and 90.73% for EEG signals.
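To make the two fusion strategies mentioned in the abstract concrete, the sketch below illustrates (not the authors' actual code) how per-channel EEG statistical features, feature-level fusion by concatenation with an SVM/KNN classifier, and decision-level fusion via the product rule could be implemented. All array names, shapes, and classifier settings are hypothetical; in practice the EEG (valence-arousal) and facial (discrete-emotion) outputs must first be mapped to a common label set before the product rule can be applied.

```python
# Minimal illustrative sketch, assuming NumPy/SciPy/scikit-learn; names are placeholders.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier


def eeg_statistical_features(eeg_window):
    """Per-channel statistical features (mean, std, skewness, kurtosis)
    for one EEG window of shape (n_channels, n_samples)."""
    return np.concatenate([
        eeg_window.mean(axis=1),
        eeg_window.std(axis=1),
        skew(eeg_window, axis=1),
        kurtosis(eeg_window, axis=1),
    ])


def feature_level_fusion(eeg_feats, face_feats, labels):
    """Feature-level fusion: concatenate EEG and facial (action-unit)
    feature vectors and train one classifier on the fused representation.
    SVM is shown; KNeighborsClassifier is used analogously."""
    fused = np.hstack([eeg_feats, face_feats])   # (n_samples, d_eeg + d_face)
    clf = SVC(kernel="rbf", probability=True)    # hypothetical hyperparameters
    clf.fit(fused, labels)
    return clf


def product_rule_fusion(eeg_clf, face_clf, eeg_feats, face_feats):
    """Decision-level fusion with the product rule: multiply per-class
    posteriors from the two unimodal classifiers (same label set assumed),
    renormalise, and take the arg-max as the fused decision."""
    p_eeg = eeg_clf.predict_proba(eeg_feats)     # (n_samples, n_classes)
    p_face = face_clf.predict_proba(face_feats)
    p_fused = p_eeg * p_face
    p_fused /= p_fused.sum(axis=1, keepdims=True)
    return p_fused.argmax(axis=1)
```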
