Robust modeling of continuous 4-D affective space from EEG recording

Abstract

The inherently intangible nature, complexity, and context-specific interpretation of emotions make affective space difficult to quantify and model. Dimensional theory is one effective way to describe and model emotions. Despite recent advances in affective computing, modeling continuous affective space remains a challenge. Here, we present a computational framework to study the role of functional brain areas and frequency bands in modeling a 4-D continuous affective space (Valence, Arousal, Like, and Dominance). In particular, we used Electroencephalogram (EEG) recordings and adopted a recursive feature elimination (RFE) approach to select the frequency bands and electrode locations (functional areas) most relevant for predicting affective space. Empirical analyses on the DEAP dataset [1] reveal that only a small number of locations (7-12) and certain frequency bands carry most of the discriminative information. Using the selected features, we modeled the 4-D affective space using Support Vector Regression (SVR). Regression analysis shows that the Root Mean Square Error (RMSE) for Valence, Arousal, Dominance, and Like is 1.40, 1.23, 1.24, and 1.24, respectively. Besides SVR, the performance of feature fusion and ensemble classifiers was also compared to determine the model most robust against technical noise and individual variation. The prediction accuracy of the final model is up to 37% better than human judgment evaluated on the same dataset. A spillover effect of our approach may be the design of task-specific (e.g., emotion, memory capacity) EEG headsets with a minimal number of electrodes.
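The RFE-then-SVR pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data here are synthetic stand-ins for DEAP electrode/band-power features, and the feature counts and kernel choices are illustrative assumptions.

```python
# Sketch of the abstract's pipeline: recursive feature elimination (RFE)
# to select EEG features, then SVR to predict a continuous affective
# dimension (e.g., valence). Synthetic data stand in for DEAP features.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))  # 120 trials x 32 electrode/band features (illustrative)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=120)  # stand-in ratings

# RFE needs an estimator exposing coef_, so a linear-kernel SVR ranks features.
selector = RFE(SVR(kernel="linear"), n_features_to_select=10, step=1)
selector.fit(X, y)
X_sel = selector.transform(X)  # keep the 10 top-ranked features

# Fit the final regressor on the selected features and report training RMSE.
model = SVR(kernel="rbf").fit(X_sel, y)
rmse = mean_squared_error(y, model.predict(X_sel)) ** 0.5
print(f"selected {X_sel.shape[1]} features, train RMSE = {rmse:.3f}")
```

In practice, per-subject cross-validation and a grid search over SVR hyperparameters would be needed to reproduce results comparable to those reported.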

Publication Title

Proceedings - 2016 15th IEEE International Conference on Machine Learning and Applications, ICMLA 2016
