Multivariate Models for Decoding Hearing Impairment using EEG Gamma-Band Power Spectral Density


Speech-in-noise (SIN) comprehension declines with age, and these declines have been linked to social isolation, depression, and dementia in the elderly. In this work, we build models to distinguish normal hearing (NH) from mild hearing impairment (HI) using several families of machine learning. We compute band-wise power spectral density (PSD) of source-derived EEGs as features for support vector machine (SVM), k-nearest neighbors (KNN), and AdaBoost classifiers, and compare their performance while listeners perceived clear or noise-degraded speech. Combining features from all frequency bands over the whole brain, the SVM achieved the best performance: group classification accuracy was 94.90% [area under the curve (AUC) 94.75%; F1-score 95.00%] for clear speech perception and 92.52% (AUC 91.12%; F1-score 93.00%) for noise-degraded speech perception. Remarkably, analysis of individual frequency bands on whole-brain data showed that the γ band best segregated the groups, with an SVM accuracy of 96.78% (AUC 96.79%) for clear speech and a slightly lower accuracy of 93.62% (AUC 93.17%) for noise-degraded speech. A separate analysis of left-hemisphere (LH) and right-hemisphere (RH) data showed that LH activity is a better predictor of group membership than RH activity, consistent with LH dominance in auditory-linguistic processing. Our results demonstrate that spectral features of the γ band can differentiate NH and HI older adults in terms of their ability to process speech sounds. These findings could inform attentional and assistive listening devices that selectively amplify specific pitches over others.
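The pipeline described above, band-wise PSD features fed to an SVM, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the sampling rate, band edges, Welch parameters, and the synthetic two-group EEG data are all assumptions for demonstration.

```python
# Hedged sketch: band-wise PSD features from EEG epochs + SVM classification.
# All parameters (sampling rate, band edges, toy data) are illustrative and
# do not reproduce the paper's pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 250  # Hz, assumed sampling rate
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 50)}  # assumed band edges

def band_powers(epoch, fs=FS):
    """Mean Welch PSD within each band for one epoch (channels x samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)  # psd: channels x freqs
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)  # n_channels * n_bands features

# Toy data: two synthetic "groups" differing only in 40 Hz (gamma) power.
rng = np.random.default_rng(0)
n_epochs, n_ch, n_samp = 40, 4, 2 * FS
t = np.arange(n_samp) / FS
X, y = [], []
for label in (0, 1):
    for _ in range(n_epochs):
        noise = rng.standard_normal((n_ch, n_samp))
        gamma = (0.5 + 2.0 * label) * np.sin(2 * np.pi * 40 * t)
        X.append(band_powers(noise + gamma))
        y.append(label)
X, y = np.array(X), np.array(y)

# RBF SVM with 5-fold cross-validation on the band-power features.
clf = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Because the synthetic groups differ strongly in gamma-band power, the classifier separates them almost perfectly; with real source-derived EEG, preprocessing, feature scaling, and hyperparameter tuning would matter far more.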

Publication Title

Proceedings of the International Joint Conference on Neural Networks