Inferring hearing loss from learned speech kernels
Does a hearing-impaired individual's speech reflect his hearing loss, and if it does, can the nature of the hearing loss be inferred from his speech? To investigate these questions, at least four hours of speech data were recorded from each of 37 adult individuals, both male and female, belonging to four classes: 7 with normal hearing, and 30 with severe-to-profound hearing impairment and high, medium, or low speech intelligibility. Acoustic kernels were learned for each individual by capturing the distribution of his speech data, represented as 20 ms windows. These kernels were evaluated using a set of neurophysiological metrics, namely, the distribution of characteristic frequencies, the equal-loudness contour, and the bandwidth and Q10 value of the tuning curve. Our experimental results reveal that a hearing-impaired individual's speech does reflect his hearing loss, provided the loss has considerably affected the intelligibility of his speech. For such individuals, the lack of tuning in any frequency range can be inferred from their learned speech kernels.
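One of the metrics named above, the Q10 value, is the standard sharpness measure of an auditory tuning curve: the characteristic frequency (CF, the frequency at the curve's tip) divided by the curve's bandwidth 10 dB above the tip threshold. The following is a minimal sketch of that computation on a synthetic V-shaped tuning curve; the function name, frequency grid, and curve shape are illustrative and not taken from the paper:

```python
import numpy as np

def q10(freqs_hz, thresholds_db):
    """Q10 sharpness of a tuning curve: CF / 10-dB bandwidth."""
    # Characteristic frequency (CF): frequency at the tuning-curve tip
    # (the minimum of the threshold curve).
    tip = np.argmin(thresholds_db)
    cf = freqs_hz[tip]
    # 10-dB bandwidth: width of the curve at 10 dB above the tip threshold.
    cutoff = thresholds_db[tip] + 10.0
    below = np.where(thresholds_db <= cutoff)[0]
    bandwidth = freqs_hz[below[-1]] - freqs_hz[below[0]]
    return cf / bandwidth

# Synthetic V-shaped tuning curve centred at 1 kHz (illustrative only):
# threshold rises 30 dB per octave away from the tip.
freqs = np.linspace(200, 5000, 481)          # 10 Hz grid
thr = 20 + 30 * np.abs(np.log2(freqs / 1000.0))
print(round(q10(freqs, thr), 2))             # prints 2.22
```

A sharply tuned curve (narrow 10-dB bandwidth relative to its CF) yields a high Q10; the paper's observation that low-intelligibility speakers lack tuning in certain frequency ranges would correspond to missing or broad (low-Q10) kernels there.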
Banerjee, B., Kapourchali, M., Najnin, S., Mendel, L., Lee, S., Patro, C., & Pousson, M. (2017). Inferring hearing loss from learned speech kernels. Proceedings - 2016 15th IEEE International Conference on Machine Learning and Applications, ICMLA 2016, 26-31. https://doi.org/10.1109/ICMLA.2016.113