Auditory categorical processing for speech is modulated by inherent musical listening skills


During successful auditory perception, the human brain classifies diverse acoustic information into meaningful groupings, a process known as categorical perception (CP). Intense auditory experiences (e.g., musical training and language expertise) shape the categorical representations necessary for speech identification and novel sound-to-meaning learning, but little is known about the role of innate auditory function in CP. Here, we tested whether listeners vary in their intrinsic abilities to categorize complex sounds and examined individual differences in the underlying auditory brain mechanisms. To this end, we recorded EEGs in individuals without formal music training but who differed in their inherent auditory perceptual abilities (i.e., musicality) as they rapidly categorized sounds along a speech vowel continuum. Behaviorally, individuals with naturally more adept listening skills ("musical sleepers") showed enhanced speech categorization in the form of faster identification. At the neural level, inverse modeling parsed the EEG data into distinct sources to evaluate the contribution of region-specific activity [i.e., auditory cortex (AC)] to categorical neural coding. We found stronger categorical processing in musical sleepers around the timeframe of P2 (∼180 ms) in the right AC compared to those with poorer musical listening abilities. Our data show that listeners with naturally more adept auditory skills map sound to meaning more efficiently than their peers, which may aid novel sound learning related to language and music acquisition.