Electronic Theses and Dissertations


Jane A. Brown



Document Type


Degree Name

Doctor of Philosophy


Communication Sciences & Disorders

Committee Chair

Eugene Buder

Committee Member

Sarah E Warren

Committee Member

Deborah W Moncrieff

Committee Member

Gavin M Bidelman


In everyday life, we often face the challenge of comprehending speech amidst background noise, which frequently includes music. While extensive research investigates speech-in-noise and “cocktail party” perception with linguistic or synthetic music stimuli, the role of realistic background music remains underexplored. This dissertation comprises three complementary studies investigating the relationship between naturalistic background music and concurrent speech processing. In the first study, participants completed a speech recognition task while listening to either familiar or unfamiliar music. The songs were presented in three conditions: music with lyrics, instrumentals only, and isolated vocals. Speech recognition was poorest in music with vocals, likely due to informational masking. Familiar music was more distracting and impaired speech processing more than unfamiliar music. Interestingly, this negative familiarity effect occurred in both music with vocals and isolated instrumentals, possibly because listeners “sing along” with familiar music, introducing linguistic interference. The second study expanded upon these findings by exploring neural correlates. Participants listened to a continuous audiobook while again listening to familiar or unfamiliar music with or without vocals. Using multichannel EEG recordings and temporal response functions (TRFs), we modeled cortical tracking of the continuous speech envelope and analyzed responses around 100 milliseconds (corresponding to the auditory N1 wave). Response latencies to speech were less susceptible to informational masking by familiar music, indicating that unfamiliar music was a more difficult listening condition, especially for listeners with less musical ability. The final study used the same familiar/unfamiliar music paradigm but directed participants to attend to either continuous speech or song lyrics.
The modeled P1 (around 50 milliseconds) was larger in unfamiliar background music, indicating poorer speech encoding and increased attention to the music. For less-musical listeners, the N1 when tracking speech was prolonged when attending to the music as compared to the speech. Collectively, these results demonstrate that background music with lyrics impairs concurrent speech perception. The impact of music familiarity on speech is task-dependent. Moreover, speech-in-music listening is modulated by objective measures of musicality that may be independent of formal musical training. These findings expand our understanding of the intricate interplay between background music and speech processing in real-world scenarios.


Data is provided by the student.

Library Comment

Dissertation or thesis originally submitted to ProQuest.


Open Access