Visual Speech Benefit in Clear and Degraded Speech Depends on the Auditory Intelligibility of the Talker and the Number of Background Talkers

Blackburn, Catherine L.; Kitterick, Pádraig T.; Jones, Gary; Sumner, Christian J.; Stacey, Paula C.

Abstract

Perceiving speech in background noise presents a significant challenge to listeners. Intelligibility can be improved by seeing the face of a talker, which is of particular value to hearing-impaired people and users of cochlear implants. It is well known that auditory-only speech understanding depends on factors beyond audibility, but how these factors affect the audio-visual integration of speech is poorly understood. We investigated audio-visual integration when either the interfering background speech (Experiment 1) or the intelligibility of the target talkers (Experiment 2) was manipulated. Clear speech was also contrasted with sine-wave vocoded speech to mimic the loss of temporal fine structure that occurs with a cochlear implant. Experiment 1 showed that, for clear speech, the visual speech benefit was unaffected by the number of background talkers; for vocoded speech, a larger benefit was found when there was only one background talker. Experiment 2 showed that the visual speech benefit depended on the auditory intelligibility of the talker and increased as intelligibility decreased. Degrading the speech by vocoding resulted in even greater benefit from visual speech information. A single ‘independent noise’ Signal Detection Theory model predicted the overall visual speech benefit in some conditions but could not predict the different levels of benefit across variations in the background or target talkers. This suggests that, like audio-only speech intelligibility, the integration of audio-visual speech cues may depend functionally on factors other than audibility and task difficulty, and that clinicians and researchers should consider carefully the characteristics of their stimuli when assessing audio-visual integration.

Journal Article Type: Article
Acceptance Date: Feb 20, 2019
Online Publication Date: Mar 26, 2019
Publication Date: Mar 26, 2019
Deposit Date: Feb 26, 2019
Publicly Available Date: Feb 26, 2019
Journal: Trends in Hearing
Electronic ISSN: 2331-2165
Publisher: SAGE Publications
Peer Reviewed: Yes
Volume: 23
Pages: 1-14
DOI: https://doi.org/10.1177/2331216519837866
Public URL: https://nottingham-repository.worktribe.com/output/1584148
Publisher URL: https://journals.sagepub.com/doi/full/10.1177/2331216519837866
