
Deep word embeddings for visual speech recognition

Stafylakis, Themos; Tzimiropoulos, Georgios

Authors

Themos Stafylakis

Georgios Tzimiropoulos



Abstract

In this paper, we present a deep learning architecture for extracting word embeddings for visual speech recognition. The embeddings summarize the information of the mouth region that is relevant to the problem of word recognition, while suppressing other sources of variability such as speaker, pose, and illumination. The system comprises a spatiotemporal convolutional layer, a Residual Network, and bidirectional LSTMs, and is trained on the Lipreading In-The-Wild (LRW) database. We first show that the proposed architecture surpasses the state of the art on closed-set word identification, attaining an 11.92% error rate on a vocabulary of 500 words. We then examine the capacity of the embeddings to model words unseen during training: we deploy Probabilistic Linear Discriminant Analysis (PLDA) to model the embeddings and perform low-shot learning experiments on unseen words. The experiments demonstrate that word-level visual speech recognition is feasible even when the target words are not included in the training set.
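The abstract describes the recognition pipeline only at a high level. The following is a minimal PyTorch sketch of a network with the same overall shape (a spatiotemporal convolutional front-end, a 2D Residual Network applied per frame, and a bidirectional LSTM back-end); it is not the authors' implementation, and the ResNet-18 trunk, layer sizes, temporal average pooling, and 256-dimensional embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision

class WordEmbeddingNet(nn.Module):
    """Spatiotemporal conv front-end -> per-frame ResNet -> BiLSTM -> word embedding."""

    def __init__(self, num_words=500, emb_dim=256):
        super().__init__()
        # 3D convolution over a clip of grayscale mouth crops: (B, 1, T, H, W)
        self.front = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # 2D ResNet trunk applied to every frame independently
        trunk = torchvision.models.resnet18(weights=None)
        trunk.conv1 = nn.Conv2d(64, 64, kernel_size=7, stride=2, padding=3, bias=False)
        trunk.fc = nn.Identity()  # expose the 512-d pooled features
        self.trunk = trunk
        # bidirectional LSTM over the per-frame features
        self.lstm = nn.LSTM(512, 256, num_layers=2, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(512, emb_dim)        # the word embedding
        self.head = nn.Linear(emb_dim, num_words)  # closed-set 500-word classifier

    def forward(self, clips):                      # clips: (B, 1, T, H, W)
        f = self.front(clips)                      # (B, 64, T, H', W')
        b, c, t, h, w = f.shape
        f = f.transpose(1, 2).reshape(b * t, c, h, w)
        f = self.trunk(f).reshape(b, t, -1)        # (B, T, 512)
        h_seq, _ = self.lstm(f)                    # (B, T, 512)
        emb = self.proj(h_seq.mean(dim=1))         # temporal average pooling
        return emb, self.head(emb)                 # embedding + word logits
```

For the low-shot experiments, the abstract states that PLDA is used to model the embeddings of unseen words. The sketch below uses a simplified two-covariance Gaussian model with closed-form moment estimates, as a stand-in for a fully EM-trained PLDA: a new word is enrolled from a few embeddings, and test embeddings are scored by predictive log-likelihood. All class and function names here are hypothetical.

```python
import numpy as np

def gaussian_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian N(mean, cov) at x."""
    d = x.shape[-1]
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

class TwoCovPLDA:
    """x = m + e, with class mean m ~ N(mu, B) and residual e ~ N(0, W)."""

    def fit(self, X, y):
        self.mu = X.mean(axis=0)
        Xc = X - self.mu
        d = X.shape[1]
        W, means = np.zeros((d, d)), []
        for c in np.unique(y):
            Xi = Xc[y == c]
            means.append(Xi.mean(axis=0))
            W += (Xi - means[-1]).T @ (Xi - means[-1])
        self.W = W / len(X)                      # within-word covariance
        M = np.stack(means)
        self.B = M.T @ M / len(M)                # between-word covariance
        return self

    def enroll(self, X_new):
        """Posterior over the class mean of an unseen word from a few examples."""
        n = len(X_new)
        xbar = X_new.mean(axis=0) - self.mu
        Binv, Winv = np.linalg.inv(self.B), np.linalg.inv(self.W)
        C = np.linalg.inv(Binv + n * Winv)       # posterior covariance
        m_hat = C @ (n * (Winv @ xbar))          # posterior mean
        return m_hat, C

    def score(self, x_test, m_hat, C):
        """Predictive log-likelihood of a test embedding under the enrolled word."""
        return gaussian_logpdf(x_test - self.mu, m_hat, self.W + C)
```

A test clip would then be assigned to whichever enrolled unseen word gives the highest score, which is how the low-shot identification experiments described in the abstract can be read.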

Citation

Stafylakis, T., & Tzimiropoulos, G. (2018). Deep word embeddings for visual speech recognition. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).

Conference Name: IEEE International Conference on Acoustics, Speech, and Signal Processing
End Date: Apr 20, 2018
Acceptance Date: Jan 20, 2018
Publication Date: Apr 15, 2018
Deposit Date: Apr 13, 2018
Publicly Available Date: Apr 15, 2018
Peer Reviewed: Peer Reviewed
Keywords: Visual Speech Recognition, Lipreading, Word Embeddings, Deep Learning, Low-shot Learning
Public URL: https://nottingham-repository.worktribe.com/output/925071
