Deep learning the dynamic appearance and shape of facial action units
Authors
Shashank Jaiswal; Michel F. Valstar
Abstract
Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and low intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly learns shape, appearance and dynamics in a deep learning manner. In addition, we introduce a novel way to encode shape features using binary image masks computed from the locations of facial landmarks. We show that the combination of dynamic CNN features and Bi-directional Long Short-Term Memory excels at modelling the temporal information. We thoroughly evaluate the contributions of each component in our system and show that it achieves state-of-the-art performance on the FERA-2015 Challenge dataset.
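The sketch below illustrates the two ideas named in the abstract: encoding facial landmark locations as a binary image mask, and running a per-frame CNN whose features feed a bidirectional LSTM for temporal AU detection. It is a minimal illustration in PyTorch, not the authors' implementation; the layer sizes, mask resolution, two-channel input, and number of action units are assumptions made for the example.

```python
# Minimal sketch (not the authors' exact architecture): rasterise facial landmarks
# into a binary shape mask and feed per-frame CNN features to a bidirectional LSTM.
# Layer sizes, mask resolution, and the number of AUs are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def landmarks_to_binary_mask(landmarks, size=112, radius=2):
    """Rasterise (x, y) landmark locations into a binary image mask."""
    mask = np.zeros((size, size), dtype=np.float32)
    for x, y in landmarks:
        x, y = int(round(x)), int(round(y))
        mask[max(0, y - radius):y + radius + 1, max(0, x - radius):x + radius + 1] = 1.0
    return mask

class CnnBlstmAuDetector(nn.Module):
    """Per-frame CNN over appearance + shape-mask channels, BLSTM over time."""
    def __init__(self, num_aus=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                      # input: 2 channels (grey image, shape mask)
            nn.Conv2d(2, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),     # -> 64 * 4 * 4 features per frame
        )
        self.blstm = nn.LSTM(64 * 4 * 4, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_aus)     # per-frame AU activation scores

    def forward(self, frames):                         # frames: (batch, time, 2, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.blstm(feats)                     # temporal modelling in both directions
        return self.head(out)                          # (batch, time, num_aus)

# Example: a batch of 4 clips, 16 frames each, 112x112 inputs.
model = CnnBlstmAuDetector()
clips = torch.randn(4, 16, 2, 112, 112)
print(model(clips).shape)  # torch.Size([4, 16, 6])
```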
Citation
Jaiswal, S., & Valstar, M. F. (2016). Deep learning the dynamic appearance and shape of facial action units. In IEEE Winter Conference on Applications of Computer Vision (WACV).
| Field | Value |
| --- | --- |
| Conference Name | Winter Conference on Applications of Computer Vision (WACV) |
| End Date | Mar 9, 2016 |
| Publication Date | Jan 1, 2016 |
| Deposit Date | Jan 21, 2016 |
| Publicly Available Date | Jan 21, 2016 |
| Peer Reviewed | Peer Reviewed |
| Public URL | http://eprints.nottingham.ac.uk/id/eprint/31301 |
| Copyright Statement | Copyright information regarding this work can be found at the following address: http://eprints.nottingham.ac.uk/end_user_agreement.pdf |
Files
paper.pdf (457 KB, PDF)
You might also like
NottReal: A Tool for Voice-based Wizard of Oz studies
(2020)
Conference Proceeding
Clinical Scene Segmentation with Tiny Datasets
(2019)
Conference Proceeding
Dynamic Facial Models for Video-based Dimensional Affect Estimation
(2019)
Conference Proceeding
Digital innovations in L2 motivation: harnessing the power of the Ideal L2 Self
(2018)
Journal Article