EEMCS EPrints Service


27134 A Framework for Joint Estimation and Guided Annotation of Facial Action Unit Intensity

Walecki, R. and Rudovic, O. and Pantic, M. and Pavlovic, V. and Cohn, J.F. (2016) A Framework for Joint Estimation and Guided Annotation of Facial Action Unit Intensity. In: Proceedings of IEEE International Conference on Computer Vision & Pattern Recognition, CVPR 2016, 26 June - 1 July 2016, Las Vegas, NV, USA. pp. 1460-1468. IEEE Computer Vision and Pattern Recognition. ISSN 2160-7516 ISBN 978-1-5090-1438-5

Full text available as: PDF (798 Kb)

Official URL: https://doi.org/10.1109/CVPRW.2016.183

Abstract

Manual annotation of facial action units (AUs) is highly tedious and time-consuming. Various methods for automatic coding of AUs have been proposed; however, their performance is still far below that attained by expert human coders. Several attempts have been made to leverage these methods to reduce the burden of manual coding of AU activations (presence/absence). Nevertheless, this has not been exploited in the context of AU intensity coding, which is a far more difficult task. To this end, we propose an expert-driven probabilistic approach for joint modeling and estimation of AU intensities. Specifically, we introduce a Conditional Random Field model for joint estimation of AU intensity that updates its predictions in an iterative fashion by relying on the expert knowledge of human coders. We show in our experiments on two publicly available datasets of AU intensity (DISFA and FERA2015) that the proposed approach can significantly facilitate the AU coding process, allowing human coders to make decisions about target AU intensity faster.
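
The abstract describes an expert-in-the-loop workflow: a joint model estimates all AU intensities, a human coder confirms or corrects a few of them, and the model re-estimates the rest. The sketch below is only a rough illustration of that workflow, not the paper's Conditional Random Field; it substitutes a toy joint Gaussian model, and the class and variable names (JointAUIntensityModel, observed, etc.) are hypothetical.

    # Illustrative sketch of guided annotation (NOT the paper's CRF): a toy joint
    # Gaussian over AU intensities shows how coder-provided labels for some AUs
    # can update the model's estimates for the remaining AUs.
    import numpy as np

    class JointAUIntensityModel:
        """Toy joint model over AU intensities (0-5 scale) with a fixed covariance,
        standing in for the paper's CRF purely for illustration."""

        def __init__(self, mean, cov):
            self.mean = np.asarray(mean, dtype=float)
            self.cov = np.asarray(cov, dtype=float)

        def predict(self, observed):
            """Posterior mean for all AUs given coder-provided intensities.
            `observed` maps AU index -> intensity entered by the human coder."""
            if not observed:
                return self.mean.copy()
            obs_idx = np.array(sorted(observed))
            hid_idx = np.array([i for i in range(len(self.mean)) if i not in observed])
            y_obs = np.array([observed[i] for i in obs_idx])
            # Standard Gaussian conditioning: mu_h + C_ho C_oo^{-1} (y_o - mu_o)
            c_oo = self.cov[np.ix_(obs_idx, obs_idx)]
            c_ho = self.cov[np.ix_(hid_idx, obs_idx)]
            cond = self.mean.copy()
            cond[obs_idx] = y_obs
            cond[hid_idx] = self.mean[hid_idx] + c_ho @ np.linalg.solve(
                c_oo, y_obs - self.mean[obs_idx])
            return np.clip(cond, 0, 5)

    # Guided-annotation loop: the coder labels one AU per round (simulated here),
    # and the model re-estimates the rest, so fewer manual decisions are needed.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mean = np.array([1.0, 2.0, 0.5, 3.0])      # prior AU intensities
        a = rng.normal(size=(4, 4))
        cov = a @ a.T + np.eye(4)                   # correlated AUs
        model = JointAUIntensityModel(mean, cov)
        coder_labels = {0: 3, 1: 4}                 # labels the coder would give
        observed = {}
        for au, intensity in coder_labels.items():  # one coder decision per round
            observed[au] = intensity
            print(f"after coding AU {au}:", np.round(model.predict(observed), 2))

The design point being illustrated is that AUs are modeled jointly, so each expert decision propagates to the still-unlabeled AUs and narrows the remaining coding effort.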

Item Type: Conference or Workshop Paper (Full Paper, Talk)
Research Group: EWI-HMI: Human Media Interaction
Research Program: CTIT-General
Research Project: DE-ENIGMA: Multi-modal Human-Robot Interaction for Teaching and Expanding Social Imagination in Autistic Children
Uncontrolled Keywords: facial action units, human coders
ID Code: 27134
Status: Published
Deposited On: 15 March 2017
Refereed: Yes
International: Yes
