EEMCS EPrints Service
Walecki, R. and Rudovic, O. and Pantic, M. and Pavlovic, V. and Cohn, J.F. (2016) A Framework for Joint Estimation and Guided Annotation of Facial Action Unit Intensity. In: Proceedings of IEEE International Conference on Computer Vision & Pattern Recognition, CVPR 2016, 26 June - 1 July 2016, Las Vegas, NV, USA. pp. 1460-1468. IEEE Computer Vision and Pattern Recognition. ISSN 2160-7516 ISBN 978-1-5090-1438-5
Official URL: https://doi.org/10.1109/CVPRW.2016.183
Manual annotation of facial action units (AUs) is highly tedious and time-consuming. Various methods for automatic coding of AUs have been proposed; however, their performance still falls far below that attained by expert human coders. Several attempts have been made to leverage these methods to reduce the burden of manual coding of AU activations (presence/absence). Nevertheless, this has not been exploited in the context of AU intensity coding, which is a far more difficult task. To this end, we propose an expert-driven probabilistic approach for joint modeling and estimation of AU intensities. Specifically, we introduce a Conditional Random Field model for joint estimation of AU intensity that updates its predictions in an iterative fashion by relying on expert knowledge of human coders. We show in our experiments on two publicly available datasets of AU intensity (DISFA and FERA2015) that the AU coding process can be significantly facilitated by the proposed approach, allowing human coders to make decisions about target AU intensity faster.
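The guided-annotation loop described in the abstract can be sketched as follows. This is a toy illustration only, not the authors' CRF model: the `predict` function, its confidence heuristic, and all names and values are assumptions made for illustration. The point is the workflow — a joint model estimates all AU intensities, the coder confirms the least-confident one, and the model re-estimates the rest conditioned on the confirmed labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_aus = 5
# Hypothetical ground-truth AU intensities on the standard 0-5 scale.
true_intensity = np.array([3, 4, 0, 2, 5])

def predict(confirmed):
    """Return (estimates, confidences) given a dict of coder-confirmed labels.

    Toy stand-in for joint CRF inference: confirmed AUs are clamped to
    their labels; unconfirmed AUs get a noisy estimate whose confidence
    grows as more labels are confirmed (mimicking conditioning).
    """
    est = np.empty(n_aus)
    conf = np.empty(n_aus)
    for i in range(n_aus):
        if i in confirmed:
            est[i], conf[i] = confirmed[i], 1.0
        else:
            noise_scale = 1.0 / (1 + len(confirmed))  # more evidence -> less noise
            est[i] = np.clip(true_intensity[i] + rng.normal(0, noise_scale), 0, 5)
            conf[i] = 1 - noise_scale
    return est, conf

confirmed = {}
while len(confirmed) < n_aus:
    est, conf = predict(confirmed)
    # Ask the coder to label the AU the model is least certain about.
    target = min((i for i in range(n_aus) if i not in confirmed),
                 key=lambda i: conf[i])
    confirmed[target] = true_intensity[target]  # coder supplies the label
    print(f"coder labels AU {target}; model re-estimates the rest")
```

In the paper's actual setting, the joint model captures co-occurrence structure among AUs, so each confirmed label sharpens the predictions for the remaining AUs and reduces the number of decisions the coder must make from scratch.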