Face Animation Parameter

A Face Animation Parameter (FAP) is a component of the MPEG-4 Face and Body Animation (FBA) International Standard (ISO/IEC 14496-1 and -2) developed by the Moving Picture Experts Group.[1] The FBA standard defines a way of virtually representing humans and humanoids that preserves visual speech intelligibility, conveys the mood and gestures of the speaker, and allows very low bitrate compression and transmission of the animation parameters.[2] FAPs control key feature points on a face model mesh that are used to produce animated visemes and facial expressions, as well as head and eye movement.[1] These feature points are part of the Face Definition Parameters (FDPs), also defined in the MPEG-4 standard.
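As a non-normative illustration of this relationship, the sketch below models a low-level FAP as a value attached to a single FDP feature point of the mesh. The class and field names are invented for the example and are not taken from the standard.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FeaturePoint:
    """An FDP feature point on the face model mesh (illustrative model only)."""
    label: str                               # MPEG-4 labels feature points with a group.index notation
    neutral_pos: Tuple[float, float, float]  # vertex position on the neutral face

@dataclass
class FaceAnimationParameter:
    """A low-level FAP: a displacement or rotation applied to one feature point."""
    index: int                   # FAP number in the standard's parameter table
    name: str                    # descriptive name, e.g. a jaw or lip movement
    feature_point: FeaturePoint  # the FDP feature point this FAP moves
    value: int                   # amplitude decoded from the FBA bitstream
```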

FAPs represent 66 displacements and rotations of the feature points relative to the neutral face position, which is defined by a closed mouth, eyelids tangent to the iris, gaze and head orientation straight ahead, upper and lower teeth touching, and the tongue touching the teeth.[3] The FAPs were designed to correspond closely to human facial muscle movements. In addition to animation, FAPs are used in automatic speech recognition[4] and biometrics.[5]
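In the standard, FAP amplitudes are expressed in Facial Animation Parameter Units (FAPUs), fractions of distances measured on the neutral face such as mouth width or eye separation, so that one FAP stream can animate face models of different proportions. The sketch below shows, under that description and with invented names and numbers, how a player might apply a single translational FAP to move a feature point away from its neutral position; it is not the standard's normative procedure.

```python
def apply_translational_fap(neutral_position, fap_value, fapu, direction):
    """Displace a feature point from its neutral position by one FAP.

    neutral_position : (x, y, z) of the feature point on the neutral face
    fap_value        : integer FAP amplitude decoded from the FBA bitstream
    fapu             : size of one FAP unit in model coordinates (a fraction
                       of a neutral-face distance such as mouth width)
    direction        : unit vector for this FAP's positive displacement axis
    """
    displacement = fap_value * fapu
    return tuple(n + displacement * d for n, d in zip(neutral_position, direction))

# Illustrative numbers only: an amplitude of 200 with a unit of 0.01 model
# units moves the feature point 2.0 units "down" from its neutral position.
print(apply_translational_fap((0.0, -4.0, 0.1), 200, 0.01, (0.0, -1.0, 0.0)))
# -> (0.0, -6.0, 0.1)
```

Rotational FAPs, such as those controlling head and eye orientation, would be handled analogously as angles rather than displacements.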

References

  1. Ostermann, Jörn (August 2002). "Chapter 2: Face Animation in MPEG-4". In Pandzic, Igor; Forchheimer, Robert (eds.). MPEG-4 Facial Animation: The Standard, Implementation and Applications. Wiley. pp. 17–55. ISBN 978-0-470-84465-6.
  2. Tao, Hai; Chen, H.H.; Wu, Wei; Huang, T.S. (1999). "Compression of MPEG-4 facial animation parameters for transmission of talking heads". IEEE Transactions on Circuits and Systems for Video Technology. IEEE Press. 9 (3): 264–276. doi:10.1109/76.752094.
  3. Petajan, Eric (September 2005). "MPEG-4 Face and Body Animation Coding Applied to HCI". In Kisačanin, B.; Pavlović, V.; Huang, T.S. (eds.). Real-Time Vision for Human-Computer Interaction. Springer. pp. 249–268. ISBN 0387276971.
  4. Petajan, Eric (January 1, 2009). "Chapter 4: Visual Speech and Gesture Coding Using the MPEG-4 Face and Body Animation Standard". In Wee-Chung Liew, Alan (ed.). Visual Speech Recognition: Lip Segmentation and Mapping. IGI Global. pp. 128–148. ISBN 9781605661872.
  5. Aleksic, P.S.; Katsaggelos, A.K. (November 2006). "Audio-Visual Biometrics". Proceedings of the IEEE. IEEE Press. 94 (11): 2025–2044. doi:10.1109/JPROC.2006.886017.

