Abstract
This paper presents a new method for analyzing and synthesizing facial expressions, in which a spatio-temporal gradient-based method (i.e., optical flow) is exploited to estimate the movement of facial feature points. We propose a method, called motion correlation, that improves on the conventional block-correlation method for obtaining motion vectors. The tracking of facial expressions under an active camera is also addressed. With the estimated motion vectors, a facial expression can be cloned by adjusting the existing 3-D facial model, or synthesized by using different facial models. The experimental results demonstrate that the proposed approach is feasible for applications such as low-bit-rate video coding and face animation.
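The abstract's baseline, conventional block correlation, estimates a block's motion vector by searching a neighborhood of the previous frame for the best-matching block. The paper's own motion-correlation refinement is not specified in the abstract, so the sketch below shows only the standard block-matching idea, using a sum-of-absolute-differences (SAD) criterion; the function name, block size, and search range are illustrative assumptions.

```python
import numpy as np

def block_motion_vector(prev, curr, top, left, block=8, search=4):
    """Estimate the motion vector of one block by exhaustive block matching.

    Illustrative sketch of conventional block correlation (not the paper's
    motion-correlation method): the block at (top, left) in the current
    frame is compared against candidate blocks in the previous frame
    within +/-search pixels, and the displacement minimizing the SAD
    matching cost is returned as (dy, dx).
    """
    ref = curr[top:top + block, left:left + block].astype(np.int64)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidates that fall outside the previous frame.
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cand = prev[y:y + block, x:x + block].astype(np.int64)
            cost = int(np.abs(ref - cand).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

For a frame pair where the current frame is the previous frame shifted by a known offset, the function recovers that offset (negated, since it points from the current block back into the previous frame).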
| Original language | English |
|---|---|
| Pages (from-to) | 347-354 |
| Number of pages | 8 |
| Journal | Proceedings of the International Conference on Tools with Artificial Intelligence |
| State | Published - 2002 |
| Event | 14th International Conference on Tools with Artificial Intelligence - Washington, DC, United States Duration: Nov 4 2002 → Nov 6 2002 |
| Title | Active tracking and cloning of facial expressions using spatio-temporal information |