EMAGE: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling
We propose EMAGE, a framework to generate full-body human gestures from audio and masked gestures, encompassing facial, local body, hand, and global movements. To achieve this, we first introduce BEAT2 (BEAT-SMPLX-FLAME), a new mesh-level holistic co-speech dataset. BEAT2 combines a MoShed SMPL-X body with FLAME head parameters and further refines the modeling of head, neck, and finger movements, offering a community-standardized, high-quality 3D motion-capture dataset. EMAGE leverages masked body gesture priors during training to boost inference performance. It employs a Masked Audio Gesture Transformer that enables joint training on audio-to-gesture generation and masked gesture reconstruction, effectively encoding audio and body gesture hints. Body hints encoded from masked gestures are then separately employed to generate facial and body movements. Moreover, EMAGE adaptively merges speech features from the audio's rhythm and content and utilizes four compositional VQ-VAEs to enhance the fidelity and diversity of the results. Experiments demonstrate that EMAGE generates holistic gestures with state-of-the-art performance and flexibly accepts predefined spatial-temporal gesture inputs, producing complete, audio-synchronized results. Our code and dataset are available.
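The masked gesture reconstruction objective mentioned in the abstract can be illustrated with a minimal sketch: random frames of a pose sequence are hidden, and the reconstruction loss is computed only on the hidden frames. This is a toy illustration under assumed conventions, not the authors' implementation; the names `mask_frames` and `masked_reconstruction_loss` are hypothetical, and zeros stand in for a learned mask token.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_frames(motion, mask_ratio=0.4, rng=rng):
    """Randomly hide whole frames of a gesture sequence.

    motion: (T, D) array of per-frame pose features.
    Returns the corrupted sequence and a boolean mask (True = hidden).
    """
    T = motion.shape[0]
    n_masked = max(1, int(round(T * mask_ratio)))
    idx = rng.choice(T, size=n_masked, replace=False)
    mask = np.zeros(T, dtype=bool)
    mask[idx] = True
    corrupted = motion.copy()
    corrupted[mask] = 0.0  # placeholder mask token (zeros in this sketch)
    return corrupted, mask

def masked_reconstruction_loss(pred, target, mask):
    """MSE computed only over the masked frames, as in masked modeling."""
    diff = (pred[mask] - target[mask]) ** 2
    return float(diff.mean())

# Toy usage: 60 frames of 12-dim pose features.
motion = rng.standard_normal((60, 12))
corrupted, mask = mask_frames(motion, mask_ratio=0.4)
# A perfect reconstruction yields zero loss on the masked frames.
loss_perfect = masked_reconstruction_loss(motion, motion, mask)
```

In the full method, a transformer would receive `corrupted` together with audio features and be trained to restore the hidden frames, so the model learns to exploit both audio and surrounding body-gesture context.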
Author(s): Haiyang Liu, Zihao Zhu, Giorgio Becherini, Yichen Peng, Mingyang Su, You Zhou, Xuefei Zhe, Naoya Iwamoto, Bo Zheng, and Michael J. Black
Book Title: IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR)
Year: 2024
Month: June
Department(s): Perceiving Systems
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Conference
Event Name: CVPR 2024
Event Place: Seattle, USA
Links: arXiv, project, dataset, code, gradio, colab, video
BibTex:
@inproceedings{emage:cvpr:2024,
  title = {{EMAGE}: Towards Unified Holistic Co-Speech Gesture Generation via Expressive Masked Audio Gesture Modeling},
  author = {Liu, Haiyang and Zhu, Zihao and Becherini, Giorgio and Peng, Yichen and Su, Mingyang and Zhou, You and Zhe, Xuefei and Iwamoto, Naoya and Zheng, Bo and Black, Michael J.},
  booktitle = {IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2024},
  doi = {},
  month_numeric = {6}
}