SynTalker: Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation

ACMMM 2024
Bohong Chen, Yumeng Li, Yao-Xiang Ding, Tianjia Shao, Kun Zhou
Zhejiang University, China

SynTalker generates co-speech full-body motion that follows user prompts.

Abstract

Current co-speech motion generation approaches usually focus on upper-body gestures following speech contents only, and lack support for elaborate control of synergistic full-body motion based on text prompts, such as talking while walking. The major challenges lie in 1) the existing speech-to-motion datasets only involve highly limited full-body motions, leaving a wide range of common human activities out of the training distribution; 2) these datasets also lack annotated user prompts. To address these challenges, we propose SynTalker, which utilizes an off-the-shelf text-to-motion dataset as an auxiliary source to supplement the missing full-body motions and prompts. The core technical contributions are two-fold. One is the multi-stage training process, which obtains an aligned embedding space of motion, speech, and prompts despite the significant distributional mismatch in motion between the speech-to-motion and text-to-motion datasets. The other is the diffusion-based conditional inference process, which utilizes a separate-then-combine strategy to realize fine-grained control of local body parts. Extensive experiments are conducted to verify that our approach supports precise and flexible control of synergistic full-body motion generation based on both speech and user prompts, which is beyond the ability of existing approaches.
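To illustrate the idea behind the separate-then-combine strategy, the snippet below gives a minimal sketch (not the authors' released code) of one conditional denoising step: each condition drives the noise prediction for its own set of body parts, and the masked predictions are merged before the diffusion update. All names (model, upper_mask, lower_mask, and the conditioning tensors) are hypothetical placeholders.

import torch

def separate_then_combine_step(model, x_t, t, speech_cond, prompt_cond,
                               upper_mask, lower_mask):
    # Predict noise for the same noisy motion x_t under each condition.
    eps_speech = model(x_t, t, cond=speech_cond)   # speech-driven estimate
    eps_prompt = model(x_t, t, cond=prompt_cond)   # prompt-driven estimate
    # Combine per body part: here speech controls the upper-body joints,
    # while the text prompt controls the remaining joints.
    return upper_mask * eps_speech + lower_mask * eps_prompt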


SynTalker takes speech audio and the corresponding transcripts as inputs, aiming to output realistic and stylized full-body motions that align with the speech content rhythmically and semantically. Compared with traditional co-speech generation models, besides speech it further allows a short piece of text, namely a text prompt, to provide additional descriptions of the desired motion style. The full-body motions are then generated to follow the style given by both the speech and the prompt as closely as possible.
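For intuition, a hypothetical usage sketch of this speech-plus-prompt interface is shown below. The module, class, and argument names are illustrative assumptions only and do not reflect the released code.

from syntalker_demo import SynTalkerPipeline  # hypothetical module and class

pipe = SynTalkerPipeline(checkpoint="path/to/checkpoint.pt")  # placeholder path
motion = pipe.generate(
    audio="example_speech.wav",                     # speech audio input
    transcript="example_speech.txt",                # corresponding transcript
    prompt="a person talks while walking forward",  # optional style prompt
)
motion.export("talking_while_walking.bvh")          # generated full-body motion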

Extensive experiments show that our approach can use both speech and text prompts to guide the generation of synergistic full-body motion precisely and flexibly, which is beyond the capability of existing co-speech generation approaches.

Arbitrary Control with Multimodal Prompts

Our system enables flexible control of synergistic full-body motion generation based on speech and user prompts simultaneously. Some results are shown below.

More Results

"In the following video, we present additional results including comparation. Regardless of whether using single-modal speech input or multi-modal input combining speech and text, our generated results consistently achieve SOTA performance."

BibTeX


@inproceedings{chen2024syntalker,
  author = {Bohong Chen and Yumeng Li and Yao-Xiang Ding and Tianjia Shao and Kun Zhou},
  title = {Enabling Synergistic Full-Body Control in Prompt-Based Co-Speech Motion Generation},
  booktitle = {Proceedings of the 32nd ACM International Conference on Multimedia},
  year = {2024},
  publisher = {ACM},
  address = {New York, NY, USA},
  pages = {10},
  doi = {10.1145/3664647.3680847}
}