Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation

Recent advances in generative modeling have led to promising progress on synthesizing 3D human motion from text, with methods that can generate character animations from short prompts and specified durations. However, a single text prompt as input lacks the fine-grained control needed by animators, such as composing multiple actions and defining precise durations for parts of the motion. To address this, we introduce the new problem of timeline control for text-driven motion synthesis, which provides an intuitive yet fine-grained input interface for users. Instead of a single prompt, users can specify a multi-track timeline of multiple prompts organized in temporal intervals that may overlap. This enables specifying the exact timing of each action and composing multiple actions in sequence or at overlapping intervals. To generate composite animations from a multi-track timeline, we propose a new test-time denoising method that can be integrated with any pre-trained motion diffusion model to synthesize realistic motions that accurately reflect the timeline. At every step of denoising, our method processes each timeline interval (text prompt) individually, then aggregates the predictions while accounting for the specific body parts engaged in each action. Experimental comparisons and ablations validate that our method produces realistic motions that respect the semantics and timing of the given text prompts.
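The sketch below illustrates the per-interval, per-body-part aggregation idea in one timeline-aware denoising step. It is not the paper's implementation: the DummyDiffusionModel class, the denoise_step(x, t, prompt) call, the (frames, joints, features) motion layout, and the hand-specified body-part masks are all hypothetical stand-ins chosen for illustration.

import numpy as np

class DummyDiffusionModel:
    """Stand-in for a pre-trained motion diffusion model (illustration only)."""
    def denoise_step(self, x, t, prompt):
        # A real model would return a denoised motion prediction; the stub returns x unchanged.
        return x

def timeline_denoise_step(model, x_t, t, timeline, body_part_masks):
    """One denoising step over a multi-track timeline.

    timeline: list of (start_frame, end_frame, prompt) intervals, possibly overlapping.
    body_part_masks: dict mapping each prompt to a boolean mask over the joints
    engaged by that action (a simplification of the paper's body-part handling).
    """
    num_frames, num_joints, _ = x_t.shape
    accum = np.zeros_like(x_t)
    weight = np.zeros((num_frames, num_joints, 1))

    for start, end, prompt in timeline:
        # Denoise the frames covered by this interval, conditioned on its own prompt.
        pred = model.denoise_step(x_t[start:end], t, prompt)

        # Only the body parts engaged by this action contribute to the aggregate.
        mask = body_part_masks[prompt].reshape(1, num_joints, 1).astype(float)
        accum[start:end] += pred * mask
        weight[start:end] += mask

    # Frames/joints covered by no interval fall back to an unconditioned prediction.
    uncond = model.denoise_step(x_t, t, prompt="")
    covered = weight > 0
    return np.where(covered, accum / np.maximum(weight, 1e-8), uncond)

# Example: a 120-frame timeline where "walk forward" and "wave the right hand"
# overlap between frames 40 and 60.
model = DummyDiffusionModel()
x = np.random.randn(120, 22, 6)
timeline = [(0, 60, "walk forward"), (40, 120, "wave the right hand")]
masks = {
    "walk forward": np.ones(22, dtype=bool),          # whole body
    "wave the right hand": np.zeros(22, dtype=bool),  # hypothetical right-arm joints below
}
masks["wave the right hand"][14:18] = True
x = timeline_denoise_step(model, x, t=10, timeline=timeline, body_part_masks=masks)
print(x.shape)  # (120, 22, 6)

In the actual method this aggregation runs inside the diffusion model's denoising loop and also handles transitions between intervals; the weighted averaging above is only meant to convey how per-interval predictions are combined according to the body parts each action engages.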

Author(s): Mathis Petrovich and Or Litany and Umar Iqbal and Michael J. Black and Gül Varol and Xue Bin Peng and Davis Rempe
Book Title: CVPR Workshop on Human Motion Generation
Year: 2024
Month: June

Department(s): Perceiving Systems
Bibtex Type: Conference Paper (inproceedings)
Paper Type: Workshop

Event Name: CVPR 2024
Event Place: Seattle, USA

Address: Seattle
Degree Type: PhD
State: Published
URL: https://mathis.petrovich.fr

Links: code, website, paper-arxiv, video

BibTeX

@inproceedings{stmc,
  title = {Multi-Track Timeline Control for Text-Driven 3D Human Motion Generation},
  author = {Petrovich, Mathis and Litany, Or and Iqbal, Umar and Black, Michael J. and Varol, G{\"u}l and Peng, Xue Bin and Rempe, Davis},
  booktitle = {CVPR Workshop on Human Motion Generation},
  address = {Seattle},
  month = jun,
  year = {2024},
  url = {https://mathis.petrovich.fr},
  month_numeric = {6}
}