
AMASS Dataset

2019-09-13


AMASS is a large dataset of human motion: 45 hours and growing. It unifies multiple mocap datasets by fitting the SMPL body model to the mocap markers, enabling the training of deep neural networks to model human motion. The dataset includes SMPL-H body shapes and poses as well as DMPL soft-tissue motions. If you would like your own mocap sequences included in the dataset, please contact us. The release includes tutorial code for training DNNs with AMASS, and the MoSh++ code is now available. We also release SOMA, our complementary tool for automatic mocap labeling.
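
For readers new to the data, below is a minimal sketch of reading a single AMASS sequence with NumPy. The file path is hypothetical and the array keys (poses, betas, trans, dmpls, mocap_framerate) are assumed from the layout of the released .npz files; please refer to the tutorial code in the repository for the officially supported way to load and use the data.

# Minimal sketch: inspect one AMASS mocap sequence stored as an .npz archive.
# Path and key names are assumptions based on the released file layout,
# not an official API.
import numpy as np

seq = np.load("CMU/01/01_01_poses.npz")  # hypothetical example path

poses = seq["poses"]            # per-frame SMPL-H pose parameters (axis-angle)
betas = seq["betas"]            # SMPL-H body shape coefficients
trans = seq["trans"]            # per-frame global root translation
dmpls = seq["dmpls"]            # per-frame DMPL soft-tissue coefficients
fps = seq["mocap_framerate"]    # capture frame rate of the sequence

print(f"{poses.shape[0]} frames at {float(fps):.0f} fps")
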

Author(s): Naureen Mahmood and Nima Ghorbani and Nikolaus F. Troje and Gerard Pons-Moll and Michael J. Black
Department(s): Perceiving Systems
Maintainers: Nima Ghorbani
Release Date: 2019-09-13
Repository: https://github.com/nghorbani/amass
External Link: https://amass.is.tue.mpg.de/