ExPose: EXpressive POse and Shape rEgression
2020-08-23
Training models to quickly and accurately estimate expressive 3D humans (SMPL-X) from an RGB image, including the main body, face, and hands, is challenging for several reasons. First, no dataset exists with paired images and ground-truth SMPL-X annotations. Second, the face and hands occupy far fewer pixels than the main body, making inference harder. Third, full-body images are typically downsampled further for use with contemporary methods, leaving even less resolution for the face and hands. Here we provide the first dataset of 32,617 pairs of (1) an in-the-wild RGB image and (2) an expressive whole-body 3D human reconstruction (SMPL-X), created by carefully curating the results of our earlier SMPLify-X method on a large number of datasets. We also provide ExPose, the first deep network that quickly predicts expressive 3D human bodies from a single RGB image, trained on this dataset. ExPose is 200x faster than SMPLify-X while being on par in overall accuracy. The dataset can also be used to train other, similar models.
Author(s): | Choutas, Vasileios and Pavlakos, Georgios and Bolkart, Timo and Tzionas, Dimitrios and Black, Michael J. |
Department(s): | Perceiving Systems |
Research Project(s): | 3D Pose from Images |
Publication(s): | Monocular Expressive Body Regression through Body-Driven Attention |
Authors: | Choutas, Vasileios and Pavlakos, Georgios and Bolkart, Timo and Tzionas, Dimitrios and Black, Michael J. |
Maintainers: | Choutas, Vasileios |
Release Date: | 2020-08-23 |
Repository: | https://github.com/vchoutas/expose |
External Link: | http://expose.is.tue.mpg.de |