Generating 3D People in Scenes without People
2020-06-18
Our PSI system generates 3D people in a 3D scene from the view of an agent. It takes as input the depth map and semantic segmentation from a camera view and generates plausible SMPL-X body meshes that are naturally posed in the 3D scene. Scripts for data pre-processing, training, fitting, evaluation, and visualization, as well as the data, are included.
Our method first generates a plausible 3D body mesh with the trained generative model, and then refines that mesh via geometry-aware fitting. As a starting point, we highly recommend watching the demo video and running `demo.ipynb`.
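The two-stage generate-then-refine pipeline can be sketched in miniature. The code below is a hypothetical toy illustration, not the actual PSI API: `generate_body_params` stands in for sampling from the trained generative model, and `refine` stands in for the geometry-aware fitting, here reduced to gradient descent on a toy squared-distance objective (the real method uses contact and collision terms against the scene mesh).

```python
# Hypothetical sketch of a two-stage generate-then-refine pipeline.
# None of these names come from the PSI codebase; they are toy stand-ins.
import random

def generate_body_params(scene_feature, seed=0):
    """Stage 1 (toy): propose body parameters conditioned on a scene
    feature vector, mimicking a sample from a generative model."""
    rng = random.Random(seed)
    return [f + rng.uniform(-0.1, 0.1) for f in scene_feature]

def fitting_loss(params, scene_feature):
    """Toy geometry-aware objective: squared distance to the scene
    feature, standing in for contact/collision terms."""
    return sum((p - f) ** 2 for p, f in zip(params, scene_feature))

def refine(params, scene_feature, steps=100, lr=0.1):
    """Stage 2 (toy): refine the proposal by gradient descent on the
    toy loss, using its analytic gradient."""
    params = list(params)
    for _ in range(steps):
        grad = [2 * (p - f) for p, f in zip(params, scene_feature)]
        params = [p - lr * g for p, g in zip(params, grad)]
    return params

# A toy 6-D "scene feature"; the refined proposal fits the scene
# objective better than the raw generative sample.
scene_feature = [0.5, -1.0, 2.0, 0.0, 0.3, -0.7]
initial = generate_body_params(scene_feature)
refined = refine(initial, scene_feature)
```

The design point carried over from the paper is the split itself: a learned model supplies a plausible initialization, and an optimization stage enforces scene constraints the network alone does not satisfy.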
Author(s): | Yan Zhang, Mohamed Hassan, Heiko Neumann, Michael J. Black, Siyu Tang |
Department(s): | Perceiving Systems |
Release Date: | 2020-06-18 |
Version: | 1.0.0 |
Repository: | https://github.com/yz-cnsdqz/PSI-release |