POSA: Populating 3D Scenes by Learning Human-Scene Interaction
2021-04-28
POSA takes a 3D body and automatically places it in a 3D scene in a semantically meaningful way. This repository contains the training, random-sampling, and scene-population code used for the experiments in POSA. The code defines a novel, body-centric representation of human-scene interaction, which can be exploited in 3D human tracking from video to model likely interactions between a body and the scene.
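To make the idea of a body-centric interaction representation concrete, here is a minimal sketch: for each vertex of a body mesh, store a contact probability and a distribution over scene-object semantic labels. All names, shapes, and class counts below are illustrative assumptions, not the actual POSA implementation.

```python
import numpy as np

# Hypothetical body-centric feature map: one row per body-mesh vertex,
# holding [contact probability | semantic class probabilities].
# Sizes are illustrative, not taken from the POSA codebase.
NUM_VERTICES = 655        # illustrative downsampled mesh size
SEMANTIC_CLASSES = 8      # illustrative number of scene-object categories


def make_feature_map(num_vertices=NUM_VERTICES, num_classes=SEMANTIC_CLASSES):
    """Return an empty per-vertex feature map: [contact | class probs]."""
    contact = np.zeros((num_vertices, 1))               # P(vertex touches scene)
    semantics = np.full((num_vertices, num_classes),    # uniform prior over labels
                        1.0 / num_classes)
    return np.concatenate([contact, semantics], axis=1)


def mark_contact(feat, vertex_ids, class_id):
    """Mark vertices as in contact with an object of the given semantic class."""
    feat[vertex_ids, 0] = 1.0                  # contact probability -> 1
    feat[vertex_ids, 1:] = 0.0                 # one-hot semantic label
    feat[vertex_ids, 1 + class_id] = 1.0
    return feat


feat = make_feature_map()
feat = mark_contact(feat, vertex_ids=[0, 1, 2], class_id=3)  # e.g. "chair"
print(feat.shape)  # (655, 9)
```

Because the representation lives on the body rather than in the scene, the same feature map can be matched against any candidate placement of the body in a new scene.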
Author(s): Hassan, Mohamed; Ghosh, Partha; Tesch, Joachim; Tzionas, Dimitrios; Black, Michael J.
Department(s): Perceiving Systems
Publication(s): Populating 3D Scenes by Learning Human-Scene Interaction
Release Date: 2021-04-28
Repository: https://github.com/mohamedhassanmus/POSA