GRAB: A Dataset of Whole-Body Human Grasping of Objects
2020-08-25
Training computers to understand, model, and synthesize human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and 3D body motion over time. While "grasping" is commonly thought of as a single hand stably lifting an object, we capture the motion of the entire body and adopt the generalized notion of "whole-body grasps". To this end, we collect a new dataset of whole-body grasps, called GRAB (GRasping Actions with Bodies), containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. The dataset contains 1,622,459 frames in total. Each frame includes (1) an expressive 3D SMPL-X human mesh (shaped and posed), (2) a 3D rigid object mesh (posed), and (3) contact annotations (where applicable).
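Concretely, each sequence pairs per-frame SMPL-X body parameters with the rigid object's pose and contact labels. Below is a minimal sketch of how one such per-sequence archive might be inspected with NumPy; the file path and the key names ("body", "object", "contact") are assumptions for illustration only, not the verified schema. The official loading code lives in the repository linked below.

```python
import numpy as np

# Hypothetical path to one GRAB sequence (subject "s1", action "apple_eat_1");
# the actual directory layout may differ from this sketch.
seq = np.load("grab/s1/apple_eat_1.npz", allow_pickle=True)

# List the keys actually present in the archive before assuming a schema.
print(seq.files)

# Assumed per-sequence contents, mirroring the dataset description above:
#   "body"    -> SMPL-X parameters per frame (shape and pose)
#   "object"  -> rigid object pose per frame (rotation and translation)
#   "contact" -> per-frame contact annotations, where applicable
body = seq["body"].item()        # assumed: dict of SMPL-X parameter arrays
obj = seq["object"].item()       # assumed: dict of per-frame object pose
contact = seq["contact"].item()  # assumed: per-frame contact labels
```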
Author(s): | Omid Taheri and Nima Ghorbani and Michael J. Black and Dimitrios Tzionas |
Department(s): | Perceiving Systems |
Research Project(s): | Hands-Object Interaction |
Publication(s): | GRAB: A Dataset of Whole-Body Human Grasping of Objects |
Authors: | Omid Taheri and Nima Ghorbani and Michael J. Black and Dimitrios Tzionas |
Maintainers: | Omid Taheri |
Release Date: | 2020-08-25 |
Repository: | https://github.com/otaheri/GRAB |
External Link: | https://grab.is.tue.mpg.de |