GrabNet: Generating 3D hand grasps for unseen 3D objects
2020-08-24
There is significant interest in the community in training models that can grasp 3D objects. This matters, for example, for human avatars that interact with their environment, as well as for robotic grasping that imitates human grasping. We use our GRAB dataset (see entry above) of whole-body grasps and extract the hand-only information. On this data we train our deep network, GrabNet, to generate 3D hand grasps, parameterized with our hand model MANO, for unseen 3D objects. We provide both the GrabNet model and its training dataset for research purposes.
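The workflow described above, conditioning a generative model on an object representation and decoding samples into MANO hand parameters, can be sketched as follows. This is a minimal illustrative stub, not the actual GrabNet architecture: the latent size, the object-feature size, and the linear decoder are all assumptions made for the example; the MANO parameter layout (3-D global orientation, 45-D finger pose, 3-D translation) follows the MANO hand model.

```python
import numpy as np

# Hypothetical sketch of a GrabNet-style interface: condition on an object
# representation, sample a latent grasp code, decode to MANO parameters.
LATENT_DIM = 16              # assumed latent size (illustrative)
OBJ_FEAT_DIM = 1024          # assumed object-feature size (illustrative)
MANO_PARAM_DIM = 3 + 45 + 3  # global orient + finger pose + translation

rng = np.random.default_rng(0)
# Stand-in for trained decoder weights; the real model is a deep network.
W = rng.standard_normal((LATENT_DIM + OBJ_FEAT_DIM, MANO_PARAM_DIM)) * 0.01

def sample_grasps(obj_feat: np.ndarray, n: int = 1) -> np.ndarray:
    """Sample n candidate MANO grasp parameter vectors for one object."""
    z = rng.standard_normal((n, LATENT_DIM))      # latent grasp codes
    cond = np.tile(obj_feat, (n, 1))              # repeat the object feature
    return np.concatenate([z, cond], axis=1) @ W  # linear decoder stub

obj = rng.standard_normal(OBJ_FEAT_DIM)
grasps = sample_grasps(obj, n=5)
print(grasps.shape)  # (5, 51): five candidate MANO parameter sets
```

Sampling several latent codes per object yields diverse grasp candidates for the same shape, which is the key property of a generative grasp model.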
Author(s): Omid Taheri, Nima Ghorbani, Michael J. Black, Dimitrios Tzionas
Department(s): Perceiving Systems
Research Project(s): Hands-Object Interaction
Publication(s): GRAB: A Dataset of Whole-Body Human Grasping of Objects
Maintainers: Omid Taheri
Release Date: 2020-08-24
Repository: https://github.com/otaheri/GrabNet
External Link: https://grab.is.tue.mpg.de