InterCap: Joint Markerless 3D Tracking of Humans and Objects in Interaction
2022
Conference Paper
Humans constantly interact with daily objects to accomplish tasks. To understand such interactions, computers need to reconstruct them from cameras observing whole-body interaction with scenes. This is challenging due to occlusion between the body and objects, motion blur, depth/scale ambiguities, and the low image resolution of hands and graspable object parts. To make the problem tractable, the community focuses either on interacting hands, ignoring the body, or on interacting bodies, ignoring the hands. The GRAB dataset addresses dexterous whole-body interaction but uses marker-based MoCap and lacks images, while BEHAVE captures video of body-object interaction but lacks hand detail. We address the limitations of prior work with InterCap, a novel method that reconstructs interacting whole bodies and objects from multi-view RGB-D data, using the parametric whole-body model SMPL-X and known object meshes. To tackle the above challenges, InterCap uses two key observations: (i) contact between the hand and object can be used to improve the pose estimation of both; (ii) Azure Kinect sensors allow us to set up a simple multi-view RGB-D capture system that minimizes the effect of occlusion while providing reasonable inter-camera synchronization. With this method we capture the InterCap dataset, which contains 10 subjects (5 males and 5 females) interacting with 10 objects of various sizes and affordances, including contact with the hands or feet. In total, InterCap has 223 RGB-D videos, resulting in 67,357 multi-view frames, each containing 6 RGB-D images. Our method provides pseudo ground-truth body meshes and objects for each video frame. Our InterCap method and dataset fill an important gap in the literature and support many research directions. Our data and code are available for research purposes.
Award: | (Honorable Mention for Best Paper) |
Author(s): | Yinghao Huang and Omid Taheri and Michael J. Black and Dimitrios Tzionas |
Book Title: | Pattern Recognition |
Pages: | 281--299 |
Year: | 2022 |
Month: | September |
Series: | Lecture Notes in Computer Science, 13485 |
Editors: | Andres, Björn and Bernard, Florian and Cremers, Daniel and Frintrop, Simone and Goldlücke, Bastian and Ihrke, Ivo |
Publisher: | Springer |
Department(s): | Perceiving Systems |
Bibtex Type: | Conference Paper (inproceedings) |
Paper Type: | Conference |
DOI: | 10.1007/978-3-031-16788-1_18 |
Event Name: | 44th DAGM German Conference on Pattern Recognition (DAGM GCPR 2022) |
Event Place: | Konstanz |
Address: | Cham |
ISBN: | 978-3-031-16787-4 |
State: | Published |
URL: | https://intercap.is.tue.mpg.de |
Links: | Code, Data, YouTube Video |
BibTex:
@inproceedings{intercap_gcpr2022,
  title = {{InterCap}: Joint Markerless {3D} Tracking of Humans and Objects in Interaction},
  author = {Huang, Yinghao and Taheri, Omid and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {Pattern Recognition},
  pages = {281--299},
  series = {Lecture Notes in Computer Science, 13485},
  editors = {Andres, Björn and Bernard, Florian and Cremers, Daniel and Frintrop, Simone and Goldlücke, Bastian and Ihrke, Ivo},
  publisher = {Springer},
  address = {Cham},
  month = sep,
  year = {2022},
  doi = {10.1007/978-3-031-16788-1_18},
  url = {https://intercap.is.tue.mpg.de},
  month_numeric = {9}
}