Modeling 3D Human Motion for Improved Pose Estimation (Talk)
Though substantial progress has been made in estimating 3D human poses from dynamic observations, recent methods still struggle to recover physically plausible motion, and noise and occlusions remain challenging. In this talk, I'll introduce two methods that tackle these issues by leveraging models of 3D human motion: one physics-based and one learned. In the first approach, an initial 3D motion is refined with a physics-based trajectory optimization that leverages foot contacts automatically detected from RGB video. In the second, a learned generative model serves as an expressive prior that regularizes a test-time optimization towards the space of plausible 3D motions, even under noisy and partial observations. Extensive results demonstrate that these methods resolve common physically implausible artifacts and enable robust recovery of 3D human motion from multiple modalities, such as RGB(-D) video and 3D keypoints.
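To give a flavor of the second idea, below is a minimal, generic sketch of prior-regularized test-time optimization: a latent motion code is fit to noisy, partially observed keypoints while a prior term keeps the decoded motion in a plausible region. Everything here is illustrative, not the speaker's actual method: the decoder, the standard-normal prior, the dimensions, and the loss weight are all placeholder assumptions.

```python
# Minimal sketch of prior-regularized test-time motion fitting.
# Assumptions: a placeholder MLP decoder stands in for a learned generative motion model,
# a standard-normal prior stands in for its learned latent prior, and the observations are synthetic.
import torch
import torch.nn as nn

T, J = 60, 22          # frames, joints (illustrative values)
LATENT_DIM = 48

# Hypothetical learned motion model: decodes a latent code into a 3D joint-position sequence.
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, T * J * 3))

# Noisy, partially observed 3D keypoints; `visible` masks out occluded joints.
observed = torch.randn(T, J, 3)
visible = (torch.rand(T, J) > 0.3).float()        # 1 = observed, 0 = occluded

z = torch.zeros(LATENT_DIM, requires_grad=True)   # latent motion code to optimize at test time
optim = torch.optim.Adam([z], lr=0.05)
w_prior = 0.01                                    # weight of the motion-prior regularizer

for step in range(200):
    optim.zero_grad()
    motion = decoder(z).view(T, J, 3)
    # Data term: match only the visible keypoints.
    data_loss = (visible.unsqueeze(-1) * (motion - observed) ** 2).mean()
    # Prior term: keep z near the high-density region of the latent space.
    prior_loss = 0.5 * (z ** 2).sum()
    loss = data_loss + w_prior * prior_loss
    loss.backward()
    optim.step()

plausible_motion = decoder(z).view(T, J, 3).detach()
```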
Biography: Davis Rempe is a fourth-year PhD student at Stanford University advised by Leo Guibas. He is currently interning at NVIDIA and has previously interned at Adobe and Snap. His research focuses on understanding and perceiving dynamic 3D humans and objects, especially using learned or physics-based models of motion.