Accurate capture of complex human and animal motion is critical for many applications. Consequently, we are pushing the state of the art in mocap in several ways. Top left: SOMA and MoSh turn raw mocap point clouds into realistic animated humans. Top right: To take mocap out of the lab, we train a neural network to regress full-body pose from 6 IMUs. Bottom right: We combine IMUs with a hand-held camera to obtain 3D poses in natural videos. Bottom left: We build flying multi-camera systems to better capture natural motion in the wild.
To understand human and animal movement, we want to capture it, model it, and then simulate it. Most methods for capturing human motion are restricted to laboratory environments and/or limited volumes. Most do not take into account the complex and rich environment in which humans usually operate. Nor do they capture the kinds of everyday motions that people typically perform. To enable the capture of natural human behavior, we have to move motion capture out of the laboratory and into the world.
To that end, we pursue several technologies. In the lab, we automate the mocap process and make it easy to extract detailed and realistic 3D humans from mocap data. Our work here focuses on extracting expressive SMPL-X bodies with detailed face and hand motion. We also capture objects and scenes to put human motion in context.
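At its core, this marker-based fitting is an optimization that adjusts body model parameters until model-predicted marker locations match the observed mocap markers. Below is a minimal sketch of that idea, assuming a hypothetical differentiable body model (a random linear map stands in for SMPL-X so the example runs); the real MoSh/SOMA pipelines additionally solve for marker placement on the body surface, handle unlabeled and noisy points, and fit whole sequences.

```python
import torch

torch.manual_seed(0)

# Hypothetical stand-in for a differentiable body model (e.g. SMPL-X): it maps
# pose and shape parameters to mesh vertices. A random linear map is used here
# only so the sketch runs end to end.
NUM_VERTS, POSE_DIM, SHAPE_DIM, NUM_MARKERS = 1000, 63, 10, 53
W_pose = torch.randn(NUM_VERTS * 3, POSE_DIM) * 0.01
W_shape = torch.randn(NUM_VERTS * 3, SHAPE_DIM) * 0.01
template = torch.randn(NUM_VERTS, 3)

def body_model(pose, shape):
    offsets = (W_pose @ pose + W_shape @ shape).view(NUM_VERTS, 3)
    return template + offsets

# Assumed marker setup: each labeled mocap marker corresponds to one mesh
# vertex; real MoSh also optimizes where the markers sit on the body surface.
marker_vertex_ids = torch.randint(0, NUM_VERTS, (NUM_MARKERS,))
observed_markers = torch.randn(NUM_MARKERS, 3)     # one frame of labeled markers

pose = torch.zeros(POSE_DIM, requires_grad=True)
shape = torch.zeros(SHAPE_DIM, requires_grad=True)
optim = torch.optim.Adam([pose, shape], lr=0.05)

for _ in range(200):
    optim.zero_grad()
    simulated = body_model(pose, shape)[marker_vertex_ids]
    data_term = ((simulated - observed_markers) ** 2).sum()   # marker fit
    prior_term = 1e-3 * (pose ** 2).sum()                     # crude pose regularizer
    (data_term + prior_term).backward()
    optim.step()
```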
To move outside the lab, we use inertial measurement units (IMUs) worn on the body. These provide information about pose and movement, but putting on a full suite of sensors is impractical. We have developed methods to estimate full-body motion from as few as six sensors worn on the legs, wrists, belt, and head. Our most recent methods use deep neural networks to estimate pose from IMU measurements in real time in unconstrained scenarios.
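As a rough illustration of this regression idea (not our released models), the sketch below maps a sequence of per-frame IMU features to per-frame body pose parameters with a small recurrent network; the feature layout, network size, and pose parameterization are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Assumed input: 6 IMUs, each contributing a flattened 3x3 orientation matrix
# plus a 3D acceleration (12 values), i.e. 72 features per frame. The output
# is a per-frame vector of body pose parameters.
NUM_IMUS, FEAT_PER_IMU, POSE_DIM = 6, 12, 72

class IMUPoseRegressor(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(NUM_IMUS * FEAT_PER_IMU, hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, POSE_DIM)

    def forward(self, imu_seq):                  # (batch, time, 72)
        h, _ = self.rnn(imu_seq)                 # unidirectional, so usable online
        return self.head(h)                      # (batch, time, POSE_DIM)

model = IMUPoseRegressor()
dummy_seq = torch.randn(1, 30, NUM_IMUS * FEAT_PER_IMU)   # 30 frames of IMU data
poses = model(dummy_seq)                                   # per-frame pose estimates
```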
IMUs, however, suffer from drift, so we combine the IMU data with video from a single hand-held camera. In the video, we can detect the 2D joints of the body and use these detections to eliminate drift. We associate the IMU data with the 2D data and solve for the transformation between the sensors and the camera. With this technology, we created the popular 3DPW dataset, which contains video sequences with high-quality reference 3D poses. 3DPW is widely used to train and test video-based human pose and shape estimation methods.
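In spirit, the drift correction reduces to an optimization that aligns the IMU-derived 3D joints with the detected 2D joints. The sketch below, with synthetic data and assumed names throughout, solves for a single rigid transform between the IMU tracking frame and the camera by minimizing reprojection error; the actual pipeline behind 3DPW also refines body poses over time, handles a moving hand-held camera, and models heading drift.

```python
import torch

def axis_angle_to_matrix(r):
    # Rodrigues' formula, differentiable w.r.t. the axis-angle vector r.
    theta = r.norm() + 1e-8
    k = r / theta
    zero = torch.zeros((), dtype=r.dtype)
    K = torch.stack([torch.stack([zero, -k[2], k[1]]),
                     torch.stack([k[2], zero, -k[0]]),
                     torch.stack([-k[1], k[0], zero])])
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

# Synthetic placeholders: per-frame 3D joints from the IMU tracker and
# per-frame 2D joint detections from the video.
torch.manual_seed(0)
K_cam = torch.tensor([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
joints_3d = torch.randn(30, 24, 3) + torch.tensor([0., 0., 4.])   # IMU frame
joints_2d = torch.rand(30, 24, 2) * torch.tensor([640., 480.])    # detections

rvec = (0.01 * torch.randn(3)).requires_grad_()   # rotation: IMU frame -> camera
tvec = torch.zeros(3, requires_grad=True)         # translation: IMU frame -> camera
optim = torch.optim.Adam([rvec, tvec], lr=0.01)

for _ in range(500):
    optim.zero_grad()
    R = axis_angle_to_matrix(rvec)
    cam_pts = joints_3d @ R.T + tvec                        # joints in camera coords
    proj = cam_pts @ K_cam.T
    proj = proj[..., :2] / proj[..., 2:].clamp(min=1e-6)    # pinhole projection
    loss = ((proj - joints_2d) ** 2).mean()                 # reprojection error
    loss.backward()
    optim.step()
```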
To go fully markerless, we have developed flying motion capture systems that work autonomously outdoors. Multiple micro-aerial vehicles coordinate their activities to detect and track a person while estimating the person's 3D location. All processing is done onboard. We then use the captured video offline to estimate 3D human pose and motion. The challenge here is dealing with noise in the camera calibration, since the locations of the flying cameras are only approximately known.
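One way to see the calibration issue: if the flying cameras' positions are only approximately known, they can be refined together with the 3D joints in a small bundle-adjustment-style optimization over reprojection error. The sketch below is a toy illustration with synthetic data and assumed values, not the deployed pipeline; for simplicity it refines only camera translations and keeps rotations fixed.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
NUM_CAMS, NUM_JOINTS = 3, 17
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])    # intrinsics
R = [np.eye(3) for _ in range(NUM_CAMS)]                            # rotations assumed known
t_true = [np.array([1.0 * c, 0., 5.]) for c in range(NUM_CAMS)]     # unknown ground truth
t_noisy = [t + rng.normal(0, 0.05, 3) for t in t_true]              # GPS-like initial guess
joints_true = rng.normal(0, 0.5, (NUM_JOINTS, 3))

def project(X, Rc, tc):
    p = (X @ Rc.T + tc) @ K.T
    return p[:, :2] / p[:, 2:]

# Synthetic 2D detections: true geometry plus joint-detector noise.
det_2d = [project(joints_true, R[c], t_true[c]) + rng.normal(0, 1.0, (NUM_JOINTS, 2))
          for c in range(NUM_CAMS)]

def residuals(params):
    # Parameters: all camera translations followed by all 3D joint positions.
    ts = params[:NUM_CAMS * 3].reshape(NUM_CAMS, 3)
    Xs = params[NUM_CAMS * 3:].reshape(NUM_JOINTS, 3)
    errs = [project(Xs, R[c], ts[c]) - det_2d[c] for c in range(NUM_CAMS)]
    return np.concatenate(errs).ravel()

x0 = np.concatenate([np.concatenate(t_noisy), np.zeros(NUM_JOINTS * 3)])
fit = least_squares(residuals, x0)
refined_t = fit.x[:NUM_CAMS * 3].reshape(NUM_CAMS, 3)          # refined camera positions
refined_joints = fit.x[NUM_CAMS * 3:].reshape(NUM_JOINTS, 3)   # refined 3D joints
```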
Most recently, we have developed autonomous control algorithms for lighter-than-air vehicles. Such blimps have advantages over common multi-copters but are more complex to control. We are developing our blimp-based system to capture animal movement in natural conditions.
Our ongoing work is looking at capturing much more about humans -- their speech, gaze, interactions with objects, etc. The goal is always to track natural human behavior in settings that are as realistic as possible while making the process as lightweight and unobtrusive as possible.