Figure: Real image sequence (top left); estimated 3D pose and shape (top right); two of our MAVs cooperatively detecting and tracking a person on board in real time (bottom left); cropped ROIs from both MAVs with the estimated 3D pose and shape overlaid (bottom center); DRL-based aerial mocap (bottom right).
The goal of AirCap is markerless, unconstrained motion capture (mocap) of humans and animals outdoors. To that end, we have developed a flying mocap system using a team of micro aerial vehicles (MAVs) equipped only with on-board, monocular RGB cameras. In AirCap, mocap involves two phases: (i) online data acquisition and (ii) offline pose and shape estimation.
During online data acquisition, the MAVs detect and track the 3D position of a subject [ ]. To do so, they perform on-board person detection using a deep neural network (DNN). DNNs often fail to detect people who appear small in the image, as is typical when viewed from aerial robots. By cooperatively tracking the person, our system actively selects the relevant region of interest (ROI) in each MAV's image. High-resolution crops around the person are then passed to the DNN.
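To make the ROI selection concrete, here is a minimal sketch, assuming a pinhole camera model and numpy; the function and variable names (e.g. `select_roi`, `ssd_detect`, the 512-pixel ROI size) are illustrative assumptions, not our on-board implementation. The idea is to project the cooperatively tracked 3D position into each MAV's image, crop a high-resolution square around it, run the detector on the crop, and map detections back to full-image coordinates.

```python
import numpy as np

def select_roi(person_xyz_world, R_wc, t_wc, K, image_shape, roi_size=512):
    """Project the tracked 3D position of the person into this MAV's image
    and return a square ROI centred on the projection.

    person_xyz_world : (3,) 3D position estimate from the cooperative tracker
    R_wc, t_wc       : world-to-camera rotation (3x3) and translation (3,)
    K                : camera intrinsics (3x3)
    image_shape      : (height, width) of the full-resolution image
    """
    # Pinhole projection: x = K (R X + t), then dehomogenise.
    p_cam = R_wc @ person_xyz_world + t_wc
    p_img = K @ p_cam
    u, v = p_img[0] / p_img[2], p_img[1] / p_img[2]

    # Clamp the ROI so it stays inside the image.
    h, w = image_shape
    half = roi_size // 2
    x0 = int(np.clip(u - half, 0, max(w - roi_size, 0)))
    y0 = int(np.clip(v - half, 0, max(h - roi_size, 0)))
    return x0, y0, x0 + roi_size, y0 + roi_size

# Usage sketch: crop the high-resolution ROI, feed it to the person detector,
# then shift detected boxes back into full-image coordinates.
# x0, y0, x1, y1 = select_roi(xyz, R, t, K, image.shape[:2])
# detections = ssd_detect(image[y0:y1, x0:x1])   # hypothetical detector call
# boxes_full = [(bx + x0, by + y0, bw, bh) for bx, by, bw, bh in detections]
```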
Human pose and shape are then estimated offline using the RGB images and each MAV's self-localization (the camera extrinsics). Recent 3D human pose and shape regression methods produce noisy estimates of human pose. Our approach [ ] exploits multiple noisy 2D body joint detectors and noisy camera pose information. We then optimize for body shape, body pose, and camera extrinsics by fitting the SMPL body model to the 2D observations. This approach uses a strong body model to take low-level uncertainty into account and results in the first fully autonomous flying mocap system.
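The sketch below illustrates the kind of multi-view fitting energy this involves; it is a simplified stand-in, not the exact objective of our method. It assumes numpy, a Geman-McClure robust penalty on 2D reprojection errors, generic quadratic pose and shape priors, and a quadratic term tying the camera extrinsics to each MAV's noisy self-localization; all names and weights are illustrative.

```python
import numpy as np

def gmof(x, sigma=100.0):
    """Geman-McClure robust penalty, a common choice for 2D joint reprojection terms."""
    x2 = x ** 2
    return (sigma ** 2) * x2 / (sigma ** 2 + x2)

def fitting_energy(joints3d, cams, joints2d, conf, pose, shape, cam_priors,
                   w_pose=1.0, w_shape=1.0, w_cam=1.0,
                   pose_prior=None, shape_prior=None):
    """Illustrative multi-view fitting energy for one frame.

    joints3d   : (J, 3) SMPL joint locations for the current pose/shape estimate
    cams       : list of (K, R, t) per MAV (current extrinsic estimates)
    joints2d   : (C, J, 2) 2D joint detections per camera
    conf       : (C, J) detection confidences
    cam_priors : list of (R0, t0) noisy extrinsics from MAV self-localization
    """
    # Data term: confidence-weighted, robustified reprojection error in every view.
    e_data = 0.0
    for c, (K, R, t) in enumerate(cams):
        p = (K @ (R @ joints3d.T + t[:, None])).T      # project all joints
        p2d = p[:, :2] / p[:, 2:3]
        r = np.linalg.norm(p2d - joints2d[c], axis=1)  # per-joint pixel error
        e_data += np.sum(conf[c] * gmof(r))

    # Priors on body pose/shape, plus a term keeping the extrinsics close to
    # each MAV's (noisy) self-localization.
    e_pose = w_pose * (pose_prior(pose) if pose_prior else np.sum(pose ** 2))
    e_shape = w_shape * (shape_prior(shape) if shape_prior else np.sum(shape ** 2))
    e_cam = w_cam * sum(np.sum((t - t0) ** 2) + np.sum((R - R0) ** 2)
                        for (K, R, t), (R0, t0) in zip(cams, cam_priors))
    return e_data + e_pose + e_shape + e_cam
```

In practice such an energy would be minimized jointly over SMPL pose, shape, and the camera extrinsics with a gradient-based or Gauss-Newton solver, which is omitted here.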
Offline mocap does not enable active positioning of the MAVs to maximize mocap accuracy. To address this, we introduce a deep reinforcement learning (RL)-based multi-robot formation controller for the MAVs. We formulate this problem as a sequential decision-making task and solve it using an RL method [ ].
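As an illustration of the sequential decision-making formulation (state: MAV and person positions; actions: MAV motion commands; reward: a proxy for mocap accuracy), the sketch below shows one plausible shaping reward that favors a preferred viewing distance and diverse viewing angles around the person. The actual reward and learning setup in our RL method differ; the function name, weights, and target distance here are assumptions made for illustration only.

```python
import numpy as np

def formation_reward(mav_positions, person_position, target_dist=8.0,
                     w_dist=1.0, w_angle=1.0):
    """Illustrative shaping reward for a multi-MAV formation controller.

    Encourages each MAV to keep a preferred distance to the person and the
    team to spread around the person so that viewing angles are diverse,
    one simple proxy for lower joint-triangulation uncertainty.
    """
    rel = mav_positions - person_position            # (N, 3) relative positions
    dists = np.linalg.norm(rel, axis=1)
    r_dist = -w_dist * np.sum((dists - target_dist) ** 2)

    # Pairwise angular separation between viewing directions (in the xy-plane).
    bearings = np.arctan2(rel[:, 1], rel[:, 0])
    r_angle = 0.0
    n = len(bearings)
    for i in range(n):
        for j in range(i + 1, n):
            sep = np.abs(np.angle(np.exp(1j * (bearings[i] - bearings[j]))))
            r_angle += w_angle * sep                 # larger separation -> higher reward
    return r_dist + r_angle
```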
To enable fully on-board, online mocap, we are developing a novel, distributed, multi-view fusion network for 3D human pose and shape estimation using uncalibrated moving cameras.
Source Code and Resources
News: Nodes and packages specific to our ICRA 2019 submission have been added to our GitHub project page.
GitHub project page with all sources: AirCap GitHub Profile
List of required hardware
- Flying platform capable of carrying a payload of 2 kg or more
- OpenPilot Revolution flight controllers
- HD Cameras
- NVIDIA Jetson TX1 embedded GPU
- On-board computer (PC) with an Intel Core i7 CPU running Ubuntu 16.04
List of required 3rd party Open Source Packages
Project Source Code
- LibrePilot modified flight controller firmware and ROS interface (based on LibrePilot)
- SSD Multibox detection server (based on SSD Multibox)
- AirCap Main Public Code Repository
- Rotors Gazebo Simulation Environment specific to the AirCap project (based on Rotors Simulator)