Rahul Tallamraju
Alumni
Note: Rahul Tallamraju is an alumnus of the institute.
I am a PhD student at the International Institute of Information Technology, Hyderabad (IIIT-H), India, and am currently pursuing research on autonomous motion planning at MPI-IS in Tübingen, Germany. My doctoral research focuses on scalable, real-time motion planning for multiple agile agents.
During my PhD, I have developed algorithms for cooperative multi-agent optimization and navigation in unstructured environments. For artificially intelligent agents to be useful in everyday life, they need to operate safely in largely unstructured environments, perceive and reason about objects in their surroundings, monitor changes, and plan actions that simplify people's everyday activities. My thesis addresses these aspects of artificial intelligence through the following two collaborative autonomous tasks.
- Multi-agent planning for autonomous aerial motion capture (AirCap):
Aerial outdoor motion capture is a computer-vision-driven control problem. The challenge is to compute safe, feasible trajectories for the flying cameras (drones) that improve the quality of the 3-D reconstruction of a moving human subject. In this project, we developed decentralized stochastic algorithms that perform real-time trajectory optimization for the aerial vehicles.
- Multi-agent cooperative object manipulation in unstructured environments:
Manipulating an object through dynamic environments using multiple mobile manipulators is a computationally and kinodynamically challenging problem. The task requires the agents to discover cooperative behaviors that allow them, together with the jointly manipulated object, to navigate dynamic environments.
Presently, I am developing model-free, perception-aware algorithms that map input observations to actions using neural networks. For the aerial motion capture task, we leverage multi-agent deep reinforcement learning with a parallelized training setup and realistic synthetic environments, all wrapped in the ROS software framework (AirCapRL).
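As a rough, hypothetical illustration of such a parallelized setup (not the project's actual pipeline, which is linked under News below), vectorized simulation environments can be launched in separate processes with the Stable-Baselines3 utilities; the environment name "Pendulum-v1" is only a placeholder for a synthetic motion-capture simulation.

```python
# Hypothetical sketch of a parallelized training setup with Stable-Baselines3;
# "Pendulum-v1" stands in for a synthetic MoCap simulation environment.
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

if __name__ == "__main__":
    # Run 8 simulator instances in separate processes so that experience
    # for on-policy training is collected in parallel.
    vec_env = make_vec_env("Pendulum-v1", n_envs=8, vec_env_cls=SubprocVecEnv)
    obs = vec_env.reset()
    print(obs.shape)  # (8, observation_dimension)
```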
News
- [New! Accepted to IEEE RA-L + IROS 2020] AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning -- arXiv.
- [Code for our IEEE RA-L + IROS 2020 paper] AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning -- Code here.
- [Supplementary document for our IEEE RA-L + IROS 2020 paper] AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning -- Document here.
Robotics, Multi-Robot Systems, Motion Planning, Deep Reinforcement Learning
IEEE RA-L 2019: Active Perception based Formation Control for Multiple Aerial Vehicles
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.
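The following minimal sketch, assuming a double-integrator model and illustrative numbers, conveys the general idea rather than the paper's implementation: the tracking objective stays quadratic, while terms that would otherwise make the problem non-convex (for example, obstacle repulsion) enter the dynamics as a pre-computed external input, here called f_ext.

```python
# Minimal, hypothetical sketch: convex MPC step for one MAV where avoidance
# terms enter as a fixed external input instead of non-convex constraints.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 10                                   # time step and horizon length
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])     # 2-D double-integrator dynamics
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])

x0 = np.array([0.0, 0.0, 0.0, 0.0])               # current position and velocity
x_ref = np.array([3.0, 2.0, 0.0, 0.0])            # desired view-point state (illustrative)
f_ext = np.array([0.0, -0.4])                     # pre-computed repulsive acceleration

x = cp.Variable((4, N + 1))
u = cp.Variable((2, N))
cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.1 * cp.sum_squares(u[:, k])
    # The external input keeps avoidance out of the decision variables,
    # so the program remains a convex quadratic program.
    constr += [x[:, k + 1] == A @ x[:, k] + B @ (u[:, k] + f_ext),
               cp.norm(u[:, k], "inf") <= 2.0]

cp.Problem(cp.Minimize(cost), constr).solve()
print(x.value[:2, 1])                             # next planned position
```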
AirCapRL: Autonomous Aerial Human Motion Capture using Deep Reinforcement Learning
We introduce a deep reinforcement learning (RL) based multi-robot formation controller for the task of autonomous aerial human motion capture (MoCap). We focus on vision-based MoCap, where the objective is to estimate the trajectory of body pose and shape of a single moving person using multiple micro aerial vehicles. State-of-the-art solutions to this problem are based on classical control methods, which depend on hand-crafted system and observation models. Such models are difficult to derive and generalize across different systems. Moreover, the non-linearity and non-convexities of these models lead to sub-optimal controls. In our work, we formulate this problem as a sequential decision making task to achieve the vision-based motion capture objectives, and solve it using a deep neural network-based RL method. We leverage proximal policy optimization (PPO) to train a stochastic decentralized control policy for formation control. The neural network is trained in a parallelized setup in synthetic environments. We performed extensive simulation experiments to validate our approach. Finally, real-robot experiments demonstrate that our policies generalize to real world conditions.
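For a flavor of the training loop, here is a hedged sketch using proximal policy optimization from Stable-Baselines3 on a placeholder environment; the released AirCapRL code linked under News trains in multi-MAV synthetic environments wrapped via ROS, which are not reproduced here.

```python
# Hedged sketch of PPO training with Stable-Baselines3; "Pendulum-v1" is only
# a placeholder for the vision-based MoCap task.
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Vectorized placeholder environment standing in for the MoCap simulation.
env = make_vec_env("Pendulum-v1", n_envs=4)

# PPO learns a stochastic policy; at run time each MAV could execute its own
# copy of such a policy, yielding a decentralized formation controller.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("formation_policy")
```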
Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios
In this work, we consider the problem of decentralized multi-robot target tracking and obstacle avoidance in dynamic environments. Each robot executes a local motion planning algorithm based on model predictive control (MPC). The planner is designed as a quadratic program, subject to constraints on robot dynamics and obstacle avoidance. Repulsive potential field functions are employed to avoid obstacles. The novelty of our approach lies in embedding these non-linear potential field functions as constraints within a convex optimization framework. Our method convexifies non-convex constraints and dependencies by recasting them as pre-computed external input forces in the robot dynamics. The proposed algorithm additionally incorporates methods to avoid the local-minima problems associated with using potential field functions in planning. The motion planner does not enforce predefined trajectories or any formation geometry on the robots and is a comprehensive solution for cooperative obstacle avoidance in the context of multi-robot target tracking. We perform simulation studies in different environmental scenarios to showcase the convergence and efficacy of the proposed algorithm.
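As a small, self-contained example of the kind of repulsive term that could be pre-computed and injected into the robot dynamics as an external input force, a classic potential-field gradient is sketched below; the gain and influence radius d0 are made-up values, not the paper's parameters.

```python
# Illustrative repulsive potential-field force, active only within radius d0.
import numpy as np

def repulsive_force(robot_pos, obstacle_positions, d0=2.0, gain=1.0):
    """Sum of repulsive gradients pushing the robot away from nearby obstacles."""
    force = np.zeros(2)
    for obs in obstacle_positions:
        diff = robot_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # Gradient of 0.5 * gain * (1/d - 1/d0)^2, pointing away from the obstacle.
            force += gain * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return force

print(repulsive_force(np.array([0.0, 0.0]),
                      [np.array([1.0, 0.5]), np.array([-0.5, 3.0])]))
```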
Motion Planning for Multi-Mobile-Manipulator Payload Transport Systems
In this work, a kinematic motion planning algorithm for cooperative spatial payload manipulation is presented. A hierarchical approach is introduced to compute real-time collision-free motion plans for a formation of mobile manipulator robots. Initially, collision-free configurations of a deformable 2-D virtual bounding box are identified, over a planning horizon, to define a convex workspace for the entire system. Then, 3-D payload configurations whose projections lie within the defined convex workspace are computed. Finally, a convex decentralized model-predictive controller is formulated to plan collision-free trajectories for the formation of mobile manipulators. This approach facilitates real-time motion planning for the system and is scalable in the number of robots. The algorithm is validated in simulated dynamic environments.
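One step of such a hierarchy, checking that the 2-D projection of a candidate payload configuration lies inside a convex workspace described by half-spaces A p <= b, can be sketched as follows; the workspace polygon and payload footprint are illustrative.

```python
# Hypothetical sketch: verify that all projected payload vertices satisfy the
# half-space description A @ p <= b of a convex 2-D workspace.
import numpy as np

def in_convex_workspace(vertices_2d, A, b, tol=1e-9):
    """True if every projected payload vertex lies inside the convex workspace."""
    return all(np.all(A @ v <= b + tol) for v in vertices_2d)

# Unit-square workspace: x >= 0, x <= 1, y >= 0, y <= 1 (illustrative).
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])

payload_footprint = [np.array([0.2, 0.3]), np.array([0.8, 0.3]),
                     np.array([0.8, 0.7]), np.array([0.2, 0.7])]
print(in_convex_workspace(payload_footprint, A, b))  # True
```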