I am a postdoctoral researcher in the Perceiving Systems department at the Max Planck Institute for Intelligent Systems, where I completed my Ph.D. under the supervision of Prof. Michael J. Black and Dr. Dimitrios Tzionas. My research focuses on creating virtual humans that move and interact with 3D scenes and objects in a way that closely mimics real human behavior.
My research interests span multiple areas, including precise motion capture (MoCap) with multimodal sensors (IMUs, cameras, pressure sensors, etc.), 3D reconstruction from images, human-scene interaction, and 3D geometry representation. Recently, I have expanded my focus to include diffusion models and Large Language Models (LLMs) for enhancing object interaction and scene understanding.
These interests include:
- Virtual Avatar Animation
- Human-Object Interaction
- Grasp Prediction
- Motion Synthesis and Tracking
- Object-Interaction Detection
- 3D Geometry Representation
- Diffusion Models
- Large Language Models (LLMs)
My work aims to advance human-centered AI that can accurately perceive human actions, understand complex interactions, and generate realistic virtual representations, with applications in ambient intelligence, virtual assistants, mixed reality, and the Metaverse.
Keywords: Virtual Avatar Motion, Human-Object Interaction, Grasp Generation, Motion Generation, Motion Capture, IMUs, Computer Vision, Deep Learning, Machine Learning