My research spans Computer Vision, Machine Learning, and Graphics. I focus on computing and understanding motion in the world from video. In generic scenes I study optical flow (the 2D image motion) and how it relates to physical properties of the world, including 3D shape, material, illumination, and motion. I also develop new methods to capture natural, complex human and animal motion for applications in computer vision, animation, and neuroscience.
I am interested in computer vision and machine learning with a focus on 3D scene understanding, parsing, and reconstruction. During my Ph.D. I developed probabilistic models for 3D traffic scene understanding from movable platforms.
I am leading the Robot Perception Group at the Perceiving Systems Department. In the project AirCap we have conceptualized and developed a team of micro aerial vehicles (MAVs) for markerless and outdoor human motion capture (MoCap). It is the first system consisting of multiple cooperating aerial robots that can autonomously detect, track and follow a person (subject) in a natural outdoor environment, without the need for any sensor or marker on the subject. After a capture episode, our system's novel back-end reconstructs, with a high level of accuracy, the 4D skeletal pose and body shape of the tracked human subject. The capture, performed by the robotic front-end, includes acquiring image data from the on-board cameras of the MAVs and poses of those MAVs. The back-end uses this data to perform reconstruction. For further information, please visit my group page.
I am studying Aerospace Engineering (M.Sc.) at the University of Stuttgart, specializing in control and systems engineering and information technology. I am also very interested in applying new technologies, such as neural networks, to drones. I work as a student assistant in the Robot Perception Group. My tasks include developing outdoor flight capabilities for a quadcopter using on-board state estimation, helping maintain the group's other copters, and performing experiments. Currently I am working on the project "Autonomous Blimp Navigation using Model-Based Reinforcement Learning".
I am managing our capture facilities and the data collection at the capture hall of the Perceiving Systems Department. I work with computer vision researchers to design, coordinate, schedule and run human subjects trials involving body shape and motion analysis. To collect data we use several computer vision technologies, including our unique 4D body scanner and motion capture facility, and take anthropometric measurements. I am also responsible for ethics, data safety, and lab tours for visitors of the department.
Cognitive neuroscientist working at the intersection of biological vision and artificial intelligence.
My research focus is body representation. I am interested in basic theoretical frameworks and mechanisms, but also in disturbed body representation. To this end, I conduct behavioral studies in patients with eating disorders or obesity, but also in healthy people.
My research focus and interest is in the area of 3D computer vision and computer graphics. I am especially interested in non-rigid shape analysis, statistical modelling of various kinds of shapes, and the analysis of motion data.
I am a Ph.D. student at the Max Planck Institute for Intelligent Systems, enrolled in the International Max Planck Research School (IMPRS) for Intelligent Systems (IS). I am currently researching how robotics and computer vision can be combined to create fully autonomous robot companions.
I am a Postdoctoral Researcher in the Perceiving Systems department with Michael Black at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. I received my PhD in Computer Science at Georgia Tech, advised by Devi Parikh. I have also been fortunate to spend summers at Toyota Technological Institute in Chicago (TTIC), Facebook AI Research (FAIR), Curai and Indiana University's Dept. of Psychological and Brain Sciences.
Interested in understanding human shape and motion and their interaction with the environment.
I'm a student assistant supervised by research scientists Dr. Timo Bolkart and Dr. Silvia Zuffi. My work focuses on 3D shape and texture reconstruction and related topics in computer vision and machine learning.
As a research engineer at the Perceiving Systems Department, I work alongside researchers in the field of computer graphics to develop core tools that support their projects. On a daily basis, I interact with large human motion datasets consisting of human motion capture and 3/4D scans, deep neural networks, and articulated human body models.
I am broadly interested in generative models (VAEs and GANs), and more specifically in their conditional variants. I view conditional generative models as a means of learning one-to-many mappings. Although such models, especially GANs, perform superb density estimation of high-dimensional data (images), they are far from perfect. These imperfections induce errors that manifest as spurious correlations (among other artifacts) in the generated data. Although this may not be readily noticeable, it can render such models completely unusable for many applications. I intend to address this issue.
I am a PhD student of Dr. Michael J. Black, and I am interested in studying Human-Scene Interaction (HSI). How can we develop algorithms to reconstruct, analyze, and generate these interactions? How can we jointly study human motion and the surrounding scene, and what does each one tell us about the other? This spans many areas, including 3D reconstruction, 3D learning, human pose estimation, human motion generation, learning on graphs, and generative models.
My name is Galina Henz and I'm a Trial Coordinator in the Perceiving Systems Department. I'm responsible for recruiting participants, planning and conducting captures, and data management, and I make sure the capture hall is well organised and ready for trials. I stay in touch with our scientists and use their input to improve our data collection and capturing.
Yinghao Huang is a PhD candidate at the Max Planck Institute for Intelligent Systems, supervised by Director Michael J. Black. His research interests lie in Machine Learning, Computer Vision, and Computer Graphics. More specifically, he focuses on Human Body Modelling, 3D Human Shape and Pose Estimation, and related topics.
I am a Ph.D. student in the Department of Perceiving Systems at the Max Planck Institute for Intelligent Systems, advised by Professors Sergi Pujades and Michael J. Black. My work focuses on modelling the rib cage from scans, as well as its deformation during breathing. This work is part of the larger research project CAMed, which aims to create 3D-printed custom implants.
My research concerns learning models of perception and production of non-verbal communicative behavior. Such models can be used to create richer human-robot and human-avatar interaction, for medical diagnosis systems, and for contextual synthesis of different kinds of human behaviors, e.g., guiding synthesis of hand motion from body motion.
PhD student from the University of Pennsylvania.
Engineer for motion capture and 4D scanning systems.
My research focuses on the neural basis of social perception and emotion recognition in social scenes. In particular, I examine the complexity of neural circuitry involved in social touch scene processing in healthy adults and how these processes go astray in a clinical population. To address these vital questions, I employ advanced neuroimaging techniques and computational modeling. I am currently a postdoctoral researcher at the computational cognitive neuroscience lab at Johns Hopkins University in the US.
I am a PhD student supervised by Michael Black and Siyu Tang. My research lies at the intersection of computer vision, graphics, and machine learning. Currently my focus is on developing deep learning algorithms for non-Euclidean domains and their various applications, such as building realistic 3D-mesh-based human body models. In particular, I aim to develop novel 3D clothing models.
Are there people out there? How do they move? What is their body shape? What are they wearing? For machines to interact with humans and the physical world, we need to train them to answer these questions. My research is focused on combining ideas from computer vision and machine learning to enable machines to perceive humans. During my Ph.D. I worked mostly on geometric modelling and articulated tracking from images.
My work spans both the research aspect of creating the world's most realistic human body models and the development of computationally efficient and scalable software that enables learning such models from large-scale datasets. I completed an MSc in Statistics at Imperial College London, an MSc in Artificial Intelligence at the University of Manchester, and a BEng in Mechatronics and Robotics at the University of Liverpool.
I am a PhD student in the Perceiving Systems Department, developing multi-aerial-vehicle intelligence for practical research applications with prototypes built here at the institute. My current work involves real-time, adaptive, on-board computer vision using recurrent deep neural networks.
Peter Vincent Gehler
I work on developing robust and efficient algorithms for the analysis, synthesis, and prediction of complex real-world phenomena. My current research interests are: (1) geometric deep learning for the analysis and synthesis of 3D and 4D scenes; (2) deep probabilistic models for uncertainty quantification; and (3) computationally efficient learning algorithms.
My research aims at understanding the world through the capture and analysis of heterogeneous data (MRI, CT, point clouds, images, ...) in order to create applied digital instruments that allow, for example, generating novel views of a scene, inferring human shape from a clothed scan, or predicting the amount of adipose tissue of a person from surface observations. To address this challenge, I adopt approaches from Computer Vision, Signal Processing, Computer Graphics, and Statistical Models. My research is often multi-disciplinary, as I need to combine knowledge from these different domains.
I work in the field of Virtual Humans and Affective Computing. I am interested in what makes us perceive an interacting agent as 'human', specifically in affectivity and appearance. What do the shape, pose, movement, behavior, and style of a person or a virtual human tell us about them? How does interacting with each other affect our own actions? Furthermore, I study how individual factors (such as culture) influence this perception. Outside of research, I enjoy web technologies! I support the creation of websites for scientific data acquisition and dissemination related to 3D body shape, as well as web development for scientific experiments and perceptual studies.
Currently I am interning as an Applied Scientist at Amazon Research. I am a PhD student at the Max Planck Institute for Intelligent Systems, doing my research under the supervision of Prof. Michael J. Black in the field of Computer Vision and Machine Learning. Specifically, my research focuses on 3D modelling of human bodies and faces from multi-modal information. Please visit my Google Scholar page for an updated list of publications. Before joining my PhD, I completed my master's at the Indian Institute of Science, Bangalore, India, where my focus was mainly on metric learning and face and object recognition.
I am interested in computer vision, in particular image matching, stereo vision, optical flow, and 3D reconstruction. Together with undergraduates I have constructed many computer vision datasets that are used in the well-known Middlebury benchmarks at vision.middlebury.edu.
My research is based in preclinical imaging at the Werner Siemens Imaging Center, and I focus on novel molecular imaging techniques. My research involves awake and unrestrained rodents and measurements of a more faithful neurophysiological response (to drugs, stimuli, treatments, etc.). I am interested in building a model for tracking and capturing the most commonly used research rodents in preclinical applications.
I am interested in modeling and capturing human body and hand motion with a focus on Human-Object Interaction (HOI). More specifically, my research focuses on precise body and hand motion estimation and generation for interacting with, grasping, and using new 3D objects. I am also interested in precise body mocap using multimodal sensors (IMUs, cameras, touch sensors, flex sensors, etc.) to capture accurate interactions and feedback from the environment.
I am interested in the intersection between computer vision and machine learning with a focus on holistic visual scene understanding. In particular, I am interested in analyzing and modeling people in our complex visual scenes.
My current research focuses on people perception. I am interested in how information about biological attributes and personality traits is encoded in human body shape and motion patterns and how the human visual system interprets this information. I also work on how expectations and predictions affect the perceived realism of animated virtual characters, especially when there are inconsistencies introduced by retargeting motion from one person onto the body shape of another. To achieve high ecological validity in my perceptual studies, I use virtual reality technology and realistic biometric body models. I am currently a VISTA post-doc at the Centre for Vision Research at York University in Toronto.
I conduct research at the intersection of Computer Vision, Computer Graphics, and Machine Learning. My motivation is to understand how people move and interact with the physical world to perform tasks. This involves accurately capturing real people and their whole-body interactions with scenes and objects, modeling their shape, pose, and interaction relationships, applying these models to in-the-wild real-life scenarios, either as an end application or for life-long learning, and using these models to generate realistic interacting avatars. Potential applications include Augmented/Virtual Reality, Human-Computer Interaction, Human-Robot Interaction, and robotics. The big goal is to develop human-centered AI and collaborative robots that perceive humans, understand their behavior, and help them achieve their goals. I have a PhD from the University of Bonn in Germany for my work with Juergen Gall on Hand-Object Interaction. My alma mater is Aristotle University of Thessaloniki in Greece.
I am interested in human understanding in videos. Particularly, I am exploring the use of synthetic images for learning human-related representations.
I'm a postdoctoral researcher at the Max Planck Institute for Intelligent Systems. My research focus lies in the area of 3D computer vision and computer graphics, especially non-rigid shape analysis, human body shape modelling, and dynamic clothing shape modelling.
My primary research interests are human behavior analysis, human behavior generation, and human-scene interaction. I am currently at ETH Zurich.
My research focuses on representing the appearance of people and animals in images and video sequences. I am particularly interested in 2D and 3D models that capture the variability in shape of articulated and deformable objects such as the human and animal body. My previous work focused on color image reproduction, multispectral color imaging, and the readability of colored text.