My research spans Computer Vision, Machine Learning, and Graphics. I focus on computing and understanding motion in the world from video. In generic scenes I study optical flow (the 2D image motion) and how it relates to physical properties of the world including 3D shape, material, illumination, and motion. I also develop new methods to capture natural, complex, human and animal motion for applications in computer vision, animation, and neuroscience.
I am interested in computer vision and machine learning with a focus on 3D scene understanding, parsing and reconstruction. During my Ph.D. I have developed probabilistic models for 3D traffic scene understanding from movable platforms.
I am interested in the intersection between computer vision and machine learning with a focus on holistic visual scene understanding. In particular, I am interested in analyzing and modeling people in complex visual scenes.
I am studying Aerospace Engineering (M.Sc.) at the University of Stuttgart, specializing in control and systems engineering and information technology. I am also very interested in applying new technologies, such as neural networks, to drones. Currently I am working as a student assistant in the Robot Perception Group. My tasks include developing outdoor flight capabilities for a quadrocopter using on-board state estimation, helping to maintain other copters, and performing experiments.
I am leading the Robot Perception Group at the Perceiving Systems Department. We have developed MPC-based formation-control methods to jointly perceive moving people from multiple flying robots, each equipped with a monocular camera. Results from real-robot experiments have not only validated our approach but also laid a strong foundation for future research directions in this context. For further information, please visit my group page.
I work with computer vision researchers to coordinate, schedule and run human subjects trials involving body shape and motion analysis at the Perceiving Systems Department. To collect data we use several computer vision technologies, including our unique 3D and 4D body scanners and our new 4D face scanner.
I am working at the intersection of Natural Language and Computer Vision. I am particularly interested in analyzing the connection between human motion and language.
Cognitive neuroscientist working at the intersection between biological vision and artificial intelligence.
My research focus is body representation. I am interested in basic theoretical frameworks and mechanisms, but also in disturbed body representation. To this end, I conduct behavioral studies in patients with eating disorders or obesity, but also in healthy people.
My research focus and interest is in the area of 3D computer vision and computer graphics. I am especially interested in non-rigid shape analysis, statistical modelling of various kinds of shapes, and the analysis of motion data.
Daniel's research focused on understanding the link between semantics and vision. He believed that our intelligence and ability to perceive our surroundings are strongly influenced by language and meaning. He was also very interested in human emotions, facial expressions, sentiment analysis, multimodal learning, transfer learning, and 3D modelling of human bodies and faces, among others.
I'm a student assistant supervised by research scientists Dr. Timo Bolkart and Dr. Silvia Zuffi. My work focuses on 3D shape and texture reconstruction and related topics in computer vision and machine learning.
I joined the Max Planck ETH Center for Learning Systems as a Ph.D. student in September 2019, where I am supervised by Michael Black and Marc Pollefeys.
Bachelor's student in computational linguistics.
As a research engineer at the Perceiving Systems Department, I work alongside researchers in the field of computer graphics to develop basic tools that help them with their projects. On a daily basis, I interact with large human motion datasets consisting of motion capture and 3D/4D scans, deep neural networks, and articulated human body models.
I am interested in learning human-object interaction, starting with human-ground interaction (walking/running), and later extending this to more complex hand-manipulated objects.
Yinghao Huang is a PhD candidate at the Max Planck Institute for Intelligent Systems, supervised by Director Michael J. Black. His research interests fall in the areas of Machine Learning, Computer Vision, and Computer Graphics. More specifically, he focuses on Human Body Modelling, 3D Human Shape and Pose Estimation, and related topics.
Perception is a fundamental part of intelligence: perception is necessary to acquire knowledge, and knowledge is necessary to understand perception. Computer vision is therefore one of the most important aspects of realizing intelligent systems. My research interest lies in computer vision and its combination with machine learning, which, to my mind, will enable the realization of intelligent systems. Currently, I am working on optical flow and on incorporating high-level information to alleviate this ill-posed problem.
My research concerns learning models of perception and production of non-verbal communicative behavior. Such models can be used to create richer human-robot and human-avatar interaction, for medical diagnosis systems, and for contextual synthesis of different kinds of human behaviors, e.g., guiding synthesis of hand motion from body motion.
Technician for motion capture and 4D scanning systems.
How can autonomous perception discover high-dimensional patterns in recorded data from our environment? I am approaching this question by working on structured computer vision tasks, such as Human Pose Estimation. I hope that insights from this area will improve our data analysis systems, so that they can assist us in better understanding our environment.
I am a Ph.D. student in Robotics at Georgia Tech, advised by Prof. James Rehg. I also work closely with Prof. Yin Li. I'm currently working as a Research Intern at the Max Planck Institute for Intelligent Systems with Dr. Siyu Tang and Dr. Michael Black. Before joining Georgia Tech, I completed my Master's degree at Carnegie Mellon University. My research interests lie at the intersection of Computer Vision, Machine Learning, and Robotics. In particular, I'm interested in understanding human attention and human actions from a first-person vision perspective.
Yu Tang Liu is interested in control system development. His current master's thesis focuses on autonomous blimp navigation using reinforcement-learning-based methods.
Tracking, Systems Technical Assistance
I am a PhD student supervised by Dr. Michael Black and Dr. Siyu Tang. My research lies at the intersection of computer vision, graphics, and machine learning. Currently my focus is on developing deep learning algorithms for non-Euclidean domains and their various applications, such as building realistic 3D-mesh-based human body models. In particular, I aim to develop novel 3D clothing models.
Are there people out there? How do they move? What is their body shape? What are they wearing? For machines to interact with humans and the physical world, we need to train them to answer these questions. My research is focused on combining ideas from computer vision and machine learning to enable machines to perceive humans. During my Ph.D. I worked mostly on geometric modelling and articulated tracking from images.
I work on decomposing photographs into their intrinsic layers of reflectance and shading, using deep learning methods for fast inference. In addition, I have started working on interactive semantic segmentation using CNNs.
My work spans both the research aspect of creating the world's most realistic human body models and the development of computationally efficient and scalable software that enables learning such models from large-scale datasets. I completed an MSc in Statistics at Imperial College London, an MSc in Artificial Intelligence at the University of Manchester, and a BEng in Mechatronics and Robotics at the University of Liverpool.
I'm a second-year PhD student in the Perceiving Systems Department. I'm developing multi-aerial-vehicle intelligence for practical research applications, with prototypes built here at the institute. My current work involves integrating detections from real-time deep neural networks into cooperative multi-vehicle sensor fusion.
My research aims at understanding the world through the capture and analysis of heterogeneous data (MRI, CT, point clouds, images, ...) in order to create applied digital instruments that allow, for example, generating novel views of a scene, inferring human shape from a clothed scan, or predicting the amount of adipose tissue of a person from surface observations. To address this challenge, I adopt the approaches of Computer Vision, Signal Processing, Computer Graphics, and Statistical Models. My research is often multi-disciplinary, as I need to combine knowledge from these different domains.
I work in the field of Virtual Humans and Affective Computing. I am interested in what makes us perceive an interacting agent as 'human'. Specifically, I am interested in affectivity and appearance. What do the shape, pose, movement, behavior, and style of a person or a virtual human tell us about them? How do our interactions with each other affect our own actions? Furthermore, I study how individual factors (such as culture) influence this perception. Outside of research, I enjoy web technologies! I support the creation of websites for scientific data acquisition and dissemination related to 3D body shape, as well as web development for scientific experiments and perceptual studies.
One of the requirements for enabling machines to perceive and interact in a human environment is to accurately perceive humans and their activities. My research is related to different aspects of movement perception and modeling. Since completing my PhD, I have been focusing on human hand modeling, detection, and pose estimation.
I am interested in computer vision, in particular image matching, stereo vision, optical flow, and 3D reconstruction. Together with undergraduates I have constructed many computer vision datasets that are used in the well-known Middlebury benchmarks at vision.middlebury.edu.
My research interests are in motion estimation and scene understanding. In particular I'm interested in exploring and modeling how the semantics and the motion of the scene are related.
My research is based in preclinical imaging at the Werner Siemens Imaging Center, and I am focused on novel molecular imaging techniques. My research involves awake and unrestrained rodents and the measurement of a more faithful neurophysiological response (to drugs, stimuli, treatments, etc.). I am interested in building a model for tracking and capturing the most commonly used research rodents in preclinical applications.
My goal is to apply statistical human body models in various research domains such as psychology, cognitive science, and medicine. A primary goal is to make our body software accessible to more people. For this purpose I interact with various research groups who need body data and software for doing experiments. I manage these relationships, and support the transfer of body shapes as needed.
I am interested in modeling and capturing human body and hand motion, with a focus on human-environment interaction and haptic motion capture. More specifically, my research focuses on precise body and hand mocap from IMUs, images, or other modalities, using machine learning and deep learning to capture interactions with, and feedback from, the environment.
I am working with Dr. Aamir Ahmad on the problem of multi-robot obstacle avoidance for target-tracking scenarios using model-predictive optimization.
My current research focuses on people perception. I am interested in how information about biological attributes and personality traits is encoded in human body shape and motion patterns and how the human visual system interprets this information. I also work on how expectations and predictions affect the perceived realism of animated virtual characters, especially when there are inconsistencies introduced by retargeting motion from one person onto the body shape of another. To achieve high ecological validity in my perceptual studies, I use virtual reality technology and realistic biometric body models. I am currently a VISTA post-doc at the Centre for Vision Research at York University in Toronto.
For my PhD I worked with Juergen Gall on Hand-Object Interaction. In particular we focused on capturing the motion of hands interacting with each other and/or with a rigid or an articulated object. We further studied the case of acquiring missing knowledge about the manipulated object, i.e. its shape or its kinematic model.
I am interested in human understanding in videos. Particularly, I am exploring the use of synthetic images for learning human-related representations.
I'm a postdoctoral researcher at the Max Planck Institute for Intelligent Systems. My research focus lies in the area of 3D computer vision and computer graphics, especially non-rigid shape analysis, human body shape modelling, and dynamic clothing shape modelling.
human behavior analysis, human motion generation, human-scene interaction
My research focuses on representing the appearance of people and animals in images and video sequences. I am particularly interested in 2D and 3D models that capture the variability in shape of articulated and deformable objects like the human and animal body. My previous work focused on color image reproduction, multispectral color imaging, and the readability of colored text.