
Department Talks

Our Recent Research on 3D Deep Learning

  • 07 August 2020 • 11:00—12:00
  • Vittorio Ferrari

I will present three recent projects within the 3D Deep Learning research line of my team at Google Research: (1) a deep network for reconstructing the 3D shapes of multiple objects appearing in a single RGB image (ECCV'20); (2) a new conditioning scheme for normalizing-flow models, which enables several applications such as reconstructing an object's 3D point cloud from an image, or the converse problem of rendering an image given a 3D point cloud, both within the same modeling framework (CVPR'20); and (3) a neural rendering framework that maps a voxelized object to a high-quality image. It realistically renders highly textured objects and illumination effects such as reflections and shadows, and it allows controllable rendering: geometric and appearance modifications to the input are accurately reflected in the final rendering (CVPR'20).

Organizers: Yinghao Huang, Arjun Chandrasekaran

Functions, Machine Learning, and Game Development

  • 10 August 2020 • 16:00—17:00
  • Daniel Holden
  • Remote talk on Zoom

Game development requires a vast array of tools, techniques, and expertise, ranging from game design and artistic content creation to data management and low-level engine programming. Yet all of these domains have one kind of task in common: the transformation of one kind of data into another. Meanwhile, advances in machine learning have resulted in a fundamental change in how we think about these kinds of data transformations - allowing for accurate and scalable function approximation, and the ability to train such approximations on virtually unlimited amounts of data. In this talk I will present how these two fundamental changes in computer science affect game development - how they can be used to improve game technology as well as the way games are built - and the exciting new possibilities and challenges they bring along the way.
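The core idea above - replacing a hand-written data transformation with a trained approximation, using the original function itself to generate unlimited training pairs - can be sketched in a few lines. Everything here is illustrative: the example transform, the random-feature regressor, and all parameters are assumptions, not anything from the talk.

```python
import numpy as np

# A hand-written transformation of the kind found all over game tooling,
# e.g. a gameplay response curve.
def handwritten_transform(x):
    return np.sin(3 * x) + 0.5 * x

# Because we own the original function, training pairs are effectively
# unlimited: just sample inputs and record outputs.
X = np.linspace(-1.0, 1.0, 1000)[:, None]
y = handwritten_transform(X[:, 0])

# A minimal learned function approximator: least squares on random
# cosine features (a stand-in for a small neural network).
rng = np.random.default_rng(0)
W = 5.0 * rng.normal(size=(1, 64))
b = rng.uniform(-np.pi, np.pi, 64)
Phi = np.cos(X @ W + b)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# The learned stand-in reproduces the transform closely on the sampled range.
max_err = float(np.max(np.abs(Phi @ coef - y)))
print(max_err < 0.1)
```

The same recipe applies whenever the "transform" is expensive (simulation, baking, authoring) rather than cheap, which is where the approximation pays off.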

Organizers: Abhinanda Ranjit Punnakkal

  • Yoshihiro Kanamori
  • PS-Aquarium

Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images are unnaturally bright, particularly at hollowed regions such as armpits, crotches, or garment wrinkles. This paper introduces the first attempt to infer light occlusion directly in the SH formulation. Using supervised learning with convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the lack of training datasets compared to the unlimited variations of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even with a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit the data coherency. Our method accomplishes more realistic relighting than the occlusion-ignored formulation.
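Once a per-pixel light transport map with nine SH coefficients is available, relighting reduces to a per-pixel dot product with the illumination's SH coefficients, followed by an albedo multiply. The sketch below is a minimal illustration of that final step (array shapes and the toy ambient light are assumptions, not the paper's code):

```python
import numpy as np

def relight(albedo, transport, light_sh):
    """Relight a portrait from precomputed SH quantities.

    albedo:    (H, W, 3) per-pixel reflectance
    transport: (H, W, 9) per-pixel SH transport coefficients
               (occlusion is already baked into these)
    light_sh:  (9, 3)    scene illumination as 9 SH coefficients per channel
    returns:   (H, W, 3) relit image
    """
    # Per-pixel dot product of transport and light coefficients.
    shading = np.einsum("hwk,kc->hwc", transport, light_sh)
    return albedo * np.clip(shading, 0.0, None)

# Toy usage: a constant ambient light (only the DC SH coefficient is set),
# so shading is uniform and the output equals the albedo times that constant.
H, W = 4, 4
albedo = np.full((H, W, 3), 0.5)
transport = np.zeros((H, W, 9)); transport[..., 0] = 1.0
light = np.zeros((9, 3)); light[0] = 1.0
out = relight(albedo, transport, light)
print(out.shape)  # (4, 4, 3)
```

The paper's contribution is inferring `transport` (with occlusion) from a single image; with occlusion ignored, the transport map would reduce to unshadowed SH shading and over-brighten concave regions.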

Organizers: Senya Polikovsky, Jinlong Yang

Self-supervised 3D hand pose estimation

  • 23 July 2019 • 11:00—12:00
  • Chengde Wan
  • PS-Aquarium

Deep learning has significantly advanced the state of the art for 3D hand pose estimation, whose accuracy can be improved with increased amounts of labelled data. However, acquiring 3D hand pose labels can be extremely difficult. In this talk, I will present our two recent works on leveraging self-supervised learning techniques for hand pose estimation from depth maps. In both works, we incorporate a differentiable renderer into the network and formulate the training loss as a model-fitting error to update the network parameters. In the first part of the talk, I will present our earlier work, which approximates the hand surface with a set of spheres. We then model the pose prior as a variational lower bound with a variational auto-encoder (VAE). In the second part, I will present our latest work on regressing the vertex coordinates of a hand mesh model with a 2D fully convolutional network (FCN) in a single forward pass. In the first stage, the network estimates a dense correspondence field from every pixel on the image grid to the mesh grid. In the second stage, we design a differentiable operator that maps the features learned in the previous stage and regresses a 3D coordinate map on the mesh grid. Finally, we sample from the mesh grid to recover the mesh vertices and fit an articulated template mesh to them in closed form. Without any human annotation, both works perform competitively with strongly supervised methods. The latter work will also be extended to be compatible with the MANO model.
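The self-supervision principle - use a rendering of the current model estimate, and make the fitting error itself the training loss - can be illustrated with the sphere approximation. The sketch below is an assumption for illustration, not the talk's code: plain gradient descent moves sphere centres to explain 3D points back-projected from a depth map, whereas in the actual works the same kind of loss is back-propagated into network parameters.

```python
import numpy as np

def fitting_loss(points, centers, radii):
    """Mean squared distance from each point to its nearest sphere surface."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    resid = d[np.arange(len(points)), nearest] - radii[nearest]
    return float(np.mean(resid ** 2)), nearest, resid

# Toy "back-projected depth" points and a two-sphere hand stand-in.
points = np.random.default_rng(0).normal(size=(200, 3))
centers = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.5, 0.5])

loss_start, _, _ = fitting_loss(points, centers, radii)
for _ in range(100):
    _, nearest, resid = fitting_loss(points, centers, radii)
    for s in range(len(centers)):
        mask = nearest == s
        if mask.any():
            # Analytic gradient of the squared residual w.r.t. the centre.
            dirs = centers[s] - points[mask]
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-9
            centers[s] -= 0.1 * np.mean(2.0 * resid[mask, None] * dirs, axis=0)
loss_end, _, _ = fitting_loss(points, centers, radii)
print(loss_end < loss_start)  # fitting error decreases without any labels
```

No pose labels appear anywhere in the loop; the depth observations alone supervise the update, which is the key property both works exploit.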

Organizers: Dimitrios Tzionas

  • Shunsuke Saito
  • PS Aquarium

Realistic digital avatars are increasingly important in digital media, with the potential to revolutionize 3D face-to-face communication and social interactions through compelling digital embodiments of ourselves. My goal is to efficiently create high-fidelity 3D avatars from a single image captured in an unconstrained environment. These avatars must be close in quality to those created by professional capture systems, yet require minimal computation and no special expertise from the user. These requirements pose several significant technical challenges. A single photograph provides only partial information due to occlusions, and intricate variations in shape and appearance may prevent us from applying traditional template-based approaches. In this talk, I will present our recent work on clothed human reconstruction from a single image. We demonstrate that a careful choice of data representation, one that can be easily handled by machine learning algorithms, is the key to robust and high-fidelity synthesis and inference for human digitization.
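One representation in this spirit (an assumption here; the abstract does not name one) is a pixel-aligned implicit function: for any 3D query point, combine the image feature at the point's 2D projection with its depth and predict inside/outside occupancy, so output resolution is not tied to a fixed template or voxel grid. The sketch below uses a random feature map and a tiny random MLP purely to show the query structure:

```python
import numpy as np

rng = np.random.default_rng(0)
feat_map = rng.normal(size=(32, 32, 8))          # stand-in for CNN image features
W1, b1 = rng.normal(size=(9, 16)), np.zeros(16)  # tiny illustrative MLP
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def occupancy(p):
    """p: 3D query point with x, y in [-1, 1] (image plane) and z a depth value."""
    # Project the point and fetch the pixel-aligned feature.
    u = int((p[0] + 1) / 2 * 31)
    v = int((p[1] + 1) / 2 * 31)
    x = np.concatenate([feat_map[v, u], [p[2]]])   # image feature + depth
    # Small MLP mapping the conditioned query to an occupancy probability.
    h = np.maximum(0.0, x @ W1 + b1)
    return float(1 / (1 + np.exp(-(h @ W2 + b2)))[0])

val = occupancy(np.array([0.0, 0.0, 0.5]))
print(0.0 < val < 1.0)  # True
```

A surface is then recovered by evaluating `occupancy` on a dense set of query points and extracting the 0.5 level set, which decouples reconstruction detail from any fixed mesh topology.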

Organizers: Timo Bolkart

  • Dr Antonia Tzemanaki
  • PS-Aquarium

Over the past century, abdominal surgery has seen a rapid transition from open procedures to less invasive methods such as laparoscopy and robot-assisted minimally invasive surgery (R-A MIS), as they involve reduced blood loss, postoperative morbidity, and length of hospital stay. Furthermore, R-A MIS has offered refined accuracy and more ergonomic instruments for surgeons, further minimising trauma to the patient. However, training surgeons in MIS procedures is becoming increasingly long and arduous, while commercially available robotic systems adopt a design similar to conventional laparoscopic instruments, with limited novelty. Do these systems satisfy their users? What is the role and importance of haptics? Taking into account the input of end-users, as well as examining the high intricacy and dexterity of the human hand, can help to bridge the gap between R-A MIS and open surgery. By adopting designs inspired by the human hand, robotic tele-operated systems could become more accessible not only in the surgical domain but also beyond it: in areas that benefit from user-centred design, such as stroke rehabilitation, and in areas where safety concerns prevent the use of autonomous robots, such as assistive technologies and the nuclear industry.

Organizers: Dimitrios Tzionas

  • Jinlong Yang
  • PS Aquarium

In the past few years, significant progress has been made on shape modeling of the human body, face, and hands. Yet clothing shape is currently not well represented. Modeling clothing with physics-based simulation can involve tedious manual work and heavy computation, so data-driven learning approaches have emerged in the community. In this talk, I will present a stream of work aimed at learning the shape of clothed humans from captured data. It involves 3D body estimation, clothing surface registration, and clothing deformation modeling. I will conclude the talk by outlining the current challenges and some promising research directions in this field.
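A common data-driven formulation (a hedged sketch; the shapes, names, and the linear model are assumptions standing in for the learned models discussed in the talk) represents clothing as per-vertex offsets from the underlying body and regresses those offsets from body parameters:

```python
import numpy as np

# Synthetic stand-in for registered capture data: each sample pairs body
# parameters (e.g. shape/pose codes) with per-vertex clothing offsets.
rng = np.random.default_rng(1)
n_samples, n_params, n_verts = 100, 10, 50
body_params = rng.normal(size=(n_samples, n_params))
true_W = rng.normal(size=(n_params, n_verts * 3))
offsets = body_params @ true_W + 0.01 * rng.normal(size=(n_samples, n_verts * 3))

# Least-squares regressor from body parameters to clothing offsets,
# a minimal stand-in for a learned deformation model.
W, *_ = np.linalg.lstsq(body_params, offsets, rcond=None)
pred = (body_params @ W).reshape(n_samples, n_verts, 3)

# Clothed vertices = body vertices + predicted offsets (body mesh omitted here).
print(pred.shape)  # (100, 50, 3)
```

Real systems replace the linear map with a neural network and add pose-dependent dynamics, but the offsets-from-the-body structure is the part that the registration step in the pipeline makes possible.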

Organizers: Timo Bolkart

  • Marilyn Keller
  • Aquarium

Since the release of the Kinect, RGB-D cameras have been used in several consumer devices, including smartphones. In this talk, I will present two challenging uses of this technology. With multiple RGB-D cameras, it is possible to reconstruct a 3D scene and visualize it from any point of view. In the first part of the talk, I will show how such a scene can be streamed and rendered as a point cloud in a compelling way, and how its appearance can be improved by the use of external cinema cameras. In the second part of the talk, I will present my work on how an RGB-D camera can enable real walking in virtual reality by making the user aware of the surrounding obstacles. I will present a pipeline to create an occupancy map from a point cloud on the fly, on a mobile phone used as a virtual reality headset. This occupancy map can then be used to prevent the user from hitting physical obstacles while walking in the virtual scene.
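The occupancy-map step can be sketched as a simple projection of the point cloud onto the ground plane (all parameters here - cell size, extent, the height band treated as obstacles - are illustrative assumptions, not the talk's pipeline):

```python
import numpy as np

def occupancy_map(points, cell=0.1, extent=5.0, z_min=0.2, z_max=2.0):
    """Build a 2D obstacle grid from a point cloud.

    points: (N, 3) positions in metres, z up.
    Cells containing points within the user's height band [z_min, z_max]
    are marked occupied; floor and ceiling points are ignored.
    """
    n = int(2 * extent / cell)
    grid = np.zeros((n, n), dtype=bool)
    p = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    ij = np.floor((p[:, :2] + extent) / cell).astype(int)
    keep = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    grid[ij[keep, 0], ij[keep, 1]] = True
    return grid

# Two obstacle points and one floor point (filtered out by the height band).
pts = np.array([[0.0, 0.0, 1.0], [1.05, 0.0, 1.5], [0.0, 0.0, 0.05]])
g = occupancy_map(pts)
print(int(g.sum()))  # 2
```

On-device, the same binning runs per frame over the live depth stream, and the resulting grid drives the obstacle warnings shown to the user.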

Organizers: Sergi Pujades

  • Nikos Athanasiou
  • PS Aquarium

First, I will give a short analysis of the key components of my participation in SemEval 2018, an emotion-analysis contest on tweets: namely, a transfer learning approach used for emotion classification and a context-aware attention mechanism. In my second paper, I explore how brain information can improve word representations. Neural activation models proposed in the literature use a set of example words for which fMRI measurements are available in order to find a mapping between word semantics and localized neural activations. I use such models to predict neural activations for a full word lexicon. I then propose a cognitive computational model that estimates semantic similarity in the neural activation space and investigate its relative performance on various natural language processing tasks. Finally, in my most recent work I explore cross-topic word representations. In traditional Distributional Semantic Models (DSMs), like word2vec, the multiple senses of a polysemous word are conflated into a single vector-space representation. In my work, I propose a DSM that learns multiple distributional representations of a word based on different topics. Moreover, we project the different topic representations into a common space and apply a smoothing technique to group redundant topic vectors.
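The cross-topic idea can be illustrated with a toy example (vectors and the max-over-pairs similarity are assumptions for illustration, not the paper's exact scoring): keep several topic-specific vectors per word and score a word pair by its best-matching topic pair, rather than by one conflated vector.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy topic-specific representations: "bank" gets one vector in a
# finance-like topic and one in a river-like topic.
reps = {
    "bank":  [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
    "money": [np.array([0.9, 0.1])],
    "river": [np.array([0.1, 0.9])],
}

def max_sim(w1, w2):
    """Similarity via the best-matching pair of topic vectors."""
    return max(cos(a, b) for a in reps[w1] for b in reps[w2])

# A single conflated "bank" vector would have to sit between both senses;
# per-topic vectors let both pairs score highly.
print(round(max_sim("bank", "money"), 3), round(max_sim("bank", "river"), 3))
```

The projection-and-smoothing step in the paper addresses the remaining problem this sketch ignores: topic vectors that are redundant across topics should be merged rather than kept separate.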

Organizers: Soubhik Sanyal

  • Zhaoping Li
  • MPI-IS lecture hall (N0.002)

Since Hubel and Wiesel's seminal findings in the primary visual cortex (V1) more than 50 years ago, progress in vision science has been very limited within previous frameworks and schools of thought on understanding vision. Have we been asking the right questions? I will show observations motivating a new path. First, a drastic information bottleneck forces the brain to process only a tiny fraction of the massive visual input; this selection is called attentional selection, and how to select this tiny fraction is critical. Second, a large body of evidence has been accumulating to suggest that the primary visual cortex (V1) is where this selection starts, implying that the visual cortical areas along the visual pathway beyond V1 must be investigated in light of this selection in V1. Placing attentional selection at center stage, a new path to understanding vision is proposed (articulated in my book "Understanding Vision: Theory, Models, and Data", Oxford University Press, 2014). I will show a first example of using this new path, which aims to ask new questions and make fresh progress. I will relate our insights to artificial vision systems, discussing issues such as top-down feedback in hierarchical processing, analysis-by-synthesis, and image understanding.

Organizers: Timo Bolkart, Aamir Ahmad

  • Yuliang Xiu
  • PS Aquarium

Multi-person articulated pose tracking is an important yet challenging problem in human behavior understanding. In this talk, going along the road of top-down approaches, I will introduce a decent and efficient pose tracker based on pose flows. This approach achieves real-time pose tracking without loss of accuracy. Besides pose, to better understand human activities in visual content, clothing texture and geometric details also play indispensable roles. However, extrapolating them from a single image is much more difficult than for rigid objects, due to large variations in pose, shape, and clothing. I will present a two-stage pipeline that predicts human bodies and synthesizes novel views of a person from a single-view image.
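The tracking step in top-down approaches can be sketched as linking per-frame pose detections into tracks. The greedy nearest-pose matcher below is a simplified stand-in for pose flows (the threshold and distance are illustrative assumptions), but it shows the data flow: detect poses per frame, then associate them across frames.

```python
import numpy as np

def link_frames(prev_poses, cur_poses, thresh=0.5):
    """Greedily link poses across two frames.

    Each pose is a (K, 2) array of keypoint coordinates. Returns, for each
    current pose, the index of the matched previous pose, or -1 if no
    previous pose is close enough (a new track starts).
    """
    links = []
    for cur in cur_poses:
        dists = [np.mean(np.linalg.norm(cur - p, axis=1)) for p in prev_poses]
        j = int(np.argmin(dists)) if dists else -1
        links.append(j if j >= 0 and dists[j] < thresh else -1)
    return links

# Two people whose detection order swaps between frames: the linker
# recovers the correct identities from keypoint proximity.
prev = [np.zeros((17, 2)), np.ones((17, 2)) * 5]
cur = [np.ones((17, 2)) * 5.1, np.ones((17, 2)) * 0.1]
print(link_frames(prev, cur))  # [1, 0]
```

Pose flows improve on this greedy matcher by optimizing associations over short temporal windows, which is what makes the tracker robust to missed and jittery detections while staying real-time.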

Organizers: Siyu Tang

Mind Games

IS Colloquium
  • 21 December 2018 • 11:00—12:00
  • Peter Dayan
  • IS Lecture Hall

Much existing work in reinforcement learning involves environments that are either intentionally neutral, lacking a role for cooperation and competition, or intentionally simple, where agents need imagine nothing more than that they are playing versions of themselves. Richer game-theoretic notions become important as these constraints are relaxed. For humans, this encompasses issues that concern utility, such as envy and guilt, and issues that concern inference, such as recursive modeling of other players. I will discuss studies treating a paradigmatic game of trust as an interactive partially observable Markov decision process, and will illustrate the solution concepts with evidence from interactions between various groups of subjects, including those diagnosed with borderline and anti-social personality disorders.
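For readers unfamiliar with the trust game, one round works as follows (the endowment, multiplier, and fractions below are the commonly used parameters, assumed here rather than taken from the talk): the investor sends part of an endowment, the amount is multiplied in transit, and the trustee chooses how much to return.

```python
# One round of the trust game with standard illustrative parameters.
endowment = 10          # investor's starting money
sent = 5                # amount the investor chooses to send
tripled = 3 * sent      # the experimenter multiplies the transfer
returned = 0.5 * tripled  # fraction the trustee chooses to send back

investor_payoff = endowment - sent + returned
trustee_payoff = tripled - returned
print(investor_payoff, trustee_payoff)  # 12.5 7.5
```

The game-theoretic and inferential richness comes from repeating such rounds: each player must model how the other will interpret, and respond to, their choices.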