Realistic digital avatars are increasingly important in digital media, with the potential to revolutionize 3D face-to-face communication and social interaction through compelling digital embodiment of ourselves. My goal is to efficiently create high-fidelity 3D avatars from a single image captured in an unconstrained environment. These avatars must approach the quality of those created by professional capture systems, yet require minimal computation and no special expertise from the user. These requirements pose several significant technical challenges: a single photograph provides only partial information due to occlusions, and the intricate variations in shape and appearance of clothed humans may prevent us from applying traditional template-based approaches. In this talk, I will present our recent work on clothed human reconstruction from a single image. We demonstrate that a careful choice of data representation, one that machine learning algorithms can handle easily, is the key to robust and high-fidelity synthesis and inference for human digitization.
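To make the idea of a learning-friendly data representation concrete, the sketch below shows one representation in this spirit: an implicit occupancy function conditioned on image features sampled at each 3D query point's projection into the input photograph. This is a minimal, hypothetical illustration, not necessarily the method presented in the talk; the module names, feature dimensions, and PyTorch framing are all assumptions.

```python
# Hypothetical sketch: an implicit occupancy function conditioned on
# pixel-aligned image features. Not the talk's method; all names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitOccupancy(nn.Module):
    def __init__(self, feat_dim=256, hidden=256):
        super().__init__()
        # MLP maps (image feature at the projected pixel, point depth)
        # to an inside/outside probability for the query point.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat_map, points):
        # feat_map: (B, C, H, W) encoder features from the input photo
        # points:   (B, N, 3) query points in camera space, xy in [-1, 1]
        xy = points[..., :2].unsqueeze(2)                         # (B, N, 1, 2)
        feats = F.grid_sample(feat_map, xy, align_corners=True)   # (B, C, N, 1)
        feats = feats.squeeze(-1).transpose(1, 2)                 # (B, N, C)
        z = points[..., 2:]                                       # (B, N, 1) depth
        occ = self.mlp(torch.cat([feats, z], dim=-1))             # (B, N, 1) logits
        return torch.sigmoid(occ)  # surface = 0.5 level set of this field
```

Under this framing, the reconstructed surface would be extracted as the 0.5 level set of the predicted occupancy field, for example by evaluating the network on a dense 3D grid of query points and running marching cubes.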
Biography: Shunsuke Saito is a fifth-year Ph.D. student at the University of Southern California (USC), advised by Prof. Hao Li. Prior to USC, he was a visiting researcher at the University of Pennsylvania in 2014. He received his BE (2013) and ME (2014) in Applied Physics from Waseda University, Japan. He has worked as a research intern at Yahoo Japan Corp. (2012-2013), Fove, Inc. (2015), Facebook Reality Lab Pittsburgh (2017), Pinscreen (2018), and Adobe (2019). His main research area is human digitization from minimal inputs.