4D Dynamic Scene Reconstruction, Editing, and Generation (Talk)
People live in a 4D dynamic, moving world. While videos are the most convenient medium for capturing this dynamic world, they cannot convey its full 4D nature. 4D video reconstruction, free-viewpoint rendering, and high-quality editing and generation therefore offer innovative opportunities for content creation, virtual reality, telepresence, and robotics. Although promising, these tasks pose significant challenges in terms of efficiency, 4D motion and dynamics, temporal and subject consistency, and text-3D/video alignment. In light of these challenges, this talk will present our recent progress on representing and learning the 4D dynamic moving world, from its underlying dynamics to the reconstruction, editing, and generation of 4D dynamic scenes. The talk will also motivate discussion of future directions in multi-modal 4D dynamic human-object-scene reconstruction, generation, and perception.
Biography: Jiawei Liu is a final-year Ph.D. candidate at the National University of Singapore (NUS), advised by Prof. Mike Shou. His research focuses on 4D dynamic scene reconstruction, editing, and generation, with applications in content creation, virtual reality, and robotics. He is a recipient of the Singapore Data Science Consortium (SDSC) Dissertation Research Fellowship 2023. He is passionate about developing new 4D computer vision methods that allow machines and AI assistants to perceive the 4D world and interact with humans.