GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos

Department of Computer Vision Technology (VIS), Baidu Inc.

Demo video. The hand jitter originates from unsmoothed raw motion-capture data.

Abstract

In this paper, we present GVA, a novel method for creating vivid 3D Gaussian avatars from monocular video input. Our innovation lies in addressing the intricate challenges of delivering high-fidelity human body reconstruction and accurately aligning 3D Gaussians with the skin surface. The key contributions of this paper are twofold. First, we introduce a pose refinement technique that improves hand and foot pose accuracy by aligning normal maps and silhouettes; precise poses are crucial for correct shape and appearance reconstruction. Second, we address the unbalanced aggregation and initialization bias that previously degraded the quality of 3D Gaussian avatars, via a surface-guided re-initialization method that keeps 3D Gaussian points accurately aligned with the avatar surface. Extensive qualitative and quantitative experiments demonstrate that our method reconstructs high-fidelity, vivid 3D Gaussian avatars and achieves state-of-the-art photo-realistic novel view synthesis while offering fine-grained control over body and hand poses.
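As a concrete illustration of the pose refinement objective, the sketch below combines a normal-alignment term with a silhouette term in image space. It assumes a differentiable renderer has already produced a normal map and a soft silhouette for the current body and hand pose; the tensor names and loss weights are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of a normal/silhouette alignment loss for pose refinement.
# All names (pose_refinement_loss, w_normal, w_sil) are hypothetical.
import torch
import torch.nn.functional as F


def pose_refinement_loss(rendered_normals: torch.Tensor,    # (H, W, 3) unit normals from the renderer
                         target_normals: torch.Tensor,      # (H, W, 3) normals from an image-space estimator
                         rendered_silhouette: torch.Tensor, # (H, W) soft mask in [0, 1]
                         target_mask: torch.Tensor,         # (H, W) foreground segmentation
                         w_normal: float = 1.0,
                         w_sil: float = 1.0) -> torch.Tensor:
    """Align rendered normals and silhouette with image observations."""
    # Cosine alignment of normals inside the observed foreground region.
    valid = target_mask > 0.5
    cos = F.cosine_similarity(rendered_normals[valid], target_normals[valid], dim=-1)
    normal_loss = (1.0 - cos).mean()

    # L1 silhouette consistency drives hand/foot contours toward the mask.
    sil_loss = (rendered_silhouette - target_mask.float()).abs().mean()

    return w_normal * normal_loss + w_sil * sil_loss

In practice the refined body and hand pose parameters would be the variables receiving gradients from this loss through the differentiable renderer.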

Method

The framework takes a monocular video and obtains refined body and hand poses from it. The Gaussian avatar model is deformed with the whole-body skeleton to match the pose observed in each frame, and consistency with image observations is enforced through differentiable rendering and optimization of the Gaussian properties. A surface-guided re-initialization mechanism improves rendering quality and the distribution of Gaussian points. The resulting avatar can be driven by new poses from captured videos or generated motion sequences.
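The surface-guided re-initialization can be illustrated with a minimal sketch that resamples Gaussian centers on the avatar's triangle mesh with area-weighted uniform sampling, so that drifted points are placed back onto the skin surface. The function and variable names below are assumptions for illustration, not the paper's exact procedure.

# Minimal sketch of surface-guided re-initialization of Gaussian centers.
# Assumes the posed avatar surface is available as a triangle mesh.
import torch


def reinitialize_on_surface(vertices: torch.Tensor,  # (V, 3) mesh vertices
                            faces: torch.Tensor,     # (F, 3) triangle vertex indices
                            num_gaussians: int) -> torch.Tensor:
    """Sample new Gaussian centers on the mesh surface, area-weighted."""
    tri = vertices[faces]                       # (F, 3, 3) triangle corner positions
    a, b, c = tri[:, 0], tri[:, 1], tri[:, 2]

    # Triangle areas define the sampling distribution over faces.
    areas = 0.5 * torch.linalg.cross(b - a, c - a).norm(dim=-1)
    face_idx = torch.multinomial(areas, num_gaussians, replacement=True)

    # Uniform barycentric sampling inside each selected triangle.
    u, v = torch.rand(num_gaussians), torch.rand(num_gaussians)
    flip = (u + v) > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    w = 1.0 - u - v

    centers = (u[:, None] * a[face_idx]
               + v[:, None] * b[face_idx]
               + w[:, None] * c[face_idx])
    return centers                              # (num_gaussians, 3) new Gaussian means

After resampling, the remaining Gaussian properties (scales, rotations, opacities, colors) would be re-initialized from the surface and refined again by the differentiable-rendering optimization.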


More results






BibTeX

@article{liu24-GVA,
  author    = {Liu, Xinqi and Wu, Chenming and Liu, Jialun and Liu, Xing and Zhao, Chen and Feng, Haocheng and Ding, Errui and Wang, Jingdong},
  title     = {GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos},
  journal   = {arXiv preprint},
  year      = {2024},
}