Here we show results generated with MorpheuS. For illustration, we visualize the background geometry together with the reconstruction of the target object.
Neural rendering has demonstrated remarkable success in dynamic scene reconstruction. Thanks to the expressiveness of neural representations, prior works can accurately capture motion and achieve high-fidelity reconstruction of the target object.
Despite this, real-world video scenarios often feature large unobserved regions where neural representations struggle to achieve realistic completion. To tackle this challenge, we introduce MorpheuS, a framework for dynamic 360° surface reconstruction from a casually captured RGB-D video. Our approach models the target scene as a canonical field that encodes its geometry and appearance, in conjunction with a deformation field that warps points from the current frame to the canonical space. We leverage a view-dependent diffusion prior and distill knowledge from it to achieve realistic completion of unobserved regions.
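To make the two-field design concrete, here is a minimal PyTorch sketch of a deformation field that warps frame-space samples into a canonical space, paired with a canonical field that predicts geometry (as an SDF) and appearance. The class names, layer widths, frequency encoding, offset-based warp, and SDF-plus-color parameterization are illustrative assumptions rather than the paper's exact architecture, and the view-dependent diffusion-prior distillation is omitted.

```python
# A hypothetical sketch of the canonical/deformation two-field design;
# architectural details are assumptions, not MorpheuS's exact choices.
import torch
import torch.nn as nn


class PositionalEncoding(nn.Module):
    """Standard sin/cos frequency encoding used by many neural fields."""
    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs) * torch.pi)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., D) -> (..., D * 2 * num_freqs)
        xf = x[..., None] * self.freqs               # (..., D, F)
        return torch.cat([xf.sin(), xf.cos()], dim=-1).flatten(-2)


class DeformationField(nn.Module):
    """Warps a point from the current frame into the canonical space,
    conditioned on a learned per-frame (time) embedding."""
    def __init__(self, num_frames: int, time_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.time_embed = nn.Embedding(num_frames, time_dim)
        self.enc = PositionalEncoding()
        in_dim = 3 * 2 * 6 + time_dim                # assumes num_freqs=6
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                    # predicts a 3D offset
        )

    def forward(self, x: torch.Tensor, frame_idx: torch.Tensor) -> torch.Tensor:
        t = self.time_embed(frame_idx).expand(*x.shape[:-1], -1)
        offset = self.mlp(torch.cat([self.enc(x), t], dim=-1))
        return x + offset                            # point in canonical space


class CanonicalField(nn.Module):
    """Encodes geometry (SDF) and appearance (RGB) in the canonical space."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.enc = PositionalEncoding()
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                    # 1 SDF value + 3 colors
        )

    def forward(self, x_canonical: torch.Tensor):
        out = self.mlp(self.enc(x_canonical))
        return out[..., :1], out[..., 1:].sigmoid()  # (sdf, rgb)


# Usage: warp frame-space ray samples into canonical space, then query it.
deform = DeformationField(num_frames=100)
canonical = CanonicalField()
x = torch.rand(1024, 3) * 2 - 1                      # samples in [-1, 1]^3
frame_idx = torch.tensor([7])
sdf, rgb = canonical(deform(x, frame_idx))
```

In this setup the canonical field is queried only at warped points, so the reconstruction and any completion distilled from the diffusion prior both live in a single shared space rather than per frame.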
Experimental results on various real-world and synthetic datasets show that our method can achieve high-fidelity 360° surface reconstruction of a deformable object from a monocular RGB-D video.
@inproceedings{wang2024morpheus,
  title     = {MorpheuS: Neural Dynamic 360deg Surface Reconstruction from Monocular RGB-D Video},
  author    = {Wang, Hengyi and Wang, Jingwen and Agapito, Lourdes},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages     = {20965--20976},
  year      = {2024}
}