Title:
WonderHuman: Hallucinating Unseen Parts in Dynamic 3D Human Reconstruction.
Source:
IEEE transactions on visualization and computer graphics [IEEE Trans Vis Comput Graph] 2025 Dec; Vol. 31 (12), pp. 10912-10923.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: IEEE Computer Society Country of Publication: United States NLM ID: 9891704 Publication Model: Print Cited Medium: Internet ISSN: 1941-0506 (Electronic) Linking ISSN: 10772626 NLM ISO Abbreviation: IEEE Trans Vis Comput Graph Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York, NY : IEEE Computer Society, c1995-
Entry Date(s):
Date Created: 20251006 Date Completed: 20251106 Latest Revision: 20251107
Update Code:
20251107
DOI:
10.1109/TVCG.2025.3618268
PMID:
41052123
Database:
MEDLINE

Abstract:

In this paper, we present WonderHuman, a method that reconstructs dynamic human avatars from a monocular video for high-fidelity novel view synthesis. Previous dynamic human avatar reconstruction methods typically require the input video to fully cover the observed human body. In daily practice, however, only limited viewpoints are available, such as monocular front-view videos, making it difficult for previous methods to reconstruct the unseen parts of the human avatar. To tackle this issue, WonderHuman leverages 2D generative diffusion model priors to achieve high-quality, photorealistic reconstructions of dynamic human avatars from monocular videos, including accurate rendering of unseen body parts. Our approach introduces a Dual-Space Optimization technique, applying Score Distillation Sampling (SDS) in both the canonical and observation spaces to ensure visual consistency and enhance realism in dynamic human reconstruction. Additionally, we present a View Selection strategy and Pose Feature Injection to enforce consistency between SDS predictions and observed data, ensuring pose-dependent effects and higher fidelity in the reconstructed avatar. In our experiments, the method achieves state-of-the-art performance in producing photorealistic renderings from a given monocular video, particularly for the challenging unseen parts.
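The Dual-Space Optimization described above applies an SDS loss to renders in two coordinate frames. The sketch below illustrates the general SDS gradient form, grad = w(t) * (eps_hat - eps), and a toy dual-space update; it is a minimal illustration only, not the paper's implementation. The function names (`sds_gradient`, `dual_space_step`), the cosine noise schedule, and the placeholder `denoiser` callable are all assumptions for this sketch.

```python
import numpy as np

def sds_gradient(render, denoiser, t, weight=1.0, rng=None):
    """Score Distillation Sampling (SDS) gradient in the standard form
    w(t) * (eps_hat - eps), evaluated on a rendered image.

    `denoiser(noisy, t)` stands in for a pretrained 2D diffusion prior
    that predicts the noise added to the noisy render."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal(render.shape)        # sampled Gaussian noise
    alpha = np.cos(t * np.pi / 2.0) ** 2           # toy cosine noise schedule
    noisy = np.sqrt(alpha) * render + np.sqrt(1.0 - alpha) * eps
    eps_hat = denoiser(noisy, t)                   # prior's noise estimate
    return weight * (eps_hat - eps)                # gradient w.r.t. the render

def dual_space_step(canon_render, obs_render, denoiser, t, lr=0.1):
    """Toy dual-space update: accumulate SDS gradients from a render in the
    canonical space and one in the observation (posed) space, then take a
    gradient step on the canonical render."""
    g = sds_gradient(canon_render, denoiser, t) + \
        sds_gradient(obs_render, denoiser, t)
    return canon_render - lr * g
```

In the actual method the gradient would flow back through a differentiable renderer into the avatar's parameters rather than being applied to the image pixels directly; the pixel-level update here only keeps the sketch self-contained.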