Title:
DiffCap: Diffusion-Based Real-Time Human Motion Capture Using Sparse IMUs and a Monocular Camera.
Source:
IEEE transactions on visualization and computer graphics [IEEE Trans Vis Comput Graph] 2025 Dec; Vol. 31 (12), pp. 10272-10283.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: IEEE Computer Society; Country of Publication: United States; NLM ID: 9891704; Publication Model: Print; Cited Medium: Internet; ISSN: 1941-0506 (Electronic); Linking ISSN: 1077-2626; NLM ISO Abbreviation: IEEE Trans Vis Comput Graph; Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York, NY : IEEE Computer Society, c1995-
Entry Date(s):
Date Created: 20250806; Date Completed: 20251106; Latest Revision: 20251107
Update Code:
20251107
DOI:
10.1109/TVCG.2025.3596403
PMID:
40768445
Database:
MEDLINE

Further Information

Combining sparse IMUs and a monocular camera is a promising new setting for real-time human motion capture. This paper proposes a diffusion-based solution that learns human motion priors and fuses the two signal modalities seamlessly in a unified framework. Reflecting the characteristics of the two signals, the sequential visual information is treated as a whole and transformed into a condition embedding, while the inertial measurements are concatenated with the noisy body pose frame by frame to construct the sequential input of the diffusion model. This design follows two observations. First, visual information may be unavailable in some frames because of occlusions or because the subject moves out of the camera view, so aggregating the sequential visual features into a single condition embedding is robust to such occasional degradation. Second, IMU measurements are unaffected by occlusions and remain stable as long as signal transmission is reliable, so incorporating them frame by frame better exploits the temporal information available to the system. Experiments demonstrate the effectiveness of the system design and its state-of-the-art pose estimation performance compared with previous works. The code will be released.
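
To make the fusion scheme described in the abstract concrete, the following is a minimal sketch of how such a conditional denoiser could be wired together. It is not the authors' released code: the hypothetical DiffusionDenoiser class, the GRU-based encoders, and all tensor dimensions are illustrative assumptions, and the diffusion sampling loop itself is omitted. It only shows the two fusion paths the abstract describes, namely a single condition embedding from the visual sequence and frame-wise concatenation of IMU readings with the noisy pose.

# Minimal sketch (not the authors' implementation) of the fusion scheme
# described in the abstract. Module names, shapes, and sizes are assumptions.
import torch
import torch.nn as nn

class DiffusionDenoiser(nn.Module):
    def __init__(self, pose_dim=144, imu_dim=72, visual_dim=512, cond_dim=256, hidden=512):
        super().__init__()
        # The sequential visual features are pooled into ONE condition embedding,
        # so occluded or missing frames only degrade the condition gracefully.
        self.visual_encoder = nn.GRU(visual_dim, cond_dim, batch_first=True)
        # IMU readings are concatenated with the noisy pose at EVERY frame,
        # giving the denoiser dense, occlusion-robust temporal input.
        self.frame_proj = nn.Linear(pose_dim + imu_dim, hidden)
        self.time_embed = nn.Sequential(nn.Linear(1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
        self.backbone = nn.GRU(hidden + cond_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_pose, imu, visual_feats, t):
        # noisy_pose: (B, T, pose_dim)   imu: (B, T, imu_dim)
        # visual_feats: (B, T, visual_dim)   t: (B,) diffusion timestep
        _, h = self.visual_encoder(visual_feats)            # final hidden state: (1, B, cond_dim)
        cond = h[-1].unsqueeze(1).expand(-1, noisy_pose.shape[1], -1)
        x = self.frame_proj(torch.cat([noisy_pose, imu], dim=-1))
        x = x + self.time_embed(t.float().view(-1, 1)).unsqueeze(1)
        x, _ = self.backbone(torch.cat([x, cond], dim=-1))
        return self.head(x)                                  # denoised pose sequence

# Example shapes for one denoising step (batch of 2, window of 60 frames).
model = DiffusionDenoiser()
pose = model(torch.randn(2, 60, 144), torch.randn(2, 60, 72),
             torch.randn(2, 60, 512), torch.randint(0, 1000, (2,)))
print(pose.shape)  # torch.Size([2, 60, 144])

In this sketch the visual condition stays fixed for a whole window even if individual frames are occluded, while the per-frame IMU channel carries the temporal detail, mirroring the robustness argument made in the abstract.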