Result: Realistic Avatar Control Through Video-Driven Animation for Augmented Reality

Title:
Realistic Avatar Control Through Video-Driven Animation for Augmented Reality
Contributors:
Saveetha Engineering College [Chennai], Mieczyslaw Lech Owoc, Felix Enigo Varghese Sicily, Kanchana Rajaram, Prabavathy Balasundaram
Source:
7th International Conference on Computational Intelligence in Data Science (ICCIDS). :306-314
Publisher Information:
CCSD; Springer Nature Switzerland, 2024.
Publication Year:
2024
Collection:
collection:IFIP-LNCS
collection:IFIP
collection:IFIP-AICT
collection:IFIP-TC
collection:IFIP-TC12
collection:IFIP-ICCIDS
collection:IFIP-AICT-717
Subject Geographic:
Original Identifier:
HAL: hal-05140754
Document Type:
Conference paper (conferenceObject)
Language:
English
ISBN:
978-3-031-69981-8
Relation:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-031-69982-5_23
DOI:
10.1007/978-3-031-69982-5_23
Accession Number:
edshal.hal.05140754v1
Database:
HAL

Further Information

Part 2: Applications of AI/ML in Image Processing
This paper proposes an efficient real-time framework that generates detailed avatar animations solely from monocular camera video, avoiding costly motion-capture equipment. The framework extracts 3D facial and body landmarks from the input video using BlazePose key points. A novel adaptor mapping function then transforms the 2D landmark topology into diverse 3D avatar rigs, overcoming topology limitations and enabling the animation of different characters. The unified approach produces high-fidelity lip sync, facial expressions, gestures, and full-body motions in real time, allowing the avatar to closely mimic the person in the video. Extensive experiments validate that the framework generates realistic avatar animations comparable to motion capture, with applications in immersive real-time VR/AR entertainment and animation. Key innovations include the adaptor mapping function that lifts 2D landmarks into 3D avatar motions, and the real-time performance needed to animate avatars directly from monocular video.
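The core idea of the adaptor mapping function, retargeting named pose landmarks from the video onto a differently structured avatar rig, can be sketched in a few lines. The landmark values, rig joint names, joint map, and scale below are illustrative assumptions for the sketch, not the authors' actual implementation.

```python
import numpy as np

# Hypothetical sketch of an "adaptor mapping": retarget BlazePose-style named
# 3D landmarks (normalized image coordinates) onto a target avatar rig whose
# joints use different names and live at a different scale.

# Source landmarks: name -> (x, y, z) in normalized coordinates (assumed data).
SOURCE_LANDMARKS = {
    "left_shoulder":  np.array([0.40, 0.30, -0.10]),
    "right_shoulder": np.array([0.60, 0.30, -0.10]),
    "left_elbow":     np.array([0.35, 0.45, -0.05]),
    "left_wrist":     np.array([0.33, 0.60,  0.00]),
}

# Adaptor table: source landmark names -> target-rig joint names (hypothetical).
JOINT_MAP = {
    "left_shoulder":  "L_Arm_Upper",
    "right_shoulder": "R_Arm_Upper",
    "left_elbow":     "L_Arm_Lower",
    "left_wrist":     "L_Hand",
}

def retarget(landmarks, joint_map, rig_scale=1.8, root_offset=None):
    """Map named landmarks onto rig joints, rescaling from normalized
    image space into the avatar's units (rig_scale ~ avatar height in m)."""
    if root_offset is None:
        # Use the shoulder midpoint as a root so the pose is rig-relative.
        root_offset = 0.5 * (landmarks["left_shoulder"]
                             + landmarks["right_shoulder"])
    pose = {}
    for src_name, joint_name in joint_map.items():
        pose[joint_name] = (landmarks[src_name] - root_offset) * rig_scale
    return pose

pose = retarget(SOURCE_LANDMARKS, JOINT_MAP)
print(pose["L_Arm_Upper"])  # left shoulder relative to the root, rescaled
```

In a real pipeline this per-frame dictionary would drive the rig's joint transforms each video frame; the paper's mapping additionally handles facial landmarks and differing rig topologies, which this sketch omits.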