
Title:
Manual-Free Gaze Interaction via Bayesian-Based Implicit Intention Prediction.
Authors:
Source:
IEEE transactions on visualization and computer graphics [IEEE Trans Vis Comput Graph] 2025 Dec; Vol. 31 (12), pp. 10789-10800.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: IEEE Computer Society
Country of Publication: United States
NLM ID: 9891704
Publication Model: Print
Cited Medium: Internet
ISSN: 1941-0506 (Electronic)
Linking ISSN: 10772626
NLM ISO Abbreviation: IEEE Trans Vis Comput Graph
Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York, NY : IEEE Computer Society, c1995-
Entry Date(s):
Date Created: 20250929
Date Completed: 20251106
Latest Revision: 20251107
Update Code:
20251107
DOI:
10.1109/TVCG.2025.3615198
PMID:
41021964
Database:
MEDLINE

Further Information

Eye gaze is regarded as a promising interaction modality in extended reality (XR) environments. However, to avoid the Midas touch problem, selection intention is usually confirmed through additional manual techniques, such as explicit gestures (e.g., controller or hand inputs) or dwell, which limit the interaction. We present a machine learning (ML) model based on a Bayesian framework that predicts user selection intention in real time, with the distinction that all data used for training and prediction are obtained from gaze alone. The model uses a Bayesian approach to transform gaze data into selection probabilities, which are then fed into an ML model to discern selection intentions. In Study 1, we constructed a high-performance model that performs real-time inference from gaze data alone and improves prediction performance, validating the proposed methodology. In Study 2, a user study validated a manual-free selection technique built on the prediction model. We also discuss the advantages of eliminating explicit gestures and potential applications.
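The core idea described in the abstract, turning a stream of gaze samples into per-target selection probabilities via Bayesian updating, can be illustrated with a minimal sketch. This is not the authors' actual formulation (which is not reproduced in the record); it assumes hypothetical fixed on-screen targets and an isotropic Gaussian gaze-noise likelihood, with all function names invented for illustration:

```python
import math

def gaussian_likelihood(gaze, target, sigma=1.0):
    # Hypothetical likelihood of one gaze sample given a candidate target,
    # modeled as an isotropic Gaussian around the target position.
    dx, dy = gaze[0] - target[0], gaze[1] - target[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def update_selection_probs(priors, targets, gaze, sigma=1.0):
    # One Bayesian update step: posterior ∝ likelihood × prior,
    # normalized over all candidate targets.
    posts = [p * gaussian_likelihood(gaze, t, sigma)
             for p, t in zip(priors, targets)]
    z = sum(posts)
    return [p / z for p in posts]

# Two candidate targets; the gaze samples cluster near the first one.
targets = [(0.0, 0.0), (5.0, 0.0)]
probs = [0.5, 0.5]  # uniform prior over targets
for gaze in [(0.2, 0.1), (-0.1, 0.0), (0.1, -0.2)]:
    probs = update_selection_probs(probs, targets, gaze)
```

In a pipeline like the one the abstract outlines, such per-target probabilities (rather than raw gaze coordinates) would then serve as input features to a downstream ML classifier that decides whether a selection is intended.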