Self-Directed Machine Learning Based Prediction Of Multifaceted Gaze-Based Interactions For Extended Reality In Immersive Environments
Abstract
The aim of this study is to examine multifaceted gaze-based and gesture-based interactions with devices in an extended reality (XR) environment, with a focus on the benefits of eye-gaze interaction methods. User interactions are evaluated in detail through an online video survey, an in-lab experiment, and a task analysis based on Objectives, Drivers, Techniques, and Entry rules (ODTE). Eye-gaze input is expected to benefit people with disabilities in particular, as it offers a straightforward and practical input channel as well as detailed insight into users' attention. The study shows that gaze features commonly used to characterize eye movement can also be applied to model interaction intent. By establishing a tiered architecture, the eye-tracking technologies used in VR allow for customization and lower implementation costs.
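To make the abstract's central claim concrete, the sketch below illustrates how gaze features commonly used to describe eye movement (fixation duration, saccade amplitude, pupil diameter, fixation count) could be fed to an off-the-shelf classifier to predict interaction intent. The feature set, the synthetic placeholder data, and the random-forest model are illustrative assumptions only and do not reproduce the study's actual pipeline.

```python
# Minimal sketch: predicting interaction intent from per-window gaze features.
# The features, labels, and model choice are assumptions for illustration,
# not the pipeline described in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic stand-in data: one row per gaze window, one binary label per window
# (1 = user intends to interact with the fixated XR element, 0 = passive viewing).
n_windows = 1000
X = np.column_stack([
    rng.gamma(2.0, 150.0, n_windows),   # mean fixation duration (ms)
    rng.gamma(2.0, 2.0, n_windows),     # mean saccade amplitude (deg)
    rng.normal(3.5, 0.5, n_windows),    # mean pupil diameter (mm)
    rng.poisson(4, n_windows),          # fixation count in the window
])
y = rng.integers(0, 2, n_windows)       # placeholder intent labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice, the windowed features would be computed from the headset's eye-tracking stream rather than sampled synthetically, and the labels would come from observed interactions (e.g., selections) in the XR environment.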
License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.