Self-Directed Machine Learning Based Prediction Of Multifaceted Gaze-Based Interactions For Extended Reality In Immersive Environments

Authors

  • Akey Sungheetha
  • Mesfin Abebe
  • Ketema Adere Gemeda
  • Subramanian Selvakumar
  • Rajesh Sharma R
  • Chinnaiyan R
  • Sengottaiyan N

Abstract

The intent of this investigation is to employ multifaceted gaze-based and gesture-based interactions in an extended reality (XR) device environment, with a focus on the benefits of eye-gaze interaction methods. An online video survey, an in-lab experiment, and a task analysis based on Objectives, Drivers, Techniques, and Entry rules (ODTE) are used to evaluate user interactions in detail. Eye-gaze features are expected to benefit people with disabilities in particular, offering both a straightforward, practical input method and detailed insight into users' attention. The study shows that gaze features frequently employed to characterize eye movement can also be applied to modeling interaction intent. By establishing a tiered architecture, the eye-tracking technologies used in VR allow for customization and lower implementation costs.
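The abstract does not specify the model or feature set; the following minimal Python sketch only illustrates the general idea that standard gaze features can drive interaction-intent prediction. The feature names (fixation duration, saccade amplitude, pupil diameter, dwell time), the random-forest classifier, and the synthetic data are all illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: feature choices, labels, and the
# random-forest model are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic gaze features of the kind commonly used to characterize
# eye movement: [fixation_duration_ms, saccade_amplitude_deg,
# pupil_diameter_mm, dwell_time_ms].
n = 1000
X = np.column_stack([
    rng.normal(250, 80, n),   # fixation duration (ms)
    rng.normal(5, 2, n),      # saccade amplitude (deg)
    rng.normal(3.5, 0.5, n),  # pupil diameter (mm)
    rng.normal(400, 150, n),  # dwell time on target (ms)
])
# Toy label: "intent to interact" when both fixation and dwell are long.
y = ((X[:, 0] > 250) & (X[:, 3] > 400)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In a real XR pipeline these features would be computed over a sliding window of eye-tracker samples rather than drawn from random distributions, but the training and prediction steps would follow the same shape.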


Published

2024-02-13

How to Cite

Sungheetha, A., Abebe, M., Gemeda, K. A., Selvakumar, S., R, R. S., R, C., & N, S. (2024). Self-Directed Machine Learning Based Prediction Of Multifaceted Gaze-Based Interactions For Extended Reality In Immersive Environments. Migration Letters, 21(S5), 1363–1371. Retrieved from https://migrationletters.com/index.php/ml/article/view/8160

Section

Articles