Fast and Precise Touch-Based Text Entry for Head-Mounted Augmented Reality with Variable Occlusion

John J. Dudley, Keith Vertanen, Per Ola Kristensson

ACM Transactions on Computer-Human Interaction (TOCHI), 2018.

We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users' intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism that lets users indicate keys that should not be modified by the auto-correction process. A unique advantage of leveraging the well-established touch input paradigm is that our system enables text entry with minimal visual clutter on the see-through display, thus preserving the user's field of view. We iteratively designed and evaluated our system and show that the final iteration supports a mean entry rate of 17.75 wpm with a mean character error rate of less than 1%. This performance represents a 19.6% improvement relative to the state-of-the-art baseline investigated: a gaze-then-gesture text entry technique derived from the system keyboard on the Microsoft HoloLens. Finally, we validate that the system is effective in supporting text entry in a fully mobile usage scenario likely to be encountered in industrial applications of AR HMDs.
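To illustrate the general idea behind statistical decoding of noisy touch input, the sketch below combines a Gaussian touch-noise model with a word-level language prior. This is a minimal toy example, not the decoder used in the paper; the key coordinates, vocabulary probabilities, and the `TOUCH_SIGMA` noise parameter are all assumed values chosen for demonstration.

```python
import math

# Hypothetical key centres (x, y) for a few QWERTY keys, in arbitrary
# keyboard coordinates. Illustrative values only.
KEY_CENTERS = {
    'h': (5.5, 1.0), 'e': (2.0, 0.0), 'l': (8.5, 1.0), 'o': (8.0, 0.0),
    'j': (6.5, 1.0), 'k': (7.5, 1.0),
}

# Toy vocabulary with assumed unigram log-probabilities.
VOCAB = {'hello': math.log(0.6), 'jello': math.log(0.1)}

TOUCH_SIGMA = 0.6  # assumed isotropic touch-noise standard deviation


def log_touch_likelihood(touch, key):
    """Log-likelihood of an observed touch point under a 2D isotropic
    Gaussian centred on the intended key."""
    (tx, ty), (kx, ky) = touch, KEY_CENTERS[key]
    d2 = (tx - kx) ** 2 + (ty - ky) ** 2
    return -d2 / (2 * TOUCH_SIGMA ** 2) - math.log(2 * math.pi * TOUCH_SIGMA ** 2)


def decode(touches):
    """Return the vocabulary word maximising P(word) * P(touches | word)."""
    best_word, best_score = None, -math.inf
    for word, log_prior in VOCAB.items():
        if len(word) != len(touches):
            continue  # toy model: exactly one touch per character
        score = log_prior + sum(
            log_touch_likelihood(t, c) for t, c in zip(touches, word)
        )
        if score > best_score:
            best_word, best_score = word, score
    return best_word


if __name__ == '__main__':
    # Noisy touches roughly over h-e-l-l-o; the first lands nearer 'j'.
    noisy = [(6.2, 1.1), (2.1, 0.2), (8.4, 0.9), (8.6, 1.2), (8.1, 0.1)]
    print(decode(noisy))  # -> 'hello': the language prior outweighs the noise
```

Even though the first touch point is geometrically closer to 'j' than to 'h', the decoder still recovers "hello" because the language prior outweighs the small likelihood deficit. This trade-off between spatial evidence and linguistic plausibility is what makes such decoders error-tolerant.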
