Efficient Computer Interfaces Using Continuous Gestures, Language Models, and Speech
M.Phil. thesis, University of Cambridge, 2004.
Despite advances in speech recognition technology, users of dictation systems still face a significant amount of work correcting errors made by the recognizer. The goal of this work is to investigate a continuous gesture-based data entry interface as an efficient and fun way for users to correct recognition errors. Towards this goal, techniques are investigated that expand a recognizer's results to better cover its recognition errors. Additionally, models are developed that use a speech recognizer's n-best list to build letter-based language models.
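To picture the second idea, here is a minimal sketch of deriving a letter-based language model from an n-best list. All names, the bigram order, and the probability-weighted counting scheme are illustrative assumptions for this sketch, not the models actually developed in the thesis:

```python
from collections import defaultdict

def letter_bigram_model(nbest):
    """Build a letter bigram model P(cur | prev) from an n-best list of
    (hypothesis, probability) pairs. Illustrative sketch only."""
    counts = defaultdict(lambda: defaultdict(float))
    for text, prob in nbest:
        chars = "^" + text.lower()  # "^" marks the start of a hypothesis
        for prev, cur in zip(chars, chars[1:]):
            counts[prev][cur] += prob  # weight counts by hypothesis probability
    # Normalize each context's counts into a conditional distribution
    model = {}
    for prev, nxt in counts.items():
        total = sum(nxt.values())
        model[prev] = {c: w / total for c, w in nxt.items()}
    return model

# Hypothetical n-best list with posterior probabilities
nbest = [("recognize speech", 0.6), ("wreck a nice beach", 0.4)]
model = letter_bigram_model(nbest)
```

A gesture-based interface such as Dasher could then use these conditional letter probabilities to make the letters of likely corrections easier to select.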