Performance Comparisons of Phrase Sets and Presentation Styles for Text Entry Evaluations
IUI '12: Proceedings of the ACM International Conference on Intelligent User Interfaces, 2012.
In this paper we empirically compare five publicly available phrase sets in two large-scale (N = 225 and N = 150) crowdsourced text entry experiments. We investigate the impact of asking participants to memorize phrases before writing them versus allowing participants to see the phrase during entry. We find that asking participants to memorize phrases increases entry rates at the cost of slightly increased error rates, and that this holds for both a familiar and an unfamiliar text entry method. We find statistically significant differences between some of the phrase sets in both entry and error rates. Based on our data, we arrive at a set of recommendations for choosing a suitable phrase set for text entry evaluations.
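The entry and error rates discussed in the abstract are conventionally measured in text entry research as words per minute (with one "word" defined as five characters) and as a minimum-string-distance error rate between the presented and transcribed phrases. A minimal sketch of these standard metrics (the function names are illustrative and not taken from the paper):

```python
def wpm(transcribed: str, seconds: float) -> float:
    # Entry rate in words per minute: (|T| - 1) characters over the trial
    # duration, scaled to minutes, with five characters per "word".
    return (len(transcribed) - 1) / seconds * 60.0 / 5.0

def msd(a: str, b: str) -> int:
    # Minimum string distance (Levenshtein distance) between two strings,
    # computed with a rolling dynamic-programming row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def error_rate(presented: str, transcribed: str) -> float:
    # Uncorrected error rate: MSD normalized by the longer string's length.
    return msd(presented, transcribed) / max(len(presented), len(transcribed))
```

For example, transcribing an 11-character phrase in 12 seconds gives an entry rate of 10 WPM, and a single substitution error in a 4-character phrase gives an error rate of 0.25.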