Finding information and finding locations in a multimodal interface: A case study of an intelligent kiosk
Increasingly, technology developers are turning to interactive, intelligent kiosks to provide routine communicative functions such as greeting and informing people as they enter public, corporate, retail, or healthcare spaces. A number of studies have found intelligent kiosks to be usable, with participants reporting them to be appealing, useful, and even entertaining. However, the field still lacks insight into the ways in which people use multimodal interfaces to seek information and accomplish tasks. The Memphis Intelligent Kiosk Initiative (MIKI) project was designed for multimodal use, and although it exemplified good interface design in a number of areas during usability testing, the complexity of its multiple modalities - including animated graphics, speech technology, and an avatar greeter - complicated that testing, leaving developers seeking improved instruments. In particular, factors such as the gender and technical background of the user appeared to change how various kiosk tasks were perceived, and deficiencies were observed both in the speech interaction and in the location information presented in a 3D animated map.
Proceedings of the 2nd IASTED International Conference on Human-Computer Interaction, HCI 2007
Kim, L., McCauley, T., & Polkosky, M. (2007). Finding information and finding locations in a multimodal interface: A case study of an intelligent kiosk. Proceedings of the 2nd IASTED International Conference on Human-Computer Interaction, HCI 2007, 111-117. Retrieved from https://digitalcommons.memphis.edu/facpubs/3818