A multimodal approach to assessing user experiences with agent helpers

Adolphs, Svenja; Clark, Leigh; Ofemile, Abdulmalik; Rodden, Tom

Authors

Svenja Adolphs (svenja.adolphs@nottingham.ac.uk)
Professor of English Language and Linguistics

Leigh Clark

Abdulmalik Ofemile

Tom Rodden (tom.rodden@nottingham.ac.uk)
Professor of Computer Science

Abstract

Studies of agent helpers that use linguistic strategies such as vague language and politeness have often encountered obstacles. One of these is the quality of the agent's voice and its poor fit for delivering these strategies. The first approach in this article compares human and synthesised voices in agents using vague language. It analyses a 60,000-word text corpus of participant interviews to investigate differences in user attitudes towards the agents, their voices, and their use of vague language. It finds that although vague language in agent instructors is still met with resistance, a human voice yields more positive responses than the synthesised alternatives. The second approach discusses the development of a novel multimodal corpus of video and text data that supports multiple analyses of human-agent interaction in agent-instructed assembly tasks. This approach analyses users' spontaneous facial actions and gestures during these tasks. It finds that agents are able to elicit such facial actions and gestures, and posits that further analysis of this nonverbal feedback may help to create a more adaptive agent. Finally, we suggest that these approaches can contribute to a deeper understanding of what it means to interact with software agents.

Citation

Adolphs, S., Clark, L., Ofemile, A., & Rodden, T. (2016). A multimodal approach to assessing user experiences with agent helpers. ACM Transactions on Interactive Intelligent Systems, 6(4), Article 29. https://doi.org/10.1145/2983926

Journal Article Type Article
Acceptance Date Jul 1, 2016
Publication Date Dec 1, 2016
Deposit Date Aug 31, 2017
Journal ACM Transactions on Interactive Intelligent Systems
Print ISSN 2160-6455
Electronic ISSN 2160-6463
Publisher Association for Computing Machinery (ACM)
Peer Reviewed Yes
Volume 6
Issue 4
Article Number 29
DOI https://doi.org/10.1145/2983926
Keywords Human-agent interaction, vague language, instruction giving, gestures, facial actions, emotions
Public URL http://eprints.nottingham.ac.uk/id/eprint/45263
Publisher URL http://dl.acm.org/citation.cfm?doid=3015563.2983926
Copyright Statement Copyright information regarding this work can be found at the following address: http://eprints.nottingham.ac.uk/end_user_agreement.pdf