
A multimodal approach to assessing user experiences with agent helpers

Adolphs, Svenja; Clark, Leigh; Ofemile, Abdulmalik; Rodden, Tom




Studies of agent helpers that use linguistic strategies such as vague language and politeness have often encountered obstacles, one of which is the quality of the agent's voice and its poor fit for deploying these strategies. The first approach in this article compares human and synthesised voices in agents using vague language. It analyses a 60,000-word text corpus of participant interviews to investigate differences in user attitudes towards the agents, their voices and their use of vague language. It finds that while vague language in agent instructors is still met with resistance, a human voice yields more positive responses than the synthesised alternatives. The second approach discusses the development of a novel multimodal corpus of video and text data that supports multiple analyses of human-agent interaction in agent-instructed assembly tasks. This approach analyses users' spontaneous facial actions and gestures during these tasks. It finds that agents are able to elicit such facial actions and gestures, and it posits that further analysis of this nonverbal feedback may help create more adaptive agents. Finally, the article suggests that these approaches can contribute to a fuller understanding of what it means to interact with software agents.

Journal Article Type: Article
Publication Date: Dec 1, 2016
Journal: ACM Transactions on Interactive Intelligent Systems
Print ISSN: 2160-6455
Electronic ISSN: 2160-6463
Publisher: Association for Computing Machinery (ACM)
Peer Reviewed: Yes
Volume: 6
Issue: 4
Article Number: 29
APA6 Citation: Adolphs, S., Clark, L., Ofemile, A., & Rodden, T. (2016). A multimodal approach to assessing user experiences with agent helpers. ACM Transactions on Interactive Intelligent Systems, 6(4), 29. doi:10.1145/2983926
Keywords: Human-agent interaction, vague language, instruction giving, gestures, facial actions, emotions
Publisher URL:
Copyright Statement: Copyright information regarding this work can be found at the following address: http://eprints.nottingh.../end_user_agreement.pdf
