Research Repository

Outputs (35)

Understanding the Interplay Between the Digital and the Physical in Shared Augmented Reality Gaming: Probing through Urban Legends (2025)
Journal Article
Xu, J., Luna, S. M., Tigwell, G. W., Lalone, N., Saker, M., Laato, S., Dunham, J., Wang, Y., Chamberlain, A., & Papangelis, K. (2025). Understanding the Interplay Between the Digital and the Physical in Shared Augmented Reality Gaming: Probing through Urban Legends. ACM Transactions on Computer-Human Interaction. https://doi.org/10.1145/3749841

Shared Augmented Reality (Shared AR) is an emerging technology that enables multiple users to interact synchronously within a collocated AR environment. Yet, there is limited research on the group interactions and dynamics in Shared AR, particularly...

Embrace Angels: an Artistic Exploration of how Humans and Robots can Embrace (2025)
Presentation / Conference Contribution
Lancel, K., Maat, H., Ramchurn, R., Chamberlain, A., Higgins, A., Benford, S., Tennent, P., Woodward, K., Price, D., Giannachi, G., Garrett, R., & Kitsidis, C. (2025, June). Embrace Angels: an Artistic Exploration of how Humans and Robots can Embrace. Presented at Third UK AI Conference 2025, London, UK

Abstract (Poster - Video Presentation)
This video was presented at the Third UK AI Conference 2025, June 23 - 24, London, UK, and relates to Embrace Angels.

Conference
https://uk-ai.org/ukai2025/

Acknowledgements
The Turing AI World Lea...

AI Lens: A Live Generative AI Camera System for Dynamic Image Transformation (2025)
Presentation / Conference Contribution
Ramchurn, R., Chamberlain, A., Higgins, A., Benford, S., Tennent, P., Woodward, K., Price, D., Giannachi, G., Garrett, R., Kitsidis, C., Lancel, K., & Maat, H. (2025, June). AI Lens: A Live Generative AI Camera System for Dynamic Image Transformation. Presented at Third UK AI Conference 2025, London, UK

Abstract (Poster - Video Presentation)
This paper introduces AI Lens, a novel AI-powered camera system that applies live generative transformations to video in real time. By replacing a traditional camera monitor with AI-generated imagery, the syste...

AI and Intelligent Synthetic Skin: Advancing Beyond Human Skin for Sensory Human-Robot Interaction (HRI) (2025)
Presentation / Conference Contribution
Zhou, F., Chamberlain, A., & Benford, S. (2025, June). AI and Intelligent Synthetic Skin: Advancing Beyond Human Skin for Sensory Human-Robot Interaction (HRI). Poster presented at Third UK AI Conference 2025, London, UK

We present a wearable toolkit designed to enhance shared perception between humans and robots, facilitating real-time data collection to support embedded AI based applications. Advancements in wearable sensor-based technologies mean there are opportu...

Exploring Deaf And Hard of Hearing Peoples' Perspectives On Tasks In Augmented Reality: Interacting With 3D Objects And Instructional Comprehension (2025)
Presentation / Conference Contribution
Mojib Luna, S., Xu, J., Tigwell, G. W., LaLone, N., Saker, M., Chamberlain, A., Schwartz, D. I., & Papangelis, K. (2025, April). Exploring Deaf And Hard of Hearing Peoples' Perspectives On Tasks In Augmented Reality: Interacting With 3D Objects And Instructional Comprehension. Presented at 2025 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan

Tasks in augmented reality (AR), such as 3D interaction and instructional comprehension, are often designed for users with uniform sensory abilities. Such an approach, however, can overlook the more nuanced needs of Deaf and Hard of Hearing (DHH) use...

Robots in Pain, Humans in Play: Soma as a Qualitative Method for Investigating Intelligent Human-Robot Configurations (2025)
Presentation / Conference Contribution
Chamberlain, A., Ngo, V., McGarry, G., Kucukyilmaz, A., Benford, S., & Higgins, A. (2025, February). Robots in Pain, Humans in Play: Soma as a Qualitative Method for Investigating Intelligent Human-Robot Configurations. Presented at Designing for Bodies: Practices, Imaginaries and Discourses, University of Southern Denmark, Kolding

In this piece we start to explore the ways in which we somatise our interaction with robots as a symbiotic system - both as human and robot, and as human-robot. We also consider the ways that we might want to take our physical nature (existence), for...

Performance metrics outperform physiological indicators in robotic teleoperation workload assessment (2024)
Journal Article
Odoh, G., Landowska, A., Crowe, E. M., Benali, K., Cobb, S., Wilson, M. L., Maior, H. A., & Kucukyilmaz, A. (2024). Performance metrics outperform physiological indicators in robotic teleoperation workload assessment. Scientific Reports, 14(1), Article 30984. https://doi.org/10.1038/s41598-024-82112-4

Robotics holds the potential to streamline the execution of repetitive and dangerous tasks, which are difficult or impossible for a human operator. However, in complex scenarios, such as nuclear waste management or disaster response, full automation...

Human AI conversational systems: when humans and machines start to chat (2024)
Journal Article
Borsci, S., Chamberlain, A., Nichele, E., Bødker, M., & Turchi, T. (2024). Human AI conversational systems: when humans and machines start to chat. Personal and Ubiquitous Computing, 28(6), 857–860. https://doi.org/10.1007/s00779-024-01837-1

When humans and machines start to chat: beyond anthropocentrism

Digital and embedded artificial intelligent (AI) agents with conversational capabilities have gained significant attention in recent years [1, 2]. Using natural language communication...

An overview of high-resource automatic speech recognition methods and their empirical evaluation in low-resource environments (2024)
Journal Article
Fatehi, K., Torres Torres, M., & Kucukyilmaz, A. (2025). An overview of high-resource automatic speech recognition methods and their empirical evaluation in low-resource environments. Speech Communication, 167, Article 103151. https://doi.org/10.1016/j.specom.2024.103151

Deep learning methods for Automatic Speech Recognition (ASR) often rely on large-scale training datasets, which are typically unavailable in low-resource environments (LREs). This lack of sufficient and representative training data poses a significan...

Understanding user needs of personalisation-based automated systems with development and application of novel ideation cards (2024)
Presentation / Conference Contribution
Duvnjak, J., Kucukyilmaz, A., & Houghton, R. (2024, July). Understanding user needs of personalisation-based automated systems with development and application of novel ideation cards. Presented at 15th International Conference on Applied Human Factors and Ergonomics (AHFE 2024), Nice, France

Personalisation is a commonly utilised technology in socially focused online platforms. It has gathered widespread usage through its ability to match a system to the needs of users through their data. This allows systems to be more user-friendly or e...

TAS for Cats: An Artist-led Exploration of Trustworthy Autonomous Systems for Companion Animals (2023)
Presentation / Conference Contribution
Schneiders, E., Chamberlain, A., Fischer, J. E., Benford, S., Castle-Green, S., Ngo, V., Kucukyilmaz, A., Barnard, P., Row Farr, J., Adams, M., Tandavanitj, N., Devlin, K., Mancini, C., & Mills, D. (2023, July). TAS for Cats: An Artist-led Exploration of Trustworthy Autonomous Systems for Companion Animals. Presented at First International Symposium on Trustworthy Autonomous Systems (TAS 23), Edinburgh, UK

Cat Royale is an artist-led exploration of trustworthy autonomous systems (TAS) created by the TAS Hub's creative ambassadors Blast Theory. A small community of cats inhabits a purpose built 'cat utopia' at the centre of which a robot arm tries to en...

Five Provocations for a More Creative TAS (2023)
Presentation / Conference Contribution
Benford, S., Hazzard, A., Vear, C., Webb, H., Chamberlain, A., Greenhalgh, C., Ramchurn, R., & Marshall, J. (2023, July). Five Provocations for a More Creative TAS. Presented at First International Symposium on Trustworthy Autonomous Systems (TAS 23), Edinburgh, UK

Conventional wisdom has it that trustworthy autonomous systems (AS) should be explainable, dependable, controllable and safe tools for humans to use. Reflecting on a portfolio of artistic applications of TAS leads us to adopt an alternative stance and t...

Somabotics Toolkit for Rapid Prototyping Human-Robot Interaction Experiences using Wearable Haptics (2023)
Presentation / Conference Contribution
Zhou, F., Price, D., Pacchierotti, C., & Kucukyilmaz, A. (2023, July). Somabotics Toolkit for Rapid Prototyping Human-Robot Interaction Experiences using Wearable Haptics. Poster presented at IEEE World Haptics Conference, Delft, Netherlands

This work-in-progress paper presents a prototyping toolkit developed to design haptic interaction experiences. With developments in wearable and sensor technologies, new opportunities arise everyday to create rich haptic interaction experiences actin...

Socio-Technical Trust For Multi-Modal Hearing Assistive Technology (2023)
Presentation / Conference Contribution
Williams, J., Azim, T., Piskopani, A. M., Chamberlain, A., & Zhang, S. (2023, June). Socio-Technical Trust For Multi-Modal Hearing Assistive Technology. Presented at ICASSPW 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing Workshops, Rhodes Island, Greece

The landscape of opportunity is rapidly changing for audio-visual (AV) hearing assistive technology. While hearing assistive devices, such as hearing aids, have traditionally been developed for populations of deaf and hard of hearing (DHH) communitie...

Tasks of a Different Color: How Crowdsourcing Practices Differ per Complex Task Type and Why This Matters (2023)
Presentation / Conference Contribution
Wang, Y., Papangelis, K., Lykourentzou, I., Saker, M., Chamberlain, A., Khan, V.-J., Liang, H.-N., & Yue, Y. (2023, April). Tasks of a Different Color: How Crowdsourcing Practices Differ per Complex Task Type and Why This Matters. Presented at CHI '23 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany

Crowdsourcing in China is a thriving industry. Among its most interesting structures, we find crowdfarms, in which crowdworkers self-organize as small organizations to tackle macrotasks. Little, however, is known as to which practices these crowdfarm...

Resolving conflicts during human-robot co-manipulation (2023)
Presentation / Conference Contribution
Al-Saadi, Z., Hamad, Y. M., Aydin, Y., Kucukyilmaz, A., & Basdogan, C. (2023, March). Resolving conflicts during human-robot co-manipulation. Presented at ACM/IEEE International Conference on Human-Robot Interaction, Stockholm, Sweden

This paper proposes a machine learning (ML) approach to detect and resolve motion conflicts that occur between a human and a proactive robot during the execution of a physically collaborative task. We train a random forest classifier to distinguish b...

Designing for Trust: Autonomous Animal-Centric Robotic & AI Systems (2022)
Presentation / Conference Contribution
Chamberlain, A., Benford, S., Fischer, J., Barnard, P., Greenhalgh, C., Row Farr, J., Tandavanitj, N., & Adams, M. (2022, December). Designing for Trust: Autonomous Animal-Centric Robotic & AI Systems. Poster presented at Proceedings of the Ninth International Conference on Animal-Computer Interaction, Newcastle, UK

From cat feeders and cat flaps to robot toys, humans are deploying increasingly autonomous systems to look after their pets. In parallel, industry is developing the next generation of autonomous systems to look after humans in the home – most notably...

ScoutWav: Two-Step Fine-Tuning on Self-Supervised Automatic Speech Recognition for Low-Resource Environments (2022)
Presentation / Conference Contribution
Fatehi, K., Torres, M. T., & Kucukyilmaz, A. (2022, September). ScoutWav: Two-Step Fine-Tuning on Self-Supervised Automatic Speech Recognition for Low-Resource Environments. Presented at Interspeech 2022, Incheon, Korea

Recent improvements in Automatic Speech Recognition (ASR) systems obtain extraordinary results. However, there are specific domains where training data can be either limited or not representative enough, which are known as Low-Resource Environments (...

Supporting Responsible Research and Innovation within a University-based digital research programme: reflections from the “hoRRIzon” project (2022)
Journal Article
Portillo, V., Craigon, P., Dowthwaite, L., Greenhalgh, C., & Pérez-Vallejos, E. (2022). Supporting Responsible Research and Innovation within a University-based digital research programme: reflections from the “hoRRIzon” project. Journal of Responsible Technology, 12, Article 100045. https://doi.org/10.1016/j.jrt.2022.100045

Integration of Responsible Research and Innovation (RRI) principles into a research project is key to ensure outputs are ethically acceptable and socially desirable. However, translating RRI principles into practice is challenging as there are no rec...

A confirmatory factorial analysis of the Chatbot Usability Scale: a multilanguage validation (2022)
Journal Article
Borsci, S., Schmettow, M., Malizia, A., Chamberlain, A., & van der Velde, F. (2023). A confirmatory factorial analysis of the Chatbot Usability Scale: a multilanguage validation. Personal and Ubiquitous Computing, 27, 317–330. https://doi.org/10.1007/s00779-022-01690-0

The Bot Usability Scale (BUS) is a standardised tool to assess and compare the satisfaction of users after interacting with chatbots to support the development of usable conversational systems. The English version of the 15-item BUS scale (BUS-15) wa...