Group in Charge: PI Takayuki KANDA group (JP)
The Kyoto University team worked on integration, data collection, and user studies. The originally proposed plan for this work package consisted of two parts: first, system integration, to integrate the modules developed by the three partners into a single system; and second, data collection and user studies, to learn how to improve the system and to evaluate its performance.
In-lab user study with the robot trained to imitate ‘interactive motions’ via social learning
Using the human-human interaction training data and social learning methods described in WP3, we plan to evaluate the effectiveness of the proposed learning methods in an in-lab user study. We aim to evaluate the performance of the learned behaviors by recruiting participants to role-play as customers in interactions with the shopkeeper robot. Specifically, we will focus the evaluation on the robot's ability to correctly imitate interactive motions, using both quantitative metrics (such as the percentage of correct pointing actions) and qualitative metrics (such as the participants' subjective ratings of the robot's behavior).
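As an illustration of the planned quantitative metric, the sketch below computes the percentage of correct pointing actions from annotated interaction logs. The log structure and field names are assumptions made for illustration only, not the actual study pipeline.

```python
# Minimal sketch of the quantitative metric "percent of correct pointing
# actions". The event structure is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class PointingEvent:
    target_item: str      # item the robot actually pointed at (from annotation)
    expected_item: str    # item the customer asked about (ground truth)

def percent_correct_pointing(events: list[PointingEvent]) -> float:
    """Return the percentage of pointing actions aimed at the correct item."""
    if not events:
        return 0.0
    correct = sum(e.target_item == e.expected_item for e in events)
    return 100.0 * correct / len(events)

# Example: 3 of 4 pointing actions hit the item the customer asked about.
events = [
    PointingEvent("red_hat", "red_hat"),
    PointingEvent("blue_hat", "blue_hat"),
    PointingEvent("straw_hat", "red_hat"),
    PointingEvent("red_hat", "red_hat"),
]
print(f"{percent_correct_pointing(events):.1f}% correct")  # 75.0% correct
```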
Achievements and Related publications:
We evaluated the learning framework for the robot's 'interactive motions' and plan to make the dataset available to the project members.
Semantic processing of interaction data
As part of the effort to integrate the systems developed at Kyoto University and LAAS, work on applying the Overworld semantic processing system to the multiple-shopkeeper interaction dataset is ongoing. The Overworld system can endow the robot with perspective taking as well as physical simulation and reasoning abilities. Combined with social learning, this will enable the robot to behave appropriately in complex scenarios that the data-driven imitation learning approach has so far been unable to handle. Several new interaction scenarios for demonstrating and evaluating the new robot capabilities have been proposed.
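For illustration only, the sketch below derives simple spatial facts from tracked positions, the kind of symbolic input a semantic processing layer could reason over. The predicate name, threshold, and data layout are invented for this example and do not reflect Overworld's actual interface.

```python
# Illustrative sketch: turn tracked 2D positions into (subject, predicate,
# object) triples. Predicate names and the distance threshold are assumptions.
import math

def derive_facts(positions: dict[str, tuple[float, float]],
                 near_threshold: float = 1.2) -> list[tuple[str, str, str]]:
    """Emit 'isNear' triples for pairs of entities closer than the threshold."""
    facts = []
    names = sorted(positions)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay = positions[a]
            bx, by = positions[b]
            if math.hypot(ax - bx, ay - by) < near_threshold:
                facts.append((a, "isNear", b))
    return facts

# Example frame: a customer standing next to a shelf, the robot further away.
frame = {"customer_1": (0.5, 0.4), "shelf_A": (0.9, 0.8), "robot": (3.0, 2.0)}
print(derive_facts(frame))  # [('customer_1', 'isNear', 'shelf_A')]
```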
Achievements and Related publications:
We processed the interaction data collected by Kyoto University with the LAAS semantic processing system and ran initial tests to evaluate its effectiveness within the social interaction learning framework.
Building a real-world data collection system
Toward the goal of conducting a field study in a real-world (not in-lab) shop, we set up a data collection system (similar to the one described in WP3, Learning of Social Interactions) in a hat shop in a real shopping mall (see Figure wp6.1). We installed multiple depth sensors (Azure Kinect). By combining the sensors, the system can continuously track the whole-body movements of all people in the space; it can also record video and audio, so that the information most relevant to the interaction is captured. Additionally, action detectors were designed to detect important customer actions, such as reaching for a hat and trying on a hat.
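To make the idea of such an action detector concrete, the sketch below flags a "trying on a hat" candidate when a tracked hand stays close to the head for several consecutive frames. The joint names, distance threshold, and frame count are illustrative assumptions, not the parameters of the deployed system.

```python
# Hedged sketch of a rule-based action detector: "trying on a hat" is
# flagged when either hand stays near the head for a sustained period.
# Thresholds and joint names are illustrative assumptions.
import numpy as np

HAND_HEAD_DIST_M = 0.25   # assumed hand-to-head distance threshold (metres)
MIN_FRAMES = 15           # assumed minimum duration (~0.5 s at 30 fps)

def detect_try_on(frames: list[dict[str, np.ndarray]]) -> bool:
    """frames: per-frame dict mapping joint name -> xyz position (metres)."""
    streak = 0
    for joints in frames:
        head = joints["head"]
        near = min(np.linalg.norm(head - joints["hand_left"]),
                   np.linalg.norm(head - joints["hand_right"]))
        streak = streak + 1 if near < HAND_HEAD_DIST_M else 0
        if streak >= MIN_FRAMES:
            return True
    return False
```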

Real-world user study with autonomous robot in a hat shop
We developed an autonomous shopworker robot for a hat shop that can recognize customers' shopping activities, encourage them to try on hats by providing appropriate comments, and indirectly exert social pressure. It uses skeleton data and a random forest classifier for activity recognition. We achieved considerably higher action-recognition accuracy by using real customers' activity data, which was collected by deploying a prototype robot in the shop. We made the labeled skeleton dataset and our annotation tool publicly available. The results of our 11-day field trial at the hat shop show that the robot can provide its services reasonably well and leaves customers with a positive impression.
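As a minimal sketch of this activity-recognition approach, the example below trains a scikit-learn random forest on skeleton-derived features. The feature choice (joint positions flattened relative to a root joint), the label set, and the placeholder data are assumptions for illustration, not the features or data used in the deployed system.

```python
# Hedged sketch: random forest activity recognition from skeleton features.
# Feature extraction, labels, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

LABELS = ["browsing", "reaching_for_hat", "trying_on_hat"]  # assumed label set

def skeleton_to_features(joints: np.ndarray) -> np.ndarray:
    """joints: (n_joints, 3) positions; flatten relative to the root joint."""
    return (joints - joints[0]).ravel()

# Placeholder data standing in for labelled skeleton frames from the shop.
rng = np.random.default_rng(0)
X = np.stack([skeleton_to_features(rng.normal(size=(32, 3))) for _ in range(300)])
y = rng.integers(0, len(LABELS), size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```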

Achievements and Related publications:
Results of the field study in the hat shop, including analysis of human interactions with the robot and the dataset, are presented in:
S. Edirisinghe, S. Satake, T. Kanda. Field trial of an autonomous shopworker robot for friendly encouragement and exerting social pressure. 19th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2024), March 2024.
We developed an autonomous hat-shop robot that encourages customers to try on hats by providing comments that fit their actions, thereby also indirectly exerting social pressure. To enable it to offer this service smoothly in a real shop, we developed a large system (around 150k lines of code in 23 ROS packages) integrating various technologies, such as people tracking, shopping-activity recognition, and navigation. The robot needed to move in narrow corridors, detect customers, and recognize their shopping activities. We employed an iterative development process, repeating trial-and-error integration with the robot in the actual shop while also collecting real-world data during field testing. This process enabled us to improve our shopping-activity recognition system with real-world data and to adapt our software modules to the target shop environment. We report the lessons learned during our system development process. The results of our 11-day field trial show that our robot was able to provide its services reasonably well, and many customers expressed a positive impression of the robot and its services.
Reflection about Human-Robot Interaction mediation
A reflection on HRI mediation has been initiated at LAAS. There is a wide diversity of platforms for (tele)operating robots, owing to the many types of robots, users, and use cases. Every robot, use case, and type of user brings a unique set of expectations.
We propose that human-robot interaction should consider the opportunity to use an external device (such as a smartphone) not only to teleoperate a robot but also to mediate the interaction with it, either in telepresence or in co-presence. With smartphones and tablets we already possess very powerful devices with performant hardware: good microphones and speakers, good cameras, a high-quality touch display offering near-unlimited possibilities in graphical interface design, and access to user data, preferences, and profiles. Last but not least, how to use such a device is common knowledge for a large part of the population.
This work has benefited from discussions with our \partnerb colleagues during the integration week in March 2024 and from their experience with telepresence and co-presence with robots in real-world settings.
Achievements and Related publications:
We started this reflection and carried out a first proof of concept. It has led to one publication and to a new ANR project, Interaction Paradigm for Telepresence Robotics (IPaTRo), selected in the AAPG 2025 call.
F. Amiche, M. Juillot, A. Vigné, A. Brock, A. Clodic. First Steps of Designing Adaptable User Interfaces to Mediate Human-Robot Interaction. Short contribution at the 2024 IEEE Conference on Telepresence.
There is a wide diversity of platforms for teleoperating robots. Every robot, use case, and type of user brings a unique set of expectations. We propose that human-robot interaction should consider the opportunity to use an external device (such as a smartphone) not only to teleoperate a robot but also to mediate the interaction with it, either in telepresence or in co-presence. In this paper, we present first steps and ideas towards the development of this mediation device. As first end-users, it involves the members of the robotics department of our laboratory and considers all robotic platforms hosted there (assistance robots, terrestrial robots, humanoid robots, quadrupeds, etc.). We describe our user-centred design methodology, detailing the needs analysis and brainstorming sessions conducted. Following this, we outline the design and prototyping phases, showcasing the iterative development of our interface. Finally, we discuss the results from our evaluations and the implications for future work in creating flexible and inclusive interfaces for diverse user groups.