2023, Contributo in atti di convegno, ENG
Sebastian Beyrodt, Matteo Lavit Nicora, Fabrizio Nunnari, Lara Chehayeb, Pooja Prajod, Tanja Schneeberger, Elisabeth André, Matteo Malosio, Patrick Gebhard, Dimitra Tsovaltzi
This study evaluates a socially interactive agent to create an embodied cobot. It tests a real-time continuous emotional modeling method and an aligned transparent behavioral model, BASSF (boredom, anxiety, self-efficacy, self-compassion, flow). The BASSF model anticipates and counteracts counterproductive emotional experiences of operators working under stress with cobots on tedious tasks. The flow experience is represented in the three-dimensional pleasure, arousal, and dominance (PAD) space. The embodied covatar (cobot and avatar) is introduced to support flow experiences through emotion regulation guidance. The study tests the model's main theoretical assumptions about flow, dominance, self-efficacy, and boredom. Twenty participants worked on a task for an hour, assembling pieces in collaboration with the covatar. After the task, participants completed questionnaires on flow, their affective experience, and self-efficacy, and they were interviewed to understand their emotions and regulation during the task. The results suggest that the dominance dimension plays a vital role in task-related settings, as it predicts the participants' self-efficacy and flow. However, the relationship between flow, pleasure, and arousal requires further investigation. Qualitative interview analysis revealed that participants regulated negative emotions, such as boredom, even without support, but some strategies could negatively impact well-being and productivity, which aligns with theory.
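As an illustration of the PAD representation mentioned in this abstract, the sketch below maps a point in PAD space to coarse BASSF-like states. The thresholds and region boundaries are illustrative assumptions for demonstration only, not the mapping actually used in the paper.

```python
# Illustrative only: coarse mapping from PAD coordinates (each in [-1, 1])
# to BASSF-like emotional states. The thresholds are assumptions,
# not the paper's actual model.
def classify_pad(pleasure: float, arousal: float, dominance: float) -> str:
    if pleasure > 0 and dominance > 0:
        return "flow"        # pleasant, in-control experience
    if pleasure < 0 and arousal < 0:
        return "boredom"     # unpleasant, low-activation experience
    if pleasure < 0 and arousal > 0 and dominance < 0:
        return "anxiety"     # unpleasant, high activation, low control
    return "neutral"
```

In such a scheme, an agent monitoring the operator's PAD estimate could trigger emotion-regulation guidance whenever the trajectory drifts out of the "flow" region.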
2022, Contributo in atti di convegno, ENG
Samuele Sandrini; Marco Faroni; Nicola Pedrocchi
A good estimate of action costs is key in task planning for human-robot collaboration. The duration of an action depends on the agents' capabilities and on the correlation between actions performed simultaneously by the human and the robot. This paper proposes an approach to learning action costs and the coupling between actions executed concurrently by humans and robots. We leverage information from past executions to learn the average duration of each action and a synergy coefficient representing the effect of an action performed by the human on the duration of the action performed by the robot (and vice versa). We implement the proposed method in a simulated scenario where both agents can access the same area simultaneously. Safety measures require the robot to slow down when the human is close, indicating poor synergy between tasks operating in the same area. We show that our approach can learn such bad couplings, so that a task planner can leverage this information to find better plans.
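The learning scheme described in this abstract can be sketched as follows. The class and method names are assumptions for illustration, and the synergy coefficient is simplified to a ratio of observed durations; the paper's actual estimator may differ.

```python
from collections import defaultdict

class SynergyModel:
    """Illustrative sketch: learn average action durations and a
    synergy coefficient from logs of past concurrent executions.
    (Names and the exact estimator are assumptions, not the paper's.)"""

    def __init__(self):
        self._durations = defaultdict(list)  # robot action -> all observed durations
        self._paired = defaultdict(list)     # (robot action, concurrent human action) -> durations

    def record(self, robot_action, human_action, robot_duration):
        self._durations[robot_action].append(robot_duration)
        self._paired[(robot_action, human_action)].append(robot_duration)

    def avg_duration(self, robot_action):
        obs = self._durations[robot_action]
        return sum(obs) / len(obs)

    def synergy(self, robot_action, human_action):
        # > 1 means the concurrent human action slows the robot down
        # (e.g. a safety slow-down in a shared area): a bad coupling.
        paired = self._paired[(robot_action, human_action)]
        return (sum(paired) / len(paired)) / self.avg_duration(robot_action)
```

A task planner could then penalize plans that schedule action pairs with a synergy coefficient well above 1 concurrently, as the abstract suggests.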
2021, Contributo in atti di convegno, ENG
Zedda E.
Robots are becoming more present in our daily activities, particularly in the health context. To improve human-robot interaction in a training session, it is important to design and develop social behavior and personality in robots. Recent studies have found that personality is an essential feature for creating socially assistive robots. For this purpose, I want to investigate whether the robot's personality (extroverted or introverted) can improve cognitive performance in older adults with Mild Cognitive Impairment during one-on-one cognitive training.
2020, Progetto, ENG
Gianluca BALDASSARRE - Valerio SPERATI - Beste OZCAN
+me is an experimental soft interactive toy in the shape of a panda, designed in collaboration with developmental therapists. Thanks to embedded electronics, +me can emit attractive responses such as colored lights and amusing sounds when touched on its paws. The device is connected to a control tablet that allows an adult caregiver to modify the input-output contingencies, so as to produce several rewarding response patterns according to the child's reactions. These features make +me a potential support tool for the therapy of children with Autism Spectrum Disorders (ASD): the attractiveness and functional versatility of the device could be exploited during play activities with a therapist to capture children's interest and encourage their social interactions, thus reinforcing pivotal social skills such as imitation, eye contact, and turn-taking. +me is currently in an experimental phase, in pilot tests on children (24-48 months) with ASD and with Typical Development (TD). The experiments are conducted in collaboration with the Section of Child and Adolescent Neuropsychiatry of the University of Rome "Sapienza", whose researchers have shown great interest in technological aids for therapy. +me is an indirect, yet very promising, outcome of the ongoing FET project "Goal-based Open-ended Autonomous Learning Robots, GOAL-Robots" (Grant Agreement No. 713010). The main goal of the proposal is to develop +me from the current experimental prototype into a "ready-for-market" product certified for safe use in the EU. The proposal aims to: 1) engineer the prototype, through a third-party company expert in electronic manufacturing, in order to realise a small-scale production of 10 to 20 working +me samples; 2) at the same time, continue the experimentation on TD and ASD children, to strengthen the scientific grounding of the device; 3) disseminate the results obtained through scientific publications and participation in market-oriented promotional events, in view of possible future commercialisation.
2020, Contributo in atti di convegno, ENG
Zedda E.
Robots are becoming more and more present in our daily activities. In order to improve user interaction with them, it is important to design robot behaviors that show social attitude and the ability to adapt to users. For this purpose, robots should adapt their behavior by recognizing the user's emotions, also taking into account users with cognitive and physical disabilities. However, most contemporary approaches rarely attempt to use recognized emotional features in an active manner to modulate robot decision-making and dialogue for the benefit of the user. In this project, I aim to design and implement a module in a humanoid robot to create adaptive behavior in a social robot for older adults who may have cognitive impairments.
2017, Contributo in atti di convegno, ENG
Agnese Augello, Ignazio Infantino, Antonio Lieto, Umberto Maniscalco, Giovanni Pilato, Filippo Vella
The capacity of AI systems to explain their decisions is nowadays a major challenge for both academia and industry (consider, for example, the autonomous-car sector). In this paper we sketch a preliminary proposal suggesting the adoption of a dual-process approach for computational explanation. Our proposal is instantiated in the field of Human-Robot Social Interaction, namely in a gesture recognition task.
2016, Contributo in atti di convegno, ENG
Cesta, Amedeo; Orlandini, Andrea; Bernardi, Giulio; Umbrico, Alessandro
The collaboration between humans and robots is a current technological trend that faces various challenges, among them the seamless integration of their respective working capabilities. Industrial robots have demonstrated their capacity to meet the needs of many applications, offering accuracy and efficiency, while humans possess experience and the ability to reason over that experience, capabilities that remain irreplaceable. Clearly, a symbiotic integration of humans and robots in working scenarios opens up new problems: effective collaboration requires intelligent coordination. This paper presents an interactive environment for facilitating the collaboration between humans and a robot performing shared tasks in industrial environments. In particular, we introduce a tool based on AI planning technology to support the smooth intertwining of the activities of the two actors in the work environment. The paper presents a case study from a real-world environment, describes a comprehensive architectural approach to the problem of coordinated interaction, and then presents details on the current status of the tool.
2015, Articolo in rivista, ENG
Russo, L. O.; Airò Farulla, G.; Pianu, D.; Salgarella, A. R.; Controzzi, M.; Cipriani, C.; Oddo, C. M.; Geraci, C.; Rosa, S.; Indaco, M.
We present a novel system for remotely controlling an anthropomorphic robotic hand using gestures. Our system uses a low-cost depth sensor as the only input device. The basic idea behind the presented system is that a user performs hand shapes in front of the depth sensor; the system segments the user's hand from the background and recognizes a set of hand postures; recognized postures are then sent over the web and reproduced by a remote robotic hand. The system is expected to enhance communication among Deaf-blind signers by allowing, for the first time, remote transmission of Sign Language (SL) in the tactile modality. Usually, Deaf-blind signers can only receive external information by tactile exploration of the environment. Communication is therefore possible by using a special variety of SL, tactile SL (t-SL). The proposed system will basically work as a "telephone" for Deaf-blind people. It will receive an SL input (which can be produced by all signers, including the Deaf-blind) and provide a t-SL output (the only one accessible to Deaf-blind signers). The system converts an SL message into its tactile variety. It manipulates neither the semantic nor the structural content of the message, and is therefore language-independent (i.e., it can be used with any SL/t-SL pair). The system was designed after consulting the main Italian Deaf-blind associations, and a Deaf-blind representative also tested a preliminary version of the system. In this paper, we report the results of a first set of experiments, showing that the developed system is accurate enough to reproduce handshapes (a crucial component of the SL message). This system is a first prototype for the Parloma project, which aims at designing a remote communication system for Deaf-blind people, but it could also be useful in other scenarios (e.g., tele-rehabilitation, tele-presence).
2012, Contributo in atti di convegno, ENG
Cesta, Amedeo; Cortellessa, Gabriella; Orlandini, Andrea; Tiberio, Lorenza
Most robotic systems are used and evaluated in laboratory settings for a limited period of time. The limitation of lab evaluation is that it does not take into account the various challenges posed by deploying robotic solutions in real contexts. Our current work evaluates a robotic telepresence platform to be used with elderly people. This paper describes our progressive effort toward a comprehensive, ecological and longitudinal evaluation of such robots outside the lab. It first discusses some results from a twofold short-term evaluation performed in Italy. Specifically, we report results from both a usability assessment in the laboratory and a subsequent study based on interviews with 44 healthcare workers as possible secondary users (people connecting to the robot) and 10 older adults as possible primary users (people receiving visits through the robot). It then describes a complete evaluation plan designed for a long-term assessment to be applied "outside the lab", reporting on the initial application of this methodology to test sites in Italy.
2012, Contributo in atti di convegno, ENG
Kruijff-Korbayová, I.; Cuayáhuitl, H.; Kiefer, B.; Schröder, M.; Cosi, P.; Paci, G.; Sommavilla, G.; Tesser, F.; Sahli, H.; Athanasopoulos, G.; Wang, W.; Enescu, V.; Verhelst, W.
We describe a conversational system for child-robot interaction built with an event-based integration approach using the NAO robot platform with the URBI middleware within the ALIZ-E project. Our integrated system includes components for the recognition, interpretation and generation of speech and gestures, dialogue management and user modeling. We describe our approach to processing spoken input and output and highlight some practical implementation issues. We also present preliminary results from experiments where young Italian users interacted with the system.