A social robot is a physically embodied entity immersed in a complex, dynamic, and social environment, capable of perceiving and understanding others, including humans, and sufficiently empowered to behave in a manner conducive to its own goals and those of its community by engaging in social interactions with them. (an adaptation of B. Duffy’s definition)
ANIMATAS (MSCA-ITN-2017, grant agreement 765955) is a Marie Skłodowska-Curie European Training Network funded by Horizon 2020 (the European Union’s Framework Programme for Research and Innovation), coordinated by Sorbonne Université – SU (Paris, France).
The ANIMATAS project focuses on the following objectives:
1) Exploration of fundamental questions about how people perceive the interconnections between the appearance and behaviour of robots and virtual characters.
2) Development of new social learning mechanisms that can deal with different types of human intervention and allow robots and virtual characters to learn in an unconstrained manner.
3) Development of new approaches for robots and virtual characters’ personalised adaptation to human users in unstructured and dynamically evolving social interactions.
The AMIGOS project, funded by the Portuguese Science Foundation (FCT), investigates the role of emotions and adaptation in interactions between a robot and a group of users, in contrast to the typical one-robot, one-user paradigm in Human-Robot Interaction (HRI).
In AMIGOS we address the issue of social adaptation for robots in group settings focusing on computational modeling of emotions. Emotions play a critical role in HRI. Several authors have reported the relevance of emotions in the establishment of social interactions between one robot and one user, in particular the role of empathy. Despite these efforts, further research is necessary to verify whether similar results hold (1) when aiming for longer term social interactions, and (2) when the robot is in the presence of a group of people.
To endow a robot with the ability to cope with changes in the number of users around it, the AMIGOS project will use interactive machine learning techniques that allow the robot to quickly adjust its behavior, depending both on the situational context (i.e., the number of users in the environment) and on the preferences of a particular user, and to generate adaptive social responses.
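As an illustration only (not the project's actual method), context-dependent behavior adjustment of this kind can be sketched as a simple epsilon-greedy contextual bandit, where the context is the number of users present, the arms are candidate social behaviors, and human feedback drives the updates. All names and parameters below are hypothetical.

```python
import random
from collections import defaultdict

class ContextualBehaviorSelector:
    """Illustrative sketch: pick a social behaviour given the situational
    context (e.g. group size) and refine the choice from human feedback."""

    def __init__(self, behaviors, epsilon=0.1):
        self.behaviors = behaviors
        self.epsilon = epsilon          # exploration rate
        self.value = defaultdict(float) # (context, behavior) -> mean feedback
        self.count = defaultdict(int)   # (context, behavior) -> #updates

    def select(self, context):
        # Occasionally explore a random behaviour; otherwise exploit
        # the behaviour with the best estimated feedback in this context.
        if random.random() < self.epsilon:
            return random.choice(self.behaviors)
        return max(self.behaviors, key=lambda b: self.value[(context, b)])

    def update(self, context, behavior, feedback):
        # Incremental running mean of the human feedback signal.
        key = (context, behavior)
        self.count[key] += 1
        self.value[key] += (feedback - self.value[key]) / self.count[key]
```

Because estimates are keyed per context, the same robot can converge to different behaviors for a single user than for a group, which is the kind of quick situational adjustment interactive machine learning aims at.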
The INSIDE project is funded by the Portuguese Science Foundation and is a collaboration with Carnegie Mellon University under the CMU Portugal program. The main coordinator of the project is Prof. Francisco Melo.
The INSIDE project explores symbiotic interactions between humans and robots in joint cooperative activities, addressing the problem of how social robots plan their course of action to coordinate with and accommodate the co-actions of their human teammates.
It also looks at how context and environment information collected from a set of networked sensors (such as cameras and biometric sensors) can be exploited to create more natural and engaging interactions between humans and robots.
INSIDE strives to develop new hardware and software solutions that will support real-world interaction with children with ASD in a joint cooperative task with therapeutic purposes.
The Co-Writer Project is a collaboration between IST/INESC-ID and EPFL that explores how a robot can support the acquisition of writing, targeting in particular children learning and improving their writing skills. The research thus explores a new way to support writing through human-robot interaction.
The EMOTE project (EMbOdied-perceptive Tutors for Empathy-based learning) was a European-funded (FP7-317923) research project exploring empathy in virtual and robotic tutors. The project started in December 2012 and ended in March 2016.
In EMOTE we researched how robot tutors that respond to learners can offer a new and exciting approach to learning. Human teachers respond to a myriad of cues, and empathy is a key aspect of the human teacher-learner experience. With the advent of social robotics, robot tutors can begin to respond in a similar way. We developed a new generation of robot tutors that empathise with children, along with two applications to use with them, in particular a sustainable-city multi-player game in which children learn about sustainability while interacting with the empathic tutor (using the NAO robot).
The LIREC (LIving with Robots and intEractive Companions) project was a European-funded (FP7-215554) research project exploring how we live with digital and interactive companions. The LIREC project aimed at the creation of a new generation of interactive, emotionally intelligent companions capable of long-term relationships with humans. LIREC was a collaboration between 10 European partners, including six universities, two research institutes and two companies spread across 7 European countries, covering specialized areas ranging from psychology, ethology, human-computer interaction and human-robot interaction to robotics and graphical characters.
The main challenge we faced in the LIREC project was: “how to build long-term relationships with artificial companions?”
That is, we explored how to create new computer technology that supports long-term relationships between humans and synthetic companions (robots).
In the LIREC project we explored two scenarios. The first was the iCat chess player, in which the iCat robot played chess with children, maintaining information about each child and learning, as a long-term process, to adapt itself to the child’s preferences. The second scenario featured a social game companion (a robot) that played Risk with several players.
The Mind RACES project (‘From Reactive to Anticipatory Cognitive Embodied Systems’, FP6-511931) was a three-year EC-funded project (Sixth Framework Programme – Information Society and Technologies – Cognitive Systems) involving 8 partners. The project started on 1 October 2004 and was formally completed in December 2007.
The general goal of the project was to investigate different anticipatory cognitive mechanisms and architectures in order to build Cognitive Systems endowed with the ability to predict the outcome of their actions, to build a model of future events, to control their perception by anticipating future stimuli, and to react emotionally to possible future scenarios. Such Anticipatory Cognitive Systems will contribute to the successful implementation of the desired ambient intelligence.
At INESC-ID we created a new type of anticipatory mechanism (the emotivector), which we used to trigger affective responses in agents and robots based on the salience of perceived events.