iCub Facility Director, Istituto Italiano di Tecnologia, Italy
Giorgio Metta is director of the iCub Facility department at the Istituto Italiano di Tecnologia (IIT), where he coordinates the development of the iCub robotic platform and project. He holds an MSc cum laude (1994) and a PhD (2000) in electronic engineering, both from the University of Genoa. From 2001 to 2002 he was a postdoctoral associate at the MIT AI Lab. He was previously with the University of Genoa and has been, since 2012, Professor of Cognitive Robotics at the University of Plymouth (UK). He is deputy director of IIT, with delegated responsibility for international relations and external funding. In this role he is a member of the board of directors of euRobotics aisbl, the European reference organization for robotics research. Giorgio Metta's research activities are in the fields of biologically motivated and humanoid robotics and, in particular, in developing humanoid robots that can adapt and learn from experience. He is the author of approximately 200 scientific publications and has worked as principal investigator and research scientist in about a dozen nationally and internationally funded projects.
Motor Biases in Visual Attention for a Humanoid Robot
Tantalizing evidence from psychophysics and developmental psychology experiments has shown that attention is task-dependent. Two characteristics of human attention are particularly relevant for humanoid robots: the ability to infer the context (task dependence) from the observed stimuli, and the ability to learn an appropriate movement strategy, perhaps over developmental time scales. In this work we implemented these features to control attention in a humanoid robot by including a set of trajectory predictors, in the simple but effective form of Kalman filters, and, more importantly, a reinforcement-learning-based process that uses the predictors and the robot's complete action repertoire to generate a suitable, near-optimal action sequence. Preliminary experiments show that the system indeed works correctly: it uses the predictors to discriminate the environmental context (e.g. static vs. dynamic) and produces a valid control policy that drives the robot to fixate the task-relevant static or moving stimuli.
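The abstract's idea of using simple Kalman filters as trajectory predictors, and their output to discriminate a static from a dynamic context, can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' implementation: the class name, the constant-velocity state model, and all noise parameters are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical sketch of a trajectory predictor as a constant-velocity
# Kalman filter; all names and parameters are illustrative.
class TrajectoryPredictor:
    def __init__(self, dt=0.033, q=1e-3, r=1e-2):
        # State: [x, y, vx, vy]; constant-velocity transition model.
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))      # only position is observed
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)         # process noise covariance
        self.R = r * np.eye(2)         # measurement noise covariance
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation (prediction error) before the update.
        innov = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        # Update.
        self.x = self.x + K @ innov
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return innov

# Context discrimination: a static stimulus leaves the estimated
# velocity near zero; a moving stimulus does not.
kf = TrajectoryPredictor()
for t in range(50):
    kf.step(np.array([0.01 * t, 0.0]))   # stimulus drifting along x
speed = np.hypot(kf.x[2], kf.x[3])
context = "dynamic" if speed > 0.1 else "static"
```

In the full system described in the abstract, a signal like `context` would feed the reinforcement-learning layer, which selects among the robot's actions (e.g. saccade, smooth pursuit) according to the inferred context.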
Professor of Computer Science and Chair of the Research Unit Human-Centered Multimedia, Augsburg University, Germany
Elisabeth André is a Full Professor of Computer Science at Augsburg University and Chair of the Research Unit Human-Centered Multimedia. She received her Diploma and Doctoral degrees in Computer Science from Saarland University. Elisabeth André has a long track record in multimodal human-machine interaction, embodied conversational agents, affective computing and social signal processing. She serves on the editorial boards of various renowned international journals, such as IEEE Transactions on Affective Computing (TAC), Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS), ACM Transactions on Interactive Intelligent Systems (TIIS) and AI Communications. In 2007 Elisabeth André was nominated a Fellow of the Alcatel-Lucent Foundation for Communications Research. In 2010 she was elected a member of the prestigious German Academy of Sciences Leopoldina, the Academy of Europe and AcademiaNet. She is also an ECCAI Fellow (European Coordinating Committee for Artificial Intelligence).
Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Interaction
Recent advances in the development of nonintrusive, high-performance eye tracking technology have paved the way towards novel conversational interfaces with gaze-aware agents. On the one hand, there is evidence that gaze-aware agents may facilitate natural language understanding, improve the flow of conversation and help establish rapport between agents and human interlocutors. On the other hand, studies have also shown that even minor discrepancies in the dynamics of gaze may seriously affect the perception of the agents and the interaction with them. To come across as natural, an agent not only has to coordinate its own attentive behaviors, such as head movements and body shifts; it also needs to dynamically adjust its verbal and nonverbal behaviors to the attentive cues of other artificial and human interlocutors. In my talk, I will present a bidirectional model of attention that is based on parallel event charts to orchestrate the analysis and generation of multimodal attentive behaviors. I will report on a series of studies we conducted over the past years to explore the role of gaze-contingent social behavior in human-agent interaction. The talk will be illustrated by various applications with gaze-aware virtual characters and humanoid robots that engage in social interactions with human users, taking on the roles of presenter, instructor or companion.