Center for Interdisciplinary Research
 
 

Modeling Communication with Robots and Virtual Humans

Date: April 5 - 8, 2006
Organizers: Ipke Wachsmuth (Bielefeld), Günther Knoblich (Newark)

Workshop of the ZiF: Research Group 2005/2006 "Embodied Communication in Humans and Machines"

Will it be possible one day to have machines that live up to human communication abilities and that take on the role of social partners? The second thematic workshop of the research group Embodied Communication in Humans and Machines focused on the possibility of genuine social interaction and communication with robots and virtual agents. In addition to potential applications, the development of robots and virtual agents is of great heuristic value for isolating, implementing and testing essential properties of multi-layered, embodied communication.

Virtual Human Max

Constructing communicating machines and computer programs will provide insights into how the classical sender-receiver metaphor of communication can be replaced by a view that acknowledges a close coupling of interacting social partners, involving a variety of parallel, interactive communicative channels, said Ipke Wachsmuth (Bielefeld), one of the organizers of the research year, in his introduction to the field.
One important mechanism coupling communication partners is feedback. Stefan Kopp (Bielefeld) reported on the work on this topic that several fellows of the research group had conducted. Feedback signals like head nods or simple words like yes or no ensure that information provided in a conversation is really shared. Further feedback signals serve to indicate emotions and attitudes and to establish stabilizing links that connect communicators into one dynamical system. Feedback mechanisms might be used, Catherine Pelachaud (Montreuil) and Isabella Poggi (Rome) added, to equip artificial agents with the characteristics of active listeners who provide a speaker with information about their inner states. To behave convincingly, artificial characters need a fuller and more natural behavioural repertoire than they exhibit today, Matthew Stone (Edinburgh) emphasized in his presentation. To achieve this goal, his group records human actors enacting certain scripts. From the recordings, speech and motion units are generated which include intonation and emotion display. From these basic units, temporally consistent and semantically coherent behaviour can be synthesized for an artificial character. Coming from a similar perspective, Paul Tepper (Northwestern University, Evanston, IL) explained how gestural images can be decomposed into semantic units (e.g. image descriptions) that are linked to morphological features (e.g. hand shapes). With this method, novel iconic gestures for artificial agents can be generated without drawing on a static lexicon of predefined gestures.
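The idea of assembling gestures from features rather than retrieving them whole from a lexicon can be sketched in a few lines of Python. The feature names and the mapping below are invented for illustration only; they are not taken from the actual system.

```python
# Illustrative sketch (not the actual system): composing an iconic gesture
# from semantic image-description features rather than looking the whole
# gesture up in a fixed lexicon. All names here are invented.

# Each semantic feature licenses one or more morphological features.
SEMANTIC_TO_MORPHOLOGY = {
    "round":    {"hand_shape": "curved-open"},
    "flat":     {"hand_shape": "flat-open"},
    "large":    {"extent": "wide"},
    "small":    {"extent": "narrow"},
    "vertical": {"orientation": "palm-sideways"},
}

def compose_gesture(semantic_features):
    """Merge the morphological features licensed by each semantic feature."""
    gesture = {}
    for feature in semantic_features:
        gesture.update(SEMANTIC_TO_MORPHOLOGY.get(feature, {}))
    return gesture

# A novel combination ("large round object") yields a gesture form that
# was never stored as a whole in any lexicon.
print(compose_gesture(["round", "large"]))
# {'hand_shape': 'curved-open', 'extent': 'wide'}
```

The point of the decomposition is combinatorial coverage: a handful of feature-to-form mappings can generate many gestures that were never explicitly defined.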
Rüdiger Dillmann, from the collaborative research center on learning and cooperation in multimodal humanoid robots (Karlsruhe), introduced the robot ARMAR III, which is equipped with four colour cameras, two hands with five fingers, and a touch-sensitive artificial skin. The aim of his research is to enable a robot to learn skills like putting used dishes into a dishwasher by observing the corresponding human actions. To achieve this, the robot has to be able to analyze the observed actions and to map them onto its own behavioural repertoire. Dillmann predicted that in about ten years robots should be able to support us in doing the dishes. W. Lewis Johnson, Jonathan Gratch and David Traum (all University of Southern California, Marina del Rey) focused on the application side of artificial agent research. Johnson demonstrated an animated training program that was developed to teach a foreign language together with the basics of a foreign culture in simulated everyday situations. Gratch reported on efforts to provide artificial agents with emotional expressions. Emotions, he said, are central for regulating cognition, both for the individual and for the guidance of communicative acts. David Traum proposed to follow a spiral methodology: to start with a simple system and to add more complicated features step by step, depending on the anticipated population using the system. This methodology, he promised, will make artificial systems for communication more robust, more accurate, and better able to deal with complex tasks.
Martin Loetzsch (Sony, Paris) demonstrated how Sony Aibo robot dogs, famous for their soccer skills, manage to communicate about the location and the movement of a ball in their environment. This task can only succeed when the robots attend to the same object, since shared attention is an essential prerequisite for using the same utterance to refer to the same object. As a glimpse into the future, Yosuke Matsusaka (Waseda University, Japan) presented a robot that can participate in group conversations. It follows the gazes of the human speakers and takes the turn to answer questions, contribute new information, or correct misleading information. It can even complain about a conversation held in a language it does not understand: "Speak Japanese please!" Its turn-taking behaviour made its human audience more willing to perceive it as a social partner. The future prospect of sharing our everyday life with artificial beings raises the question of how humans react to such creatures. Several speakers addressed this issue. Justine Cassell (Northwestern University, Evanston, IL) reported on experiments in which she examined the differences between people's communicative behaviour when conversing with Embodied Conversational Agents (ECAs) and when conversing with other humans. Well-implemented virtual agents elicited natural communicative behaviour. Thus, virtual agents can be used to test theories on the intrinsically dyadic nature of human communication and can serve as flexible models of human partners with different communicative abilities. Another striking finding was that autistic children are more willing to interact with highly artificial virtual agents than with other humans. Virtual agents may therefore provide a way to improve the communicative abilities of autistic children.
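The joint-attention prerequisite described above can be made concrete with a toy sketch: a speaker robot only produces an utterance about an object if both robots currently attend to the same referent. The function names and message format are invented for illustration, not taken from the Aibo experiments.

```python
# Toy sketch of the joint-attention prerequisite: an utterance about an
# object is only produced when speaker and hearer attend to the same
# object, since otherwise the same word could not pick out the same
# referent. All names here are illustrative assumptions.

def same_attention(speaker_focus, hearer_focus):
    """Shared attention holds when both robots track the same object."""
    return speaker_focus is not None and speaker_focus == hearer_focus

def describe(speaker_focus, hearer_focus, position):
    """Utter a location description only under shared attention."""
    if not same_attention(speaker_focus, hearer_focus):
        return None  # no shared referent: communication would fail
    return f"{speaker_focus} is at {position}"

print(describe("ball", "ball", (2, 5)))  # ball is at (2, 5)
print(describe("ball", "cube", (2, 5)))  # None
```

The sketch makes the causal claim in the text explicit: without the attention check, a hearer attending to a different object would bind the utterance to the wrong referent.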
Elisabeth André (Augsburg) analysed gaze behaviour between human and artificial interlocutors to see whether humans accept an artificial character as a conversational partner who is attended to in the same way as human partners are. People look more towards artificial agents than towards humans, she found, but they adhere to social norms even when interacting with a virtual human. Following a talk by Nicole Krämer (Köln), it was also discussed whether it is necessary and useful to provide artificial characters with a formal Theory of Mind or whether it would be more useful to equip them with a simulation device that allows them to grasp human intentions and needs.
Progress in artificial intelligence advances at a snail's pace, Kristínn Thórisson (Reykjavik) complained. AI has to solve the problem of constructing and managing large distributed systems with a richer behavioural repertoire, he said. He therefore proposed a constructionist AI, built in a modular fashion, that would enable researchers to share their work. This would help to reduce the technical challenges posed by the construction of virtual agents.
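The constructionist idea of independently built, shareable modules can be illustrated with a minimal publish-subscribe sketch: modules interact only through messages on named topics, so components built by different groups can be combined without knowing each other's internals. The class and topic names below are invented for illustration and do not describe Thórisson's actual architecture.

```python
# Minimal illustrative sketch of modular, message-based composition:
# modules communicate only through published messages, so they can be
# developed and shared independently. Names are invented for illustration.

class Blackboard:
    """A tiny message bus connecting otherwise independent modules."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers.get(topic, []):
            handler(message)

board = Blackboard()
# A perception module posts what it sees; a speech module reacts to it.
# Neither module refers to the other directly.
board.subscribe("perceived", lambda obj: board.publish("say", f"I see a {obj}"))
board.subscribe("say", print)
board.publish("perceived", "ball")  # prints: I see a ball
```

Because each module depends only on the message format, a research group could replace the perception module with its own without touching the speech module.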
The workshop demonstrated that there is still a long way to go before robots and virtual agents will be able to engage in the rich, multi-layered, and closely-coupled communication that humans achieve so swiftly. Nonetheless, the challenges researchers face in their attempts to construct social machines provide important clues to the processes that underlie our amazing communicative abilities.

Lecture Notes in Artificial Intelligence (LNAI 4930) is now available online.


