Content in Context: Generating Language and Iconic Gesture without a Gestionary

Paul Tepper†, Stefan Kopp†‡ and Justine Cassell†
Northwestern University† University of Bielefeld‡
Evanston, IL 60208 Bielefeld, Germany
{ptepper, s-kopp, justine}@northwestern.edu

In this paper, we describe new research on the planning and realization of paired natural language and gesture for embodied conversational agents, where the form of each gesture is derived on the fly without relying on a lexicon of gesture shapes, or "gestionary". As in our previous work, we rely on the study of spontaneous gesture to inform us about the relationship between spontaneous hand gestures and language, and on models of natural language generation to inspire our computational architectures. Unlike our previous work, however, here we work toward a formalization of both the imagistic and linguistic components of people's cognitive representations of domain knowledge, and we concentrate on the microplanning stage of natural language generation. This involves modeling the generation process in a way that allows the same representations and communicative intentions to be pursued across a range of communicative modalities and, ultimately, identically in both input and output.