Embodied Conversational Agents: Representation and Intelligence in User Interfaces
How do we decide how to represent an intelligent system in its interface, and how do we decide how the interface represents information about the world and about its own workings to a user? This article addresses these questions by examining the interaction between representation and intelligence in user interfaces. The rubric "representation" covers at least three topics in this context: how a computational system is represented in its user interface, how the interface conveys its representations of information and of the world to human users, and how the system's internal representation affects the human user's interaction with that system. I argue that each of these kinds of representation (of the system, of information and the world, of the interaction) is key to how users make the kinds of attributions of intelligence that facilitate their interactions with intelligent systems. I argue for representing a system as a human in those cases where social, collaborative behavior is key, and for the system representing its knowledge to humans in multiple ways across multiple modalities. I demonstrate these claims by discussing issues of representation and intelligence in an embodied conversational agent: an interface in which the system is represented as a person, in which information is conveyed to human users through multiple modalities such as voice and hand gestures, and in which the internal representation is modality independent and both propositional and nonpropositional.
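The final point, that a single modality-independent internal representation can drive both speech and gesture, can be sketched in code. This is an illustrative toy, not the article's actual architecture: the class and function names, and the particular split between propositional content (carried here by speech) and imagistic, nonpropositional features (carried by an iconic gesture), are assumptions made for the sake of the example.

```python
from dataclasses import dataclass, field

@dataclass
class CommunicativeAct:
    """A modality-independent communicative intent (hypothetical sketch)."""
    proposition: str                 # propositional content to be conveyed
    interactional: str               # nonpropositional function, e.g. turn-taking
    features: dict = field(default_factory=dict)  # imagistic features for gesture

def realize_speech(act: CommunicativeAct) -> str:
    # The speech channel verbalizes the propositional content.
    return f"The {act.proposition}."

def realize_gesture(act: CommunicativeAct) -> str:
    # The gesture channel conveys imagistic features the words may leave out;
    # with no such features, it falls back to a beat gesture marking the turn.
    shape = act.features.get("shape")
    return f"iconic gesture depicting {shape}" if shape else "beat gesture"

# One internal representation, two surface realizations:
act = CommunicativeAct(
    proposition="house has a spiral staircase",
    interactional="give information",
    features={"shape": "spiral"},
)
print(realize_speech(act))   # The house has a spiral staircase.
print(realize_gesture(act))  # iconic gesture depicting spiral
```

The design point is that neither output channel owns the meaning: both are generated late, from the same representation, so the same intent could equally be realized as speech alone, gesture alone, or both together.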