In this paper, we argue for embodied conversational characters as the logical extension of the metaphor of human-computer interaction as a conversation. We argue that the only way to fully model the richness of human face-to-face communication is to rely on conversational analysis that describes sets of conversational behaviors as fulfilling conversational functions, both interactional and propositional. We demonstrate how to implement this approach in Rea, an embodied conversational agent that is capable of both multimodal input understanding and output generation in a limited application domain. Rea supports both social and task-oriented dialogue. We discuss issues that need to be addressed in creating embodied conversational agents, and describe the architecture of the Rea interface.