Mind model for multimodal communicative creatures and humanoids

Kristinn R. Thórisson*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

37 Citations (Scopus)

Abstract

This paper presents a computational model of real-time, task-oriented dialog skills. The model, termed Ymir, bridges multimodal perception and multimodal action and supports the creation of autonomous computer characters that afford full-duplex, real-time, face-to-face interaction with a human. Ymir has been prototyped in software, and a humanoid, called Gandalf, has been created that is capable of fluid multimodal dialog. Ymir demonstrates several new ideas in the creation of communicative computer agents, including perceptual integration of multimodal events, distributed planning and decision making, explicit handling of real time, and a layered perceptuo-motor system with motor control exhibiting human characteristics. This paper describes the model's architecture and explains its main elements. Examples of implementation and performance are given, and the architecture's limitations and possibilities are discussed.
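The abstract's key architectural ideas — perceptors posting timestamped multimodal events to a shared store, with distributed decision modules reading them under explicit real-time constraints — can be sketched as follows. This is an illustrative sketch only: all class and module names here (`Percept`, `Blackboard`, `TurnTakingDecider`) are hypothetical stand-ins, not Ymir's actual components, and the turn-taking rule is a simplified placeholder.

```python
import time
from dataclasses import dataclass

# Illustrative sketch of the ideas in the abstract; names are hypothetical,
# not taken from the Ymir implementation.

@dataclass
class Percept:
    modality: str        # e.g. "speech", "gaze", "gesture"
    content: str
    timestamp: float     # explicit handling of real time

class Blackboard:
    """Shared store: perceptors post events, deciders read them."""
    def __init__(self):
        self.percepts = []

    def post(self, p: Percept) -> None:
        self.percepts.append(p)

    def recent(self, window: float, now: float) -> list:
        # Only percepts inside the time window count as "current" —
        # stale events must not drive real-time dialog decisions.
        return [p for p in self.percepts if now - p.timestamp <= window]

class TurnTakingDecider:
    """One small, independent decision module (distributed decision
    making): it sees only recent percepts, not the full world state."""
    def decide(self, percepts: list) -> str:
        modalities = {p.modality for p in percepts}
        if "speech" in modalities:
            return "give-turn"   # user is speaking: yield the floor
        if "gaze" in modalities:
            return "take-turn"   # silence plus gaze: claim the floor
        return "hold"

# Integrate two multimodal events and make one decision.
bb = Blackboard()
now = time.monotonic()
bb.post(Percept("gaze", "at-agent", now - 0.2))
bb.post(Percept("speech", "user-utterance", now - 0.1))

decision = TurnTakingDecider().decide(bb.recent(window=1.0, now=now))
print(decision)  # → give-turn
```

In the real architecture many such deciders run concurrently across layers with different response-time budgets; the point of the sketch is only the separation of perception, shared timestamped state, and local decision rules.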

Original language: English
Pages (from-to): 449-486
Number of pages: 38
Journal: Applied Artificial Intelligence
Volume: 13
Issue number: 4-5
DOIs
Publication status: Published - 1 May 1999
