Explicit Goal-Driven Autonomous Self-Explanation Generation

Kristinn R. Thórisson*, Hjörleifur Rörbeck, Jeff Thompson, Hugo Latapie

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Explanation can form the basis, in any lawfully behaving environment, of plans, summaries, justifications, analyses and predictions, and serve as a method for probing their validity. For systems with general intelligence, an equally important reason to generate explanations is for directing cumulative knowledge acquisition: Lest they be born knowing everything, a general machine intelligence must be able to handle novelty. This can only be accomplished through a systematic logical analysis of how, in the face of novelty, effective control is achieved and maintained—in other words, through the systematic explanation of experience. Explanation generation is thus a requirement for more powerful AI systems, not only for their owners (to verify proper knowledge and operation) but for the AI itself—to leverage its existing knowledge when learning something new. In either case, assigning the automatic generation of explanation to the system itself seems sensible, and quite possibly unavoidable. In this paper we argue that the quality of an agent’s explanation generation mechanism is based on how well it fulfils three goals – or purposes – of explanation production: Uncovering unknown or hidden patterns, highlighting or identifying relevant causal chains, and identifying incorrect background assumptions. We present the arguments behind this conclusion and briefly describe an implemented self-explaining system, AERA (Autocatalytic Endogenous Reflective Architecture), capable of goal-directed self-explanation: Autonomously explaining its own behavior as well as its acquired knowledge of tasks and environment.

Original language: English
Title of host publication: Artificial General Intelligence - 16th International Conference, AGI 2023, Proceedings
Editors: Patrick Hammer, Marjan Alirezaie, Claes Strannegård
Publisher: Springer, Cham
Pages: 286-295
Number of pages: 10
ISBN (Print): 9783031334689
DOIs
Publication status: Published - 24 May 2023
Event: 16th International Conference on Artificial General Intelligence, AGI 2023 - Stockholm, Sweden
Duration: 16 Jun 2023 – 19 Jun 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13921 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 16th International Conference on Artificial General Intelligence, AGI 2023
Country/Territory: Sweden
City: Stockholm
Period: 16/06/23 – 19/06/23

Bibliographical note

Funding Information:
This work was supported in part by Cisco Systems, the Icelandic Institute for Intelligent Machines and Reykjavik University.

Publisher Copyright:
© 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

Other keywords

  • Artificial Intelligence
  • Autonomy
  • Causal Reasoning
  • Explanation Generation
  • General Machine Intelligence
  • Self-Explanation
