Evaluating Interpretability Methods for DNNs in Game-Playing Agents

Aðalsteinn Pálsson*, Yngvi Björnsson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


There is a trend in game-playing agents to move towards an AlphaZero-style architecture, including using a deep neural network as a model for evaluating game positions. Model interpretability in such agents is problematic. We evaluate the applicability and effectiveness of several saliency-map-based methods for improving the interpretability of a deep neural network trained to evaluate game positions, using the game of Breakthrough as our testbed. We show that the more applicable methods provide insights into the importance of the different game pieces and other domain-dependent knowledge learned by the model.
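The saliency-map methods the abstract refers to attribute a network's scalar position evaluation back to individual input features (board squares). A minimal sketch of the gradient-saliency idea, using a hypothetical toy value function rather than the paper's trained network (the board encoding, weights, and network form here are illustrative assumptions):

```python
import numpy as np

# Hypothetical setup: a Breakthrough board encoded as a 2x8x8 tensor
# (one plane per player's pieces) and a toy linear-tanh value function.
# The paper's model is a trained deep CNN; this only illustrates the idea.
rng = np.random.default_rng(0)
board = rng.integers(0, 2, size=(2, 8, 8)).astype(float)
weights = rng.normal(size=(2, 8, 8))  # stand-in for learned parameters

def value(x, w):
    """Scalar position evaluation v(x) = tanh(w . x)."""
    return np.tanh(np.sum(w * x))

def gradient_saliency(x, w):
    """Saliency = |dv/dx|. For v = tanh(w . x) the gradient is
    w * (1 - tanh(w . x)^2), computed analytically here."""
    s = np.sum(w * x)
    return np.abs(w * (1.0 - np.tanh(s) ** 2))

sal = gradient_saliency(board, weights)
# Sum over encoding planes to get one importance score per square,
# which can be rendered as a heat map over the board.
square_importance = sal.sum(axis=0)
print(square_importance.shape)  # (8, 8)
```

For a real network, the gradient would be obtained via automatic differentiation instead of the closed form above; the per-square aggregation and heat-map visualization step is the same.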

Original language: English
Title of host publication: Advances in Computer Games - 17th International Conference, ACG 2021, Revised Selected Papers
Editors: Cameron Browne, Akihiro Kishimoto, Jonathan Schaeffer
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 11
ISBN (Print): 9783031114878
Publication status: Published - 2022
Event: 17th International Conference on Advances in Computer Games, ACG 2021 - Virtual, Online
Duration: 23 Nov 2021 - 25 Nov 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13262 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 17th International Conference on Advances in Computer Games, ACG 2021
City: Virtual, Online

Bibliographical note

Publisher Copyright:
© 2022, Springer Nature Switzerland AG.

Other keywords

  • deep neural-networks
  • game-playing
  • model-interpretability
