Iterative Global Sensitivity Analysis Algorithm with Neural Network Surrogate Modeling

Yen Chen Liu, Jethro Nagawkar, Leifur Leifsson*, Slawomir Koziel, Anna Pietrenko-Dabrowska

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Global sensitivity analysis (GSA) is a method for quantifying the effect of input parameters on the outputs of physics-based systems. Performing GSA can be challenging due to the combined effect of the high computational cost of each individual physics-based model evaluation, a large number of input parameters, and the need for repeated model evaluations. To reduce this cost, neural networks (NNs) are used in this work to replace the expensive physics-based model. This introduces the additional challenge of finding the minimum number of training samples required to train the NNs accurately. In this work, a new method is introduced that quantifies the GSA values accurately by iterating over both the number of samples required to train the NNs, terminated using an outer-loop sensitivity convergence criterion, and the number of model responses required to compute the GSA, terminated using an inner-loop sensitivity convergence criterion. The iterative surrogate-based GSA guarantees converged values of the Sobol' indices and, at the same time, avoids the need to specify arbitrary accuracy metrics for the surrogate model. The proposed method is demonstrated on two cases, namely, an eight-variable borehole function and a three-variable nondestructive testing (NDT) case. For the borehole function, both the first- and total-order Sobol' indices required 200 and 10^5 data points to terminate on the outer- and inner-loop sensitivity convergence criteria, respectively. For the NDT case, the outer-loop criterion was satisfied with 100 training points for both the first- and total-order indices, while the inner-loop criterion required 10^6 and 10^3 model responses for the first- and total-order indices, respectively. The differences between the proposed method and GSA performed directly on the true functions are less than 3% for the analytical case and less than 10% for the physics-based case (where the larger error stems from Sobol' indices with small values).
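As a rough illustration of the two-loop idea described in the abstract, the sketch below pairs an inner loop that grows the Monte Carlo sample used by the Sobol' estimator with an outer loop that grows the NN training set, using the eight-variable borehole function as the test problem. This is a minimal sketch, not the authors' implementation: the choice of SALib and scikit-learn, the network size, the tolerances, and the sample-growth schedule are all illustrative assumptions.

```python
# Sketch of an iterative, two-loop surrogate-based GSA (illustrative only).
# Assumes SALib and scikit-learn; tolerances and schedules are arbitrary choices.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Eight-variable borehole function, a standard analytical GSA benchmark.
def borehole(x):
    rw, r, Tu, Hu, Tl, Hl, L, Kw = x.T
    log_rrw = np.log(r / rw)
    return (2.0 * np.pi * Tu * (Hu - Hl)) / (
        log_rrw * (1.0 + 2.0 * L * Tu / (log_rrw * rw**2 * Kw) + Tu / Tl)
    )

problem = {
    "num_vars": 8,
    "names": ["rw", "r", "Tu", "Hu", "Tl", "Hl", "L", "Kw"],
    "bounds": [[0.05, 0.15], [100, 50000], [63070, 115600], [990, 1110],
               [63.1, 116], [700, 820], [1120, 1680], [9855, 12045]],
}

def converged_sobol(surrogate, n_gsa, tol=1e-3):
    """Inner loop: double the Saltelli sample until the first-order indices stabilize."""
    prev = None
    while True:
        X = saltelli.sample(problem, n_gsa)          # cheap: evaluated on the surrogate
        Si = sobol.analyze(problem, surrogate.predict(X))
        if prev is not None and np.max(np.abs(Si["S1"] - prev)) < tol:
            return Si
        prev, n_gsa = Si["S1"], 2 * n_gsa

rng = np.random.default_rng(0)
lb, ub = np.array(problem["bounds"]).T
n_train, prev_S1 = 50, None
while True:  # outer loop: enlarge the NN training set until the indices stop changing
    X_train = rng.uniform(lb, ub, size=(n_train, 8))
    nn = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000))
    nn.fit(X_train, borehole(X_train))               # "expensive" model calls happen here
    Si = converged_sobol(nn, n_gsa=256)
    if prev_S1 is not None and np.max(np.abs(Si["S1"] - prev_S1)) < 1e-2:
        break
    prev_S1, n_train = Si["S1"], n_train + 50

print("First-order:", Si["S1"], "Total-order:", Si["ST"])
```

In this sketch the expensive model is only evaluated to build the training set, while the inner-loop Sobol' estimates are computed entirely on the surrogate; both loops terminate on the change in the sensitivity values themselves rather than on a surrogate accuracy metric, mirroring the convergence criteria described in the abstract.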

Original language: English
Title of host publication: Computational Science – ICCS 2021 – 21st International Conference, Proceedings
Editors: Maciej Paszynski, Dieter Kranzlmüller, Valeria V. Krzhizhanovskaya, Jack J. Dongarra, Peter M.A. Sloot
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 298-311
Number of pages: 14
ISBN (Print): 9783030779696
Publication status: Published - 2021
Event: 21st International Conference on Computational Science, ICCS 2021 - Virtual, Online
Duration: 16 Jun 2021 – 18 Jun 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12745 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 21st International Conference on Computational Science, ICCS 2021
City: Virtual, Online
Period: 16/06/21 – 18/06/21

Bibliographical note

Funding Information:
Acknowledgements. This material is based upon work supported by the U.S. National Science Foundation under grants no. 1739551 and 1846862, as well as by the Icelandic Centre for Research (RANNIS) grant no. 174573053.

Publisher Copyright:
© 2021, Springer Nature Switzerland AG.

Other keywords

  • Global sensitivity analysis
  • Neural networks
  • Sobol’ indices
  • Surrogate modeling
  • Termination criteria

