Embedded Model Error Quantification and Propagation

Abstract

Conventional applications of Bayesian calibration typically assume that the model replicates the true mechanism behind data generation. This idealization is rarely achieved in practice, however, and computational models frequently carry physical parameterizations and assumptions that differ from the underlying 'truth'. Ignoring model errors can then lead to overconfident calibrations and predictions centered on values that are, in fact, biased. Most statistical methods for bias correction are specific to the observable quantities, do not retain physical constraints in subsequent predictions, and suffer from identifiability challenges in distinguishing data noise from model error. We develop a general Bayesian framework for non-intrusive, embedded model correction that addresses some of these difficulties by embedding a stochastic correction in the model input parameters. The physical inputs and the correction parameters are then inferred simultaneously. With a polynomial chaos characterization of the correction term, the approach allows efficient quantification, propagation, and decomposition of uncertainty that includes contributions from data noise, parameter posterior uncertainty, and model error. We demonstrate the key strengths of this method on both synthetic examples and realistic engineering applications.
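
As an illustrative sketch of the embedded construction (the notation below is assumed for exposition and does not appear in the abstract): let f(x; \lambda) denote the computational model with physical input parameters \lambda. The embedded correction augments the inputs with a stochastic term \delta represented by a polynomial chaos (PC) expansion,

\[
\tilde{\lambda} = \lambda + \delta(\xi), \qquad
\delta(\xi) = \sum_{k=1}^{K} \alpha_k \Psi_k(\xi),
\]

where \xi is a germ random variable, \Psi_k are the corresponding orthogonal polynomials, and the coefficients \alpha = (\alpha_1, \ldots, \alpha_K) are inferred jointly with \lambda in the Bayesian update. Because \delta enters through the model inputs rather than as an additive discrepancy on the outputs, predictions f(x; \tilde{\lambda}) retain the physical constraints built into the model, and the posterior predictive variance can be decomposed into contributions from data noise, posterior uncertainty in (\lambda, \alpha), and the embedded model-error term.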

Date
Apr 19, 2018
Location
Anaheim, CA