The representation and estimation of model structural errors is an important step in model development and in the Bayesian estimation of model parameters. Ignoring model inadequacies may lead to biased parameter estimates and poor predictive skill, even in the presence of high-quality data. Current Bayesian methods for the explicit representation of model error correct such biases but still face significant unresolved challenges, particularly for physical models. For example, it is unclear how to properly disambiguate data noise from model errors, or how to extrapolate toward the prediction of unobservable model outputs of interest. Furthermore, particular care must be taken to tailor the statistical structure of a model error representation so that the physical constraints of the system are not violated.
To overcome these challenges, we introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations. The goal is to produce uncertain predictions that are consistent with the data in both their mean and their discrepancy, while appropriately disambiguating model and data errors. Model parameters are cast as random variables, which reformulates the calibration problem as a density estimation problem that is subsequently tackled with Bayesian inference. We use likelihood approximations, or moment-matching criteria in the spirit of Approximate Bayesian Computation (ABC), to build tractable likelihoods for the associated Bayesian inverse problem. We demonstrate the key strengths of the method on both synthetic cases and practical applications.
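The moment-matching idea can be illustrated with a minimal rejection-ABC sketch. Everything here is an illustrative assumption, not the paper's actual setup: a toy exponential-decay model with a misspecified truth, a single embedded parameter `lam` cast as a Gaussian random variable with hyperparameters `(mu, sigma)`, and an acceptance criterion that requires the pushed-forward predictive mean to track the data and the pushed-forward spread to track the residual discrepancy.

```python
# Hedged sketch of ABC-style moment-matching calibration with an
# embedded (random-variable) model parameter. The model, data, and
# distance function are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def model(x, lam):
    """Toy 'physical' model: exponential decay with rate lam."""
    return np.exp(-lam * x)

# Synthetic data from a misspecified truth (decay plus a small
# oscillatory term), so no single best-fit lam matches everywhere.
x = np.linspace(0.0, 2.0, 20)
y_data = np.exp(-1.0 * x) + 0.05 * np.sin(3 * x) + rng.normal(0, 0.01, x.size)

def push_forward_moments(mu, sigma, n_samples=200):
    """Mean and std of model output when lam ~ N(mu, sigma)."""
    lam = rng.normal(mu, sigma, n_samples)
    preds = model(x, lam[:, None])          # shape (n_samples, n_x)
    return preds.mean(axis=0), preds.std(axis=0)

def abc_accept(mu, sigma, eps=0.1):
    """ABC criterion: the pushed-forward mean should match the data,
    and the pushed-forward std should match the residual magnitude."""
    m, s = push_forward_moments(mu, sigma)
    mean_mismatch = np.sqrt(np.mean((m - y_data) ** 2))
    spread_mismatch = np.sqrt(np.mean((s - np.abs(y_data - m)) ** 2))
    return mean_mismatch + spread_mismatch < eps

# Rejection ABC over the hyperparameters (mu, sigma) of lam.
accepted = []
for _ in range(2000):
    mu = rng.uniform(0.5, 1.5)
    sigma = rng.uniform(0.0, 0.5)
    if abc_accept(mu, sigma):
        accepted.append((mu, sigma))
accepted = np.array(accepted)
```

Here the accepted `(mu, sigma)` pairs form an approximate posterior over the hyperparameters: `sigma` is pushed away from zero by the spread-matching term, which is how model-error variability is absorbed into the parameter distribution rather than into an additive noise term.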