Solutions to inverse problems often assume that the computational model can replicate the true mechanism behind data generation. In practice, however, physical models are misspecified: their parameterizations and underlying assumptions differ from the true process. Ignoring such model errors can lead to overconfident calibrations and poor predictive capability, even when high-quality data are used. Outer-loop tasks, such as active learning or optimal experimental design, then produce biased results with poorly calibrated uncertainties.
This work will present a Bayesian inference framework for representing, quantifying, and propagating uncertainties due to model structural errors by embedding stochastic correction terms in the model. The physical input parameters and the model-error parameters are then inferred jointly in an inverse modeling setting.
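To make the idea concrete, the following is a minimal sketch, not the framework itself: a deliberately misspecified model is augmented with an additive model-error term of unknown scale, and the physical parameter and the model-error scale are sampled jointly from the posterior with a random-walk Metropolis algorithm. All names, the toy model, and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "true" process is y = sin(x), but the misspecified
# physical model is linear, f(x; theta) = theta * x.  An additive model-error
# term with unknown scale sigma_d is embedded, and (theta, sigma_d) are
# inferred jointly from the data.
x = np.linspace(0.0, 1.5, 20)
y = np.sin(x) + rng.normal(0.0, 0.02, x.size)  # synthetic data, noise sd 0.02

def log_post(theta, log_sigma_d, sigma_n=0.02):
    """Log posterior: Gaussian likelihood whose variance combines measurement
    noise and the embedded model-error variance, plus weak Gaussian priors."""
    sigma_d = np.exp(log_sigma_d)
    var = sigma_n**2 + sigma_d**2          # total predictive variance
    resid = y - theta * x                  # misfit of the misspecified model
    loglik = -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))
    logprior = -0.5 * theta**2 / 100.0 - 0.5 * log_sigma_d**2 / 100.0
    return loglik + logprior

# Random-walk Metropolis over the joint vector (theta, log sigma_d).
samples = []
state = np.array([0.0, np.log(0.1)])
lp = log_post(*state)
for _ in range(20000):
    prop = state + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())
samples = np.array(samples[5000:])  # discard burn-in

theta_hat = samples[:, 0].mean()
sigma_d_hat = np.exp(samples[:, 1]).mean()
```

Because the linear model cannot reproduce the sinusoid, the inferred model-error scale `sigma_d_hat` comes out well above the measurement-noise level, so the predictive intervals widen to account for structural error instead of overfitting `theta`.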
We will demonstrate the methodology on example problems involving the development of machine-learned interatomic potential (MLIAP) models. The resulting predictive uncertainties capture model error and will be employed in an active learning loop to enable efficient construction of uncertainty-augmented MLIAPs.
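The selection step of such a loop can be sketched as follows, under strong simplifying assumptions: a toy ensemble of perturbed fits stands in for the uncertainty-augmented MLIAP, and the unlabeled candidate with the largest predictive variance is chosen for labeling. The function and variable names here are hypothetical, not part of the work described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an uncertainty-aware MLIAP: an ensemble of
# perturbed linear models.  Candidates where the ensemble disagrees most
# (largest predictive variance) are selected for labeling, e.g. by new
# first-principles calculations.
def ensemble_predict(candidates, slopes):
    """Return per-candidate mean and variance across ensemble members."""
    preds = np.outer(slopes, candidates)   # shape: (n_members, n_candidates)
    return preds.mean(axis=0), preds.var(axis=0)

slopes = rng.normal(1.0, 0.2, size=8)      # toy ensemble of model parameters
candidates = np.linspace(0.0, 2.0, 50)     # pool of unlabeled configurations

mean, var = ensemble_predict(candidates, slopes)
pick = candidates[np.argmax(var)]          # most uncertain candidate wins
```

In this toy, predictive variance grows with distance from the origin, so the acquisition rule selects the candidate farthest from the training regime, mirroring how variance-based active learning steers sampling toward poorly constrained regions.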