
SmartUQ at Aircraft Airworthiness & Sustainment Conference (AA&S)
and Propulsion Safety & Sustainment Conference (PS&S)

Washington, DC
April 22 - 26, 2019

We invite you to stop by booth #106 at the AA&S and PS&S conferences to meet experts in engineering analytics and uncertainty quantification, see demonstrations, and explore how SmartUQ can improve your analysis.


Conference Presentation

Bayesian Calibration and Uncertainty Analysis: A CFD Turbulence Case Study

April 25, 4:00 PM - 4:30 PM
Presented by Dr. Mark Andrews, Uncertainty Quantification Technology Steward

The growing use of simulation in the engineering design process promises to reduce the need for extensive physical testing, cutting both development time and cost. Model accuracy, however, depends on many factors, including noise, bias, parameter uncertainty, and model form uncertainty. To counter these effects and ensure that models match reality to the extent required, simulation models must be calibrated to physical measurements; they must also be validated, and their accuracy quantified, before they can be relied on in lieu of physical testing. Bayesian calibration addresses both requirements: it tunes the model parameters to improve simulation accuracy, and it estimates any remaining discrepancy, which is useful for model diagnosis and validation. Because the framework assumes that model discrepancy exists, it also enables robust calibration of inaccurate models.

We will present a case study investigating the potential benefits of Bayesian calibration, sensitivity analysis, and Monte Carlo analysis for model improvement and validation. The subject is a 7-parameter k-ε CFD turbulence model simulated in COMSOL Multiphysics®. The model predicts the lift and drag coefficients of an airfoil defined using a 6049-series airfoil parameterization from the National Advisory Committee for Aeronautics (NACA), and its predictions will be calibrated against publicly available wind tunnel data from the University of Illinois Urbana-Champaign (UIUC) database.

Bayesian model calibration requires intensive sampling of the simulation model to determine the most likely distribution of the calibration parameters, which can be a large computational burden. We greatly reduce this burden with a surrogate modeling approach, using Gaussian process emulators to mimic the CFD simulation. The emulator is trained by sampling the simulation space with a Latin Hypercube Design (LHD), a type of Design of Experiments (DOE), and its accuracy is assessed with leave-one-out cross validation (CV) error.

The Bayesian calibration framework also involves modeling the discrepancy between simulation results and physical test results, and we use Gaussian process emulators for this as well. The discrepancy emulator doubles as a validation tool: characteristic trends in the residual errors after calibration can indicate underlying model form errors that tuning the calibration parameters cannot address. In this way, we separate and quantify model form uncertainty and parameter uncertainty.

Bayesian calibration produces a posterior distribution of calibration parameter values. We will sample these distributions with Monte Carlo methods to generate model predictions, so that each new prediction carries a distribution of values reflecting the remaining uncertainty in the calibrated parameters. The resulting output distributions will be compared against the physical data and the uncalibrated model to assess the effects of the calibration and the discrepancy model.

Finally, we will perform global, variance-based sensitivity analysis on both the uncalibrated and calibrated models and examine any changes in the sensitivity indices. We will also perform sensitivity analysis on the discrepancy model to identify which parameters contribute most strongly to the differences between simulation and test data.
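To make the surrogate-modeling step concrete, here is a minimal Python sketch, with hypothetical names throughout: `cfd_lift` stands in for a COMSOL run, and the unit-cube bounds stand in for the real parameter ranges. It draws a Latin Hypercube Design over the seven calibration parameters, fits a Gaussian process emulator, and scores it with leave-one-out CV error, mirroring the workflow described above.

```python
# Sketch: GP emulator of a CFD model, trained on an LHD, checked with LOO-CV.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import LeaveOneOut

def cfd_lift(x):
    """Placeholder for one COMSOL run: 7 turbulence closure parameters in,
    lift coefficient out. Here just a smooth synthetic response."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2:].sum(axis=1)

# Latin Hypercube Design over the 7 calibration parameters (unit bounds assumed).
n_train, n_dim = 80, 7
lhd = qmc.LatinHypercube(d=n_dim, seed=0)
X = qmc.scale(lhd.random(n_train), l_bounds=[0.0] * n_dim, u_bounds=[1.0] * n_dim)
y = cfd_lift(X)

kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(n_dim))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# Leave-one-out CV error: refit without each point, then predict it.
loo_err = []
for train_idx, test_idx in LeaveOneOut().split(X):
    gp.fit(X[train_idx], y[train_idx])
    loo_err.append(float(gp.predict(X[test_idx])[0] - y[test_idx][0]))
print("LOO RMSE:", np.sqrt(np.mean(np.square(loo_err))))

gp.fit(X, y)  # final emulator trained on all runs
```

In practice each training point is one full CFD solve, so the LHD size balances emulator accuracy against simulation cost; the LOO error indicates whether more runs are needed.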
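The calibration and Monte Carlo steps could then be prototyped as below. This is a deliberately simplified sketch of the Bayesian framework in the abstract (in the spirit of Kennedy and O'Hagan): it samples the posterior of the calibration parameters with a basic random-walk Metropolis step against the emulator `gp` from the previous sketch, and it omits the GP discrepancy term. `y_obs` and `noise_sd` are hypothetical values, not UIUC data.

```python
# Sketch: simplified Bayesian calibration of the 7 parameters via Metropolis,
# followed by Monte Carlo propagation of the posterior through the emulator.
import numpy as np

rng = np.random.default_rng(1)
y_obs = 1.2        # hypothetical measured lift coefficient (not UIUC data)
noise_sd = 0.05    # hypothetical measurement noise standard deviation

def log_post(theta):
    # Uniform prior on [0, 1]^7; Gaussian likelihood around the emulator mean.
    if np.any(theta < 0.0) or np.any(theta > 1.0):
        return -np.inf
    mu = gp.predict(theta.reshape(1, -1))[0]
    return -0.5 * ((y_obs - mu) / noise_sd) ** 2

theta = np.full(7, 0.5)          # start at the center of the parameter cube
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal(7)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.asarray(samples[1000:])                  # discard burn-in draws

# Monte Carlo propagation: predictions from posterior draws inherit the
# calibration-parameter uncertainty that remains after calibration.
pred = gp.predict(post)
print("calibrated prediction:", pred.mean(), "+/-", pred.std())
```

Pushing the retained posterior draws back through the emulator is exactly the Monte Carlo propagation step described above: the spread of `pred` reflects the remaining parameter uncertainty.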
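Finally, the global variance-based sensitivity analysis might be sketched with Sobol indices computed on the emulator. The SALib package used here is an assumption (any Saltelli-style estimator works), and the parameter names are placeholders; running the same analysis on the uncalibrated emulator, the calibrated emulator, and the discrepancy emulator yields the before/after comparison of sensitivity indices described above.

```python
# Sketch: global, variance-based (Sobol) sensitivity indices on the emulator.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 7,
    "names": [f"c{i}" for i in range(7)],   # hypothetical closure coefficients
    "bounds": [[0.0, 1.0]] * 7,
}

X = saltelli.sample(problem, 1024)   # Saltelli design for Sobol estimation
Y = gp.predict(X)                    # cheap evaluations via the emulator
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.3f}, total={st:.3f}")
```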