SmartUQ at ASME Turbo Expo

Phoenix, AZ
June 16 - 21

We invite you to stop by booth #441 at the ASME Turbo Expo conference to meet experts in engineering analytics and uncertainty quantification, see demonstrations, and explore how SmartUQ can improve your analysis.

ASME Turbo Expo

Conference Presentation

A Statistical Imputation Method for Handling Missing Values in Generalized Polynomial Chaos Expansions

June 20 - 4:00 PM to 4:30 PM
Presented by Dr. Mark Andrews, Uncertainty Quantification Technology Steward

Generalized Polynomial Chaos Expansion (gPCE) is widely used in uncertainty quantification and sensitivity analysis for aerospace applications. gPCE uses spectral projection to fit a polynomial model, the gPCE model, to a sparse grid Design of Experiments (DOE). The gPCE model can be used to make predictions, analytically determine uncertainties, and calculate sensitivity indices. However, the model’s accuracy depends heavily on the DOE being complete: when a sampling point is missing from the sparse grid DOE, the accuracy of the gPCE analysis suffers severely, often necessitating a new DOE. Because missing data points are a common occurrence in engineering testing and simulation, this problem complicates the use of gPCE analysis. In this paper, we present a statistical imputation method for addressing the missing data problem, allowing gPCE modeling to handle missing values in the sparse grid DOE. Using a series of numerical results, the study demonstrates the convergence characteristics of the methodology with respect to reaching steady-state values for the missing points. The article concludes with a discussion of the convergence rate, advantages, and feasibility of the proposed methodology.
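As a simplified illustration of the iterative idea behind such imputation (this is not the paper's actual algorithm, and the model function, grid, polynomial degree, and tolerance below are all hypothetical), one can seed the missing design point with an initial guess, fit a polynomial surrogate to the filled-in data, re-predict the missing response, and repeat until the imputed value reaches a steady state:

```python
import numpy as np

# Hypothetical 1-D model response sampled on a small design grid.
def model(x):
    return np.exp(-x) * np.sin(3 * x)

x = np.linspace(0.0, 1.0, 9)       # design points
y = model(x)
missing = 4                        # index of the "failed" run
y[missing] = np.nan

# Iterative imputation: start from the mean of the observed responses,
# fit a polynomial surrogate to the filled-in data, re-predict the
# missing value, and iterate until it stops changing (steady state).
y_imp = np.nanmean(y)
for _ in range(50):
    y_filled = y.copy()
    y_filled[missing] = y_imp
    coeffs = np.polynomial.polynomial.polyfit(x, y_filled, deg=4)
    y_new = np.polynomial.polynomial.polyval(x[missing], coeffs)
    if abs(y_new - y_imp) < 1e-10:
        break
    y_imp = y_new

print(abs(y_imp - model(x[missing])))  # imputation error vs. true value
```

Because the least-squares prediction at the missing point is a contraction in the imputed value, this fixed-point iteration settles to a steady state, mirroring the convergence behavior the abstract describes.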

Conference Tutorial

Industry Challenges in Uncertainty Quantification: Narrowing the Simulation - Test Gap with Statistical Calibration

June 20 - 8:00 AM to 10:00 AM
Presented by Dr. Mark Andrews, Uncertainty Quantification Technology Steward

As the complexity of numerical simulations increases, so does the uncertainty, making decision makers more skeptical of simulation results. By applying advanced statistical methods such as Uncertainty Quantification (UQ), simulation models can become a trustworthy source of information for decision analytics. UQ methods have been broadly applied across virtually all computer simulation disciplines, from fluid dynamics and heat transfer to solid mechanics and structures. Establishing how well a numerical simulation represents reality is one way to make simulation results more trustworthy for decision makers. This assessment is usually accomplished by comparing the simulation results to physical data. However, the observed mismatch between simulation results and physical test results can blur an engineer’s understanding of how well the simulation represents reality. This tutorial focuses on statistical model calibration, a process used to quantify the uncertainties in the simulation model, which provides an understanding of this mismatch and a means to narrow the simulation – physical test gap. Using a case study, the tutorial will sequentially walk through the model calibration process used to quantify uncertainties for simulations and physical experiments.

In addition to simulations, UQ has a fundamental role in the emerging field of the Digital Twin, which has applications in the aerospace, automotive, manufacturing, and medical device industries. For both Digital Twins and general computer simulations, UQ can supply an assessment of uncertainty at critical points in the development process, serving as an ‘authoritative digital surrogate truth source’. This key information, based on simulation and physical data, is shared with stakeholders and used to predict the probability of meeting technical requirements at these critical points, as well as to prescribe the next steps needed to meet performance metrics.
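The core of calibration, tuning an uncertain simulation parameter until the simulation-test mismatch is minimized, can be sketched in a few lines. This is a deliberately minimal example, not SmartUQ's method: the cooling model, the heat-transfer-like parameter `h`, and the noise level are all invented for illustration.

```python
import numpy as np

# Hypothetical example: calibrate an uncertain decay parameter h so that
# a simple exponential-cooling "simulation" matches noisy test data.
rng = np.random.default_rng(0)

def simulate(t, h):
    # Lumped exponential cooling model; h is the parameter to calibrate.
    return 20.0 + 80.0 * np.exp(-h * t)

t_test = np.linspace(0.0, 10.0, 20)
h_true = 0.35                              # unknown in practice
y_test = simulate(t_test, h_true) + rng.normal(0.0, 0.5, t_test.size)

# Grid-search calibration: choose the h that minimizes the
# simulation-test mismatch (sum of squared residuals).
h_grid = np.linspace(0.1, 0.6, 501)
sse = [np.sum((simulate(t_test, h) - y_test) ** 2) for h in h_grid]
h_cal = h_grid[int(np.argmin(sse))]
print(f"calibrated h = {h_cal:.3f}")
```

In practice the residual that remains after calibration is itself informative: it is exactly the quantified mismatch between simulation and test that the tutorial discusses.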

Introduction to Probabilistic Analysis and Uncertainty Quantification

June 20 - 10:15 AM to 12:15 PM
Presented by Dr. Mark Andrews, Uncertainty Quantification Technology Steward

Experienced practitioners who construct complex simulation models of critical systems know that replicating real-world performance is challenging due to uncertainties found in both simulations and physical tests. These uncertainties arise from sources such as measurement inaccuracies, material properties, boundary and initial conditions, and modeling approximations. Using case studies, this tutorial will introduce probabilistic and Uncertainty Quantification (UQ) methods, benefits, and tools. UQ is a systematic process that puts error bands on results by incorporating real-world variability and probabilistic behavior into engineering and systems analysis. UQ answers the question: what is likely to happen when the system is subjected to uncertain and variable inputs? Answering this question facilitates significant risk reduction, robust design, and greater confidence in engineering decisions. Modern UQ techniques use powerful statistical models to map the input-output relationships of a system, significantly reducing the number of simulations or tests required to obtain statistically defensible answers. The tutorial will discuss basic UQ and probabilistic methods, including Gaussian processes, polynomial chaos expansion, sparse grids, Latin hypercube designs, model calibration, model validation, and sensitivity analysis, as well as how to account for aleatoric and epistemic uncertainties. The course will also discuss the broad applications these probabilistic techniques have across numerous engineering systems, including the Digital Thread and Digital Twin.
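To make the idea of "error bands from uncertain inputs" concrete, the sketch below propagates two uncertain inputs through a textbook cantilever-beam deflection formula using a simple one-dimensional Latin hypercube sample per input. The beam model, the input ranges, and the sample size are all illustrative assumptions, not output from any SmartUQ tool.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

def lhs_uniform(n, rng):
    # 1-D Latin hypercube on [0, 1): one point per stratum, shuffled.
    return rng.permutation((np.arange(n) + rng.random(n)) / n)

# Assumed (hypothetical) input uncertainties:
P = 900.0 + 200.0 * lhs_uniform(n, rng)    # tip load [N], uniform
EI = 4.0e4 + 2.0e4 * lhs_uniform(n, rng)   # stiffness [N*m^2], uniform
L = 2.0                                    # beam length [m], fixed

# Tip deflection of a cantilever beam under a point load.
y = P * L**3 / (3.0 * EI)

mean = y.mean()
lo, hi = np.percentile(y, [2.5, 97.5])     # 95% "error band"
print(f"mean = {mean:.4f} m, 95% band = [{lo:.4f}, {hi:.4f}] m")
```

The stratification in `lhs_uniform` is what distinguishes Latin hypercube sampling from plain Monte Carlo: every marginal stratum is hit exactly once, so statistically defensible estimates emerge from far fewer model runs.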