Uncertainty Aggregation in System Performance Assessment
Probabilistic performance assessment evaluates a system’s capability to accomplish its required functions under uncertainty. Its results can support decisions such as data collection, design optimization, and operational risk management. There are two approaches to probabilistic performance assessment: (1) physics model-based and (2) test-based or data-driven. In both approaches, the assessment is affected by aleatory uncertainty (natural variability) and epistemic uncertainty (lack of knowledge). Insufficient data or knowledge causes epistemic uncertainty both in model inputs (statistical uncertainty) and in the models themselves (model uncertainty). Methods for systematically incorporating these sources of epistemic uncertainty into performance assessment are therefore investigated. Systems may consist of a single component; multiple components arranged hierarchically (multi-level systems); components that interact with a time lag (such as feedback control systems); or components with simultaneous interactions (such as multi-physics systems). In this dissertation, a Bayesian framework is developed for quantifying and aggregating multiple uncertainty sources across these system configurations, in order to accomplish performance assessment.
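The distinction between aleatory and epistemic (statistical) uncertainty can be made concrete with a minimal sketch, not taken from the dissertation itself: a conjugate normal-normal Bayesian update in which the noise standard deviation `sigma` represents irreducible aleatory variability, while the posterior standard deviation of the unknown mean represents epistemic uncertainty that shrinks as test data accumulate. All function names and numbers below are illustrative assumptions.

```python
import math

def posterior_of_mean(prior_mu, prior_std, sigma, data):
    """Conjugate normal-normal update with known aleatory std `sigma`.

    Returns the posterior mean and std of the unknown performance mean.
    The posterior std quantifies epistemic (statistical) uncertainty;
    it decreases as more observations are collected, whereas `sigma`
    (natural variability) is irreducible.
    """
    n = len(data)
    prior_prec = 1.0 / prior_std**2        # prior precision on the mean
    like_prec = n / sigma**2               # precision contributed by data
    post_prec = prior_prec + like_prec
    post_std = math.sqrt(1.0 / post_prec)
    sample_mean = sum(data) / n
    post_mu = (prior_prec * prior_mu + like_prec * sample_mean) / post_prec
    return post_mu, post_std

# Hypothetical test data for one performance quantity.
sigma = 2.0                                # aleatory variability (fixed)
data = [9.8, 10.4, 10.1, 9.7, 10.2]
mu, std = posterior_of_mean(prior_mu=8.0, prior_std=5.0, sigma=sigma, data=data)
print(f"posterior mean ~ {mu:.2f}, epistemic std {std:.2f} (prior was 5.00)")
# Collecting more data would shrink the epistemic std further,
# but the aleatory sigma = 2.0 would remain.
```

This separation is what allows data-collection decisions to be evaluated: additional tests reduce only the epistemic part of the total uncertainty.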