
Model Validation and Design under Uncertainty

dc.creator: Rebba, Ramesh
dc.date.accessioned: 2020-08-23T15:50:27Z
dc.date.available: 2006-12-07
dc.date.issued: 2005-12-07
dc.identifier.uri: https://etd.library.vanderbilt.edu/etd-11222005-184433
dc.identifier.uri: http://hdl.handle.net/1803/14719
dc.description.abstract: Full-scale testing of large engineering systems to assess their performance can be expensive and often infeasible. With the growth of advanced computing capabilities, model-based simulation plays an increasingly important role in the design of such systems. The assumptions and approximations made in developing computational models introduce various types of errors into the code predictions, so the models must be rigorously verified and validated before their predictions can be accepted with confidence. When the input parameters of a model are uncertain, the model prediction is also uncertain; the validation experiments, in turn, contain measurement errors. Model validation therefore involves comparing predictions with test data when both are uncertain. This study develops validation metrics that account for these uncertainties and errors, for both component-level and system-level models, using both classical and Bayesian statistics. A further goal of model validation is to extend what is learned about a model's predictive capability in the tested region to an inference about its predictive capability in the untested region of the actual application, and to quantify the confidence in that extrapolation. The response quantity of interest in the target application may differ from the validated response quantity, validation inferences may need to be extrapolated from nominal to off-nominal (tail) conditions, or component-level data may have to be used to make a partial inference about the validity of a system-level prediction. For all of these cases, a Bayesian network methodology is developed to extrapolate inferences from the validation domain to the application domain. The study also proposes a methodology to estimate the errors in computational models and to include them in reliability-based design optimization (RBDO). Various sources of uncertainty, error, and approximation in model form selection and numerical solution are incorporated into a first-order reliability-based RBDO methodology.
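The central idea of the abstract, comparing an uncertain model prediction against noisy test data through a Bayesian validation metric, can be illustrated with a minimal sketch. The function name, the Gaussian error assumptions, and the diffuse prior below are illustrative choices made here for clarity, not the dissertation's actual formulation.

    # Minimal sketch (illustrative only): a Bayes-factor validation metric
    # comparing an uncertain model prediction with noisy test data.
    # Assumptions (hypothetical): the prediction error d = y_obs - y_pred is
    # Gaussian; under H0 ("model valid") its mean is 0, and under H1 the mean
    # follows a diffuse Gaussian prior.
    import numpy as np
    from scipy import stats

    def bayes_factor_validation(y_obs, y_pred_samples, sigma_meas, prior_sd=None):
        """Return the Bayes factor B = p(data | H0) / p(data | H1).

        y_obs          : array of experimental observations
        y_pred_samples : Monte Carlo samples of the model prediction
                         (input uncertainty propagated through the model)
        sigma_meas     : measurement-error standard deviation
        prior_sd       : prior std. dev. of the mean prediction error under H1
        """
        y_obs = np.asarray(y_obs, dtype=float)
        mu_pred = np.mean(y_pred_samples)
        var_pred = np.var(y_pred_samples, ddof=1)

        # Total variance of one observation about the mean prediction:
        # prediction uncertainty (uncertain inputs) + measurement error.
        var_tot = var_pred + sigma_meas**2

        n = y_obs.size
        dbar = np.mean(y_obs) - mu_pred      # observed mean prediction error
        se2 = var_tot / n                    # variance of the sample mean error

        if prior_sd is None:
            prior_sd = np.sqrt(var_tot)      # weakly informative default

        # H0: mean error = 0        -> dbar ~ N(0, se2)
        like_h0 = stats.norm.pdf(dbar, loc=0.0, scale=np.sqrt(se2))
        # H1: mean error ~ N(0, prior_sd^2) -> dbar ~ N(0, se2 + prior_sd^2)
        like_h1 = stats.norm.pdf(dbar, loc=0.0, scale=np.sqrt(se2 + prior_sd**2))

        return like_h0 / like_h1             # B > 1 favours the model

    # Usage example: B > 1 lends support to the model; B < 1 favours rejection.
    # rng = np.random.default_rng(0)
    # y_pred_samples = rng.normal(10.0, 0.5, size=5000)   # uncertain prediction
    # y_obs = rng.normal(10.1, 0.3, size=8)               # noisy test data
    # print(bayes_factor_validation(y_obs, y_pred_samples, sigma_meas=0.3))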
dc.format.mimetype: application/pdf
dc.subject: Engineering -- Mathematical models -- Evaluation
dc.subject: verification
dc.subject: error estimation
dc.subject: Bayesian statistics
dc.subject: extrapolation
dc.subject: hypothesis testing
dc.subject: model validation
dc.subject: Reliability (Engineering)
dc.title: Model Validation and Design under Uncertainty
dc.type: dissertation
dc.contributor.committeeMember: Prof. Prodyot K. Basu
dc.contributor.committeeMember: Prof. Bruce Cooil
dc.contributor.committeeMember: Prof. Gautam Biswas
dc.type.material: text
thesis.degree.name: PHD
thesis.degree.level: dissertation
thesis.degree.discipline: Civil Engineering
thesis.degree.grantor: Vanderbilt University
local.embargo.terms: 2006-12-07
local.embargo.lift: 2006-12-07
dc.contributor.committeeChair: Prof. Sankaran Mahadevan

