dc.creator | Rebba, Ramesh | |
dc.date.accessioned | 2020-08-23T15:50:27Z | |
dc.date.available | 2006-12-07 | |
dc.date.issued | 2005-12-07 | |
dc.identifier.uri | https://etd.library.vanderbilt.edu/etd-11222005-184433 | |
dc.identifier.uri | http://hdl.handle.net/1803/14719 | |
dc.description.abstract | Full-scale testing of large engineering systems to assess performance can be infeasible and expensive. With the growth of advanced computing capabilities, model-based simulation plays an increasingly important role in the design of such systems. When computational models are developed, the underlying assumptions and approximations introduce various types of error into the predictions. To accept a model prediction with confidence, the computational model must be rigorously verified and validated. When the input parameters of the model are uncertain, the model prediction is also uncertain; at the same time, the validation experiments have measurement errors. Model validation therefore involves comparing predictions with test data when both are uncertain. This study develops validation metrics that address these various uncertainties and errors, for both component-level and system-level models, using both classical and Bayesian statistics.
Another goal of model validation is to extend what is learned about a model's predictive capability in the tested region to an inference about its predictive capability in the untested region of the actual application, and to quantify the confidence in that extrapolation. Sometimes the response quantity of interest in the target application differs from the validated response quantity. Validation inferences may need to be extrapolated from nominal to off-nominal (tail) conditions, or component-level data may have to be used to draw a partial inference about the validity of a system-level prediction. For all of these cases, a Bayesian network methodology is developed to extrapolate inferences from the validation domain to the application domain.
This study also proposes a methodology to estimate the errors in computational models and to include them in reliability-based design optimization (RBDO). Various sources of uncertainty, error, and approximation in model form selection and numerical solution are included in a first-order RBDO methodology. | |
dc.format.mimetype | application/pdf | |
dc.subject | Engineering -- Mathematical models -- Evaluation | |
dc.subject | verification | |
dc.subject | error estimation | |
dc.subject | Bayesian statistics | |
dc.subject | extrapolation | |
dc.subject | hypothesis testing | |
dc.subject | model validation | |
dc.subject | Reliability (Engineering) | |
dc.title | Model Validation and Design under Uncertainty | |
dc.type | dissertation | |
dc.contributor.committeeMember | Prof. Prodyot K. Basu | |
dc.contributor.committeeMember | Prof. Bruce Cooil | |
dc.contributor.committeeMember | Prof. Gautam Biswas | |
dc.type.material | text | |
thesis.degree.name | PhD | |
thesis.degree.level | dissertation | |
thesis.degree.discipline | Civil Engineering | |
thesis.degree.grantor | Vanderbilt University | |
local.embargo.terms | 2006-12-07 | |
local.embargo.lift | 2006-12-07 | |
dc.contributor.committeeChair | Prof. Sankaran Mahadevan | |