At the start of this lecture, we review statistical topics and fitting techniques from Unit G (particularly Lecture G3, on goodness of fit). In particular, we review the fundamentals of hypothesis testing (the type-I error rate alpha, the type-II error rate beta, statistical power, sensitivity, false positive rate, true negative rate, and the receiver operating characteristic, ROC) and then work through examples of using the Chi-squared and Kolmogorov–Smirnov tests for goodness of fit against arbitrary distributions. We also introduce the Anderson–Darling test (for flexibility and higher power) and the Shapiro–Wilk test (for high-powered normality testing).
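As an illustrative sketch (not code from the lecture), the one-sample Kolmogorov–Smirnov statistic can be computed in a few lines of pure Python: it is the largest gap between the empirical CDF of a sample and the hypothesized CDF. The function names, the uniform null distribution, and the sample sizes below are our own choices for the example; the 1.36/sqrt(n) threshold is the standard large-sample 5% critical value.

```python
import random

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the empirical CDF of `sample` and the hypothesized `cdf`."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF steps from i/n to (i+1)/n at x, so check both sides.
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

random.seed(0)
n = 500
uniform_sample = [random.random() for _ in range(n)]       # truly uniform data
skewed_sample = [random.random() ** 3 for _ in range(n)]   # heavily skewed data

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)  # CDF of Uniform(0, 1)

d_good = ks_statistic(uniform_sample, uniform_cdf)
d_bad = ks_statistic(skewed_sample, uniform_cdf)

# Asymptotic 5% critical value for the one-sample KS test.
crit = 1.36 / n ** 0.5
print(f"D(uniform) = {d_good:.3f}, D(skewed) = {d_bad:.3f}, crit = {crit:.3f}")
```

Running this shows the skewed sample's statistic far exceeding the critical value while the genuinely uniform sample's statistic stays small, which is the basic logic behind rejecting or retaining a hypothesized distribution.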
We then pivot to formally defining simulation verification, validation, and calibration, and then introduce techniques that incorporate rigorous statistical tools into the validation and calibration process. We focus specifically on the t-test (for confirming that populations of simulation output are consistent with the mean behavior of the real systems they are meant to represent) and on power analysis (for understanding the conditions under which a failure to detect a difference between the simulation and the real system justifies inferring that the simulation is sufficiently close to reality).
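To make this pairing concrete, here is a minimal sketch (our own illustration, not the lecture's code) of Welch's two-sample t statistic alongside a power calculation. Both use a large-sample normal approximation via the standard library's `statistics.NormalDist`, so the p-value and power are approximate; the sample sizes, means, and effect size below are made up for the example.

```python
import random
from statistics import NormalDist, mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic, with a large-sample normal
    approximation for the two-sided p-value."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se = (va / na + vb / nb) ** 0.5
    t = (mean(sample_a) - mean(sample_b)) / se
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, p

def power_two_sample(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sample test of equal means when the true
    difference is `delta`, each group has `n` points, and both groups share
    standard deviation `sigma` (normal approximation)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = delta / (sigma * (2 / n) ** 0.5)
    return (1 - NormalDist().cdf(z_crit - z_effect)
            + NormalDist().cdf(-z_crit - z_effect))

random.seed(1)
real = [random.gauss(10.0, 2.0) for _ in range(200)]  # measurements of the real system
sim = [random.gauss(10.1, 2.0) for _ in range(200)]   # simulation output, nearly the same mean

t, p = welch_t(sim, real)
print(f"t = {t:.3f}, p = {p:.3f}")

# If the t-test fails to reject, power analysis says how large a true
# difference we could plausibly have missed at this sample size.
print(f"power to detect a 0.5-unit shift: {power_two_sample(0.5, 2.0, 200):.2f}")
```

The key inference pattern: a non-significant t-test alone proves nothing, but a non-significant t-test combined with high power against any difference we would care about supports the claim that the simulation's mean behavior matches the real system.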