Tuesday, October 29, 2024

Lecture H (2024-10-29): Verification, Validation, and Calibration of Simulation Models

During this lecture slot, we start with slides from Lecture G3 (on goodness of fit) that we did not reach in the previous lecture due to timing. In particular, we review hypothesis-testing fundamentals (type-I error, type-II error, statistical power, sensitivity, false positive rate, true negative rate, the receiver operating characteristic (ROC), alpha, and beta) and then work through examples of using chi-squared and Kolmogorov–Smirnov tests for goodness of fit for arbitrary distributions. We also introduce the Anderson–Darling test (for flexibility and higher power) and the Shapiro–Wilk test (for high-powered normality testing).
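
For readers who want to experiment with these tests, here is a minimal sketch using scipy.stats; the synthetic sample and the choice of a normal reference distribution are our own illustrative assumptions, not material from the slides:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=200)  # illustrative stand-in data

# Kolmogorov-Smirnov test against a normal fitted to the sample (fitting
# parameters from the same data makes the nominal p-value optimistic)
mu, sigma = np.mean(sample), np.std(sample, ddof=1)
ks_stat, ks_p = stats.kstest(sample, "norm", args=(mu, sigma))

# Anderson-Darling weights the tails more heavily, one source of its higher
# power; scipy reports critical values instead of a p-value
ad = stats.anderson(sample, dist="norm")

# Shapiro-Wilk is a high-powered test specifically for normality
sw_stat, sw_p = stats.shapiro(sample)

print(f"KS: D = {ks_stat:.3f}, p = {ks_p:.3f}")
print(f"AD: A^2 = {ad.statistic:.3f}, 5% critical value = {ad.critical_values[2]:.3f}")
print(f"SW: W = {sw_stat:.3f}, p = {sw_p:.3f}")
```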

We close where we originally intended to start: with definitions of testing, verification, validation, and calibration. We will pick up from there next time.



Thursday, October 24, 2024

Lecture G3 (2024-10-24): Input Modeling, Part 3: Parameter Estimation and Goodness of Fit

In this lecture, we (nearly) finish our coverage of Input Modeling, focusing on parameter estimation and assessing goodness of fit. We review input modeling in general and then briefly review the fundamentals of hypothesis testing. We discuss type-I error, p-values, type-II error, effect sizes, and statistical power. We discuss the dangers of relying on p-values at very large sample sizes (where small p-values are not meaningful) and at very small sample sizes (where large p-values are not meaningful), with examples applied to best-of-7 sports tournaments and voting. We then discuss the different types of distribution parameters (shape, location, scale, and rate), introduce summary statistics (the sample mean and sample variance) and maximum likelihood estimation (MLE), and work an example of a point estimate of the rate of an exponential distribution. We introduce the chi-squared (lower-power) and Kolmogorov–Smirnov (KS, higher-power) tests for goodness of fit, but we will go into them in more detail at the start of the next lecture.
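
As a companion to the exponential MLE example, here is a minimal sketch, assuming numpy/scipy and synthetic exponential data (the sample size, seed, and equal-count binning are illustrative choices, not from the lecture). Maximizing the log-likelihood l(lambda) = n log(lambda) - lambda * sum(x) gives the closed form lambda-hat = n / sum(x) = 1 / (sample mean):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_rate = 0.5
x = rng.exponential(scale=1.0 / true_rate, size=500)  # stand-in data

rate_hat = 1.0 / np.mean(x)  # MLE point estimate of the exponential rate

# Chi-squared goodness of fit against the fitted exponential: bin the data
# and compare observed to expected counts (10 equal-count bins, illustrative)
edges = np.quantile(x, np.linspace(0, 1, 11))
edges[0], edges[-1] = 0.0, np.inf
observed, _ = np.histogram(x, bins=edges)
cdf = stats.expon.cdf(edges, scale=1.0 / rate_hat)
expected = len(x) * np.diff(cdf)

# ddof=1 accounts for the one parameter (the rate) estimated from the data
chi2_stat, p_value = stats.chisquare(observed, expected, ddof=1)
print(f"rate_hat = {rate_hat:.3f}, chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")
```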



Tuesday, October 22, 2024

Lecture G2 (2024-10-22): Input Modeling, Part 2: Selection of Model Structure

In this lecture, we continue discussing the choice of input models in stochastic simulation. Here, we pivot from data collection to selection of the broad family of probability distributions that may be a good fit for the data. We start with an example where a histogram leads us to introduce additional input models into a flow chart. The rest of the lecture is about choosing model families based on physical intuition and the shape of the sampled data (e.g., the shape of histograms). We close with a discussion of probability plots – Q-Q plots and P-P plots, as used in "fat-pencil tests" – as a good tool for justifying the choice of a family for a given data set (see the sketch below). The next lecture will cover the actual estimation of parameters for the chosen families and how to quantitatively assess goodness of fit.
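
As an illustration of the fat-pencil test, here is a minimal sketch assuming matplotlib/scipy and synthetic lognormal "service-time" data (our own stand-in, not from the lecture). If the plotted points hug the reference line closely enough to be covered by a fat pencil, the candidate family is a plausible choice:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
data = rng.lognormal(mean=1.0, sigma=0.4, size=150)  # stand-in service times

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# Q-Q plot against a normal: expect visible curvature for skewed data
stats.probplot(data, dist="norm", plot=ax1)
ax1.set_title("Normal Q-Q of data (curvature expected)")

# Q-Q plot of log(data) against a normal: a straight line suggests lognormal
stats.probplot(np.log(data), dist="norm", plot=ax2)
ax2.set_title("Normal Q-Q of log(data)")

plt.tight_layout()
plt.show()
```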



Friday, October 18, 2024

Lecture G1 (2024-10-17): Input Modeling, Part 1: Data Collection

In this lecture, we introduce the detailed process of input modeling. Input models are probabilistic models that introduce variation into simulation models of systems; they must be chosen to match the statistical distributions found in data. Over this unit, we cover collecting data for this process, choosing probabilistic families to fit to those data, optimizing the parameter choice within those families, and evaluating the resulting fit with goodness-of-fit tests. In this lecture, we discuss issues related to data collection.



Thursday, October 3, 2024

Lecture F (2024-10-03): Midterm Review

During this lecture, we review the topics covered up to this point in the course as preparation for the upcoming midterm exam. Students are encouraged to bring their own questions to class so that we can focus on the topics where students feel they need the most help.



Tuesday, October 1, 2024

Lecture E2 (2024-10-01): Random-Variate Generation

In this lecture, we review pseudo-random number generation and then introduce random-variate generation by way of inverse-transform sampling. In particular, we start with a review of the two most important properties of a pseudo-random number generator (PRNG), uniformity and independence, and discuss statistically rigorous methods for testing each. For uniformity, we focus on a chi-squared test for larger numbers of samples and a Kolmogorov–Smirnov (KS) test for smaller numbers of samples. For independence, we discuss autocorrelation tests and runs tests, and then we demonstrate a runs-above-and-below-the-mean test. We then shift to inverse-transform sampling for continuous and discrete random variates and how the resulting random-variate generators might be implemented in a tool like Rockwell Automation's Arena; a sketch follows below.
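
Here is a minimal sketch of inverse-transform sampling in Python (the exponential rate and the small discrete demand distribution are illustrative assumptions, not taken from the lecture). For a continuous CDF F, setting X = F^(-1)(U) with U ~ Uniform(0, 1) yields a variate with distribution F; for a discrete distribution, we scan the cumulative probabilities:

```python
import numpy as np

rng = np.random.default_rng(123)

def exponential_variate(rate: float, u: float) -> float:
    """Invert F(x) = 1 - exp(-rate * x):  x = -ln(1 - u) / rate."""
    return -np.log(1.0 - u) / rate

def discrete_variate(values, probs, u: float):
    """Return the first value whose cumulative probability reaches u."""
    cumulative = 0.0
    for v, p in zip(values, probs):
        cumulative += p
        if u <= cumulative:
            return v
    return values[-1]  # guard against floating-point round-off

u = rng.random()
print(exponential_variate(rate=2.0, u=u))                 # continuous example
print(discrete_variate([0, 1, 2], [0.5, 0.3, 0.2], u=u))  # discrete example
```

Note that both generators consume the same uniform draw u here only for compactness; in practice, each variate would use its own draw from the PRNG stream.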


