IEE 475: Simulating Stochastic Systems
Archived lectures from an undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Friday, November 7, 2025
In this lecture, we review the estimation of absolute performance from simulation, with a focus on choosing the number of replications needed for transient simulations of terminating systems. The lecture starts with an overview of point estimation, bias, and different types of point estimators, including quantile estimation and how quantile estimation lets a simulation serve as a null-hypothesis-prediction generator. We then introduce interval estimation with confidence intervals and prediction intervals. Confidence intervals, which can be viewed as visualizations of t-tests, provide an alternative way to choose the number of required replications without doing a formal power analysis.
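As a rough illustration of that last point (not from the lecture slides), here is a minimal Python sketch of the confidence-interval approach: keep adding replications until the 95% half-width around the across-replication mean falls below a chosen precision. The output_of_one_replication stub is a hypothetical stand-in for a real terminating simulation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def output_of_one_replication():
        # Placeholder for one terminating-simulation replication; here we just
        # draw a noisy "average waiting time" for illustration.
        return rng.normal(loc=10.0, scale=2.0)

    target_half_width = 0.5   # desired precision (same units as the output)
    alpha = 0.05              # 95% confidence level
    min_reps = 10             # always run a pilot set of replications first

    outputs = [output_of_one_replication() for _ in range(min_reps)]
    while True:
        n = len(outputs)
        mean = np.mean(outputs)
        half_width = (stats.t.ppf(1 - alpha / 2, df=n - 1)
                      * np.std(outputs, ddof=1) / np.sqrt(n))
        if half_width <= target_half_width:
            break
        outputs.append(output_of_one_replication())

    print(f"Used {n} replications: mean = {mean:.2f} +/- {half_width:.2f}")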
Tuesday, November 4, 2025
Lecture J1 (2025-11-04): Estimation of Absolute Performance, Part I (Introduction to Point and Interval Estimation)
In this lecture, we introduce the estimation of absolute performance measures in simulation – effectively shifting our focus from validating input models to validating and making inferences about simulation outputs. Most of this lecture is a review of statistics, with attention to the reasons behind the assumptions of various parametric and non-exact non-parametric methods. We also introduce a few more advanced statistical topics, such as non-parametric methods and special high-power tests for normality. We then switch our focus to simulations and their outputs, starting with the definitions of terminating and non-terminating systems as well as the related transient and steady-state simulations. We will pick up next time with details related to performance measures (and estimation methods) for transient simulations, and we will cover steady-state simulations after that. Our goal was also to discuss the difference between point estimation and interval estimation for simulation, but we will hold off on that topic until the next lecture.
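To make the terminating-system and transient-simulation vocabulary concrete, here is a small illustrative sketch (not from the lecture materials): a single-server queue simulated with the Lindley recurrence, where each replication terminates after a fixed number of customers and the across-replication sample mean acts as a point estimate of the expected average waiting time.

    import numpy as np

    rng = np.random.default_rng(7)

    def one_replication(n_customers=100, arrival_rate=0.9, service_rate=1.0):
        # Lindley recurrence for waiting times in a single-server FIFO queue:
        # W[i+1] = max(0, W[i] + S[i] - A[i+1]); the run terminates after
        # n_customers (the terminating event for this transient simulation).
        interarrivals = rng.exponential(1 / arrival_rate, n_customers)
        services = rng.exponential(1 / service_rate, n_customers)
        wait = 0.0
        waits = []
        for i in range(n_customers):
            waits.append(wait)
            wait = max(0.0, wait + services[i] - interarrivals[i])
        return np.mean(waits)  # within-replication average waiting time

    # Independent replications of the terminating (transient) simulation
    reps = np.array([one_replication() for _ in range(30)])
    print(f"Point estimate of mean waiting time: {reps.mean():.3f} "
          f"(sample std. dev. across replications: {reps.std(ddof=1):.3f})")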
Thursday, October 30, 2025
Lecture I (2025-10-30): Statistical Reflections
In this lecture, we review statistical fundamentals – such as the origins of the t-test, the meaning of type-I and type-II error (and alternative terminology for both, such as false positive rate and false negative rate), and the connection to statistical power (sensitivity). We review the Receiver Operating Characteristic (ROC) curve and give a qualitative description of where it gets its shape in a hypothesis test. We close with a validation example (from Lecture H) where we use a power analysis on a one-sample t-test to help justify whether we have gathered enough data to trust that a simulation model is a good match for reality when its mean output performance is similar to the real system's. Peppered throughout the lecture are also comments about why normality is required for t-tests, why there is a minimum expected count for chi-squared tests, and how to avoid statistical inference issues when making multiple comparisons.
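As a hedged sketch of that power-analysis step (the 0.5 effect size below is just an assumed smallest difference of practical interest, expressed in standard-deviation units), the statsmodels package can solve for the number of replications needed to reach a desired power:

    from statsmodels.stats.power import TTestPower

    # Smallest difference between the simulation and real-system means that we
    # would care about, in units of the output standard deviation (a Cohen's-d
    # style effect size; the 0.5 here is only an assumption for illustration).
    effect_size = 0.5
    alpha = 0.05   # type-I error rate
    power = 0.80   # desired sensitivity (1 - beta)

    n_required = TTestPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power,
                                          alternative='two-sided')
    print(f"Replications needed for {power:.0%} power: {n_required:.1f}")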
Tuesday, October 28, 2025
Lecture H (2025-10-28): Verification, Validation, and Calibration of Simulation Models
At the start of this lecture, we review statistical topics and fitting techniques from Unit G (particularly Lecture G3, on goodness of fit). In particular, we review hypothesis testing fundamentals (type-I error, type-II error, statistical power, sensitivity, false positive rate, true negative rate, receiver operating characteristic, ROC, alpha, beta) and then go into examples of using Chi-squared and Kolmogorov–Smirnov tests for goodness of fit for arbitrary distributions. We also introduce Anderson–Darling (for flexibility and higher power) and Shapiro–Wilk (for high-powered normality testing).
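For readers who want to try these goodness-of-fit and normality tests, here is a minimal SciPy sketch on synthetic stand-in data (the distributions and sample sizes are assumptions, not values from the lecture):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    interarrivals = rng.exponential(scale=2.0, size=200)   # stand-in data
    outputs = rng.normal(loc=10.0, scale=1.5, size=40)     # stand-in data

    # Kolmogorov-Smirnov test against an exponential with the scale estimated
    # from the data (note: estimating parameters from the same data makes the
    # standard KS p-value optimistic).
    scale_hat = interarrivals.mean()
    ks_stat, ks_p = stats.kstest(interarrivals, 'expon', args=(0, scale_hat))

    # Shapiro-Wilk (high-power normality test) and Anderson-Darling on outputs.
    sw_stat, sw_p = stats.shapiro(outputs)
    ad_result = stats.anderson(outputs, dist='norm')

    print(f"KS: D = {ks_stat:.3f}, p = {ks_p:.3f}")
    print(f"Shapiro-Wilk: W = {sw_stat:.3f}, p = {sw_p:.3f}")
    print(f"Anderson-Darling: A2 = {ad_result.statistic:.3f}, "
          f"5% critical value = {ad_result.critical_values[2]:.3f}")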
We then pivot to formally defining simulation verification, validation, and calibration and then introducing techniques that incorporate rigorous statistical tools into the validation and calibration process. We focus specifically on the use of the t-test (for confirming that populations of simulation data are consistent with the mean behaviors from the real systems they are meant to represent) and the power analysis (for understanding the conditions when a failure to detect a difference between simulation and real system allows for inferring that the simulation is sufficiently close to the real system).
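A minimal sketch of that validation t-test in SciPy, using made-up numbers for the real-system mean and the simulation outputs, might look like the following:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    real_system_mean = 12.0                       # observed mean of the real system
    sim_outputs = rng.normal(12.3, 1.0, size=25)  # stand-in for replication outputs

    # H0: the simulation's mean output equals the real system's mean.
    t_stat, p_value = stats.ttest_1samp(sim_outputs, popmean=real_system_mean)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
    # A small p-value signals a detectable mean discrepancy; a large p-value is
    # only reassuring if a power analysis says the test could have detected a
    # practically important discrepancy with this many replications.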
Thursday, October 23, 2025
Lecture G3 (2025-10-23): Input Modeling, Part 3 (Parameter Estimation and Goodness of Fit)
In this lecture, we (nearly) finish our coverage of input modeling, with a focus on parameter estimation and assessing goodness of fit. We review input modeling in general and then briefly review the fundamentals of hypothesis testing. We discuss type-I error, p-values, type-II error, effect sizes, and statistical power. We discuss the dangers of using p-values at very large sample sizes (where small p-values are not meaningful) and at very small sample sizes (where large p-values are not meaningful), with examples applied to best-of-7 sports tournaments and voting. We then discuss different kinds of distribution parameters (shape, location, scale, and rate), introduce summary statistics (the sample mean and sample variance) and maximum likelihood estimation (MLE), and work an example of a point estimate of the rate of an exponential distribution. We introduce the chi-squared (lower power) and Kolmogorov–Smirnov (KS, high power) tests for goodness of fit, but we will go into them in more detail at the start of the next lecture.
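As an illustrative sketch of those last two steps (the data below are synthetic stand-ins), the MLE of an exponential rate is one over the sample mean, and a chi-squared goodness-of-fit test can then compare binned data against the fitted distribution:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    data = rng.exponential(scale=2.0, size=200)   # stand-in interarrival times

    # Maximum likelihood estimate of the exponential rate: lambda_hat = 1 / xbar
    rate_hat = 1.0 / data.mean()
    print(f"MLE of rate: {rate_hat:.3f} per time unit")

    # Chi-squared goodness of fit using 10 equal-probability bins, so every
    # expected count is n/10 = 20 (comfortably above the usual minimum of 5).
    k = 10
    edges = stats.expon.ppf(np.linspace(0, 1, k + 1), scale=1 / rate_hat)
    observed, _ = np.histogram(data, bins=edges)
    expected = np.full(k, len(data) / k)

    # ddof=1 because one parameter (the rate) was estimated from the data,
    # leaving k - 1 - 1 degrees of freedom.
    chi2_stat, p_value = stats.chisquare(observed, expected, ddof=1)
    print(f"chi-squared = {chi2_stat:.2f}, p = {p_value:.3f}")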
Tuesday, October 21, 2025
Lecture G2 (2025-10-21): Input Modeling, Part 2 (Selection of Model Structure)
In this lecture, we continue discussing the choice of input models in stochastic simulation. Here, we pivot from talking about data collection to selection of the broad family of probabilistic distributions that may be a good fit for data. We start with an example where a histogram leads us to introduce additional input models into a flow chart. The rest of the lecture is about choosing models based on physical intuition and the shape of the sampled data (e.g., the shape of histograms). We close with a discussion of probability plots – Q-Q plots and P-P plots, as are used with "fat-pencil tests" – as a good tool for justifying the choice of a family for a certain data set. The next lecture will go over the actual estimation of the parameters for the chosen families and how to quantitatively assess goodness of fit.
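A minimal sketch of such a probability plot with SciPy and matplotlib (on synthetic stand-in data) might look like this; the "fat-pencil test" is then just a visual check of whether the points stay close to the reference line:

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(5)
    data = rng.exponential(scale=3.0, size=150)   # stand-in service times

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

    # Q-Q plot against an exponential family: if the points hug the reference
    # line, the family is a reasonable candidate for these data.
    stats.probplot(data, dist="expon", plot=ax1)
    ax1.set_title("Q-Q plot vs. exponential")

    # Q-Q plot against a normal family for comparison; skewed data should bend
    # away from the line.
    stats.probplot(data, dist="norm", plot=ax2)
    ax2.set_title("Q-Q plot vs. normal")

    plt.tight_layout()
    plt.show()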
Thursday, October 16, 2025
Lecture G1 (2025-10-16): Input Modeling, Part 1 (Data Collection)
In this lecture, we introduce the detailed process of input modeling. Input models are probabilistic models that introduce variation into simulation models of systems, and they must be chosen to match the statistical distributions observed in data. Over this unit, we cover the collection of data for this process, the choice of probabilistic families to fit to these data, the optimization of parameter choices within those families, and the evaluation of the resulting fits with goodness-of-fit testing. In this lecture, we discuss issues related to data collection.