Archived lectures from an undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Thursday, October 28, 2021
In this Halloween-themed lecture, we go into more detail on the foundations of hypothesis testing, specifically hypothesis testing with small sample sizes. This allows us to talk about where the Student's t test comes from (and why it is defined that way) as well as where the Chi-square test comes from (and why it is defined that way). Throughout the lecture, we highlight the importance of statistical power and work through a power-analysis example for a paired-difference t-test.
Tuesday, October 26, 2021
Lecture H (2021-10-26): Verification, Validation, and Calibration of Simulation Models
In this lecture, we review summary statistics, MLE, and goodness-of-fit tests (particularly Chi-square and Kolmogorov–Smirnov, with some mention of Anderson–Darling and Shapiro–Wilk), with a particular focus on type-I error, type-II error, and statistical power. We then introduce verification, validation, and calibration of simulation models and close with an example for the simulation of a bank. We use rigorous statistical methods to drive the calibration process, which leads to updating the model of the bank and ensuring its outputs are a good statistical match for the outputs of the real bank. This involves making use of a power analysis for a one-sample, two-sided t-test. We will cover the paired t-test version of this problem in the next lecture.
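To make the power-analysis idea concrete, here is a minimal sketch of a power calculation for a one-sample, two-sided t-test using SciPy's noncentral t distribution. The effect size (0.5 standard deviations) and the 80% power target are illustrative assumptions, not the bank example's actual numbers.

```python
from scipy import stats

def t_test_power(effect_size, n, alpha=0.05):
    """Power of a one-sample, two-sided t-test.

    effect_size: (mu_alt - mu_0) / sigma, the standardized mean shift
    n: sample size
    """
    df = n - 1
    nc = effect_size * n ** 0.5            # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # Under the alternative, the test statistic follows a noncentral t
    # distribution; power = P(|T| > t_crit)
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# Smallest sample size that detects a 0.5-sigma shift with 80% power:
n = 2
while t_test_power(0.5, n) < 0.80:
    n += 1
print(n)  # -> 34
```

Iterating on `n` like this mirrors how power analyses are used in calibration: fix the smallest effect you care about, then solve for the sample size that gives acceptable type-II error.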
Thursday, October 21, 2021
Lecture G3 (2021-10-21): Input Modeling, Part 3
In this lecture, we start out with the Q-Q and P-P probability plots that we did not have time to cover last time. We then transition to a review of type-I error and p-values and motivate the topics of statistical power and effect sizes, which we will dive into more in the next few lectures. We then discuss summary statistics and how to use methods such as maximum likelihood estimation (MLE) to come up with good choices of parameters for the distributions picked in the input-modeling process. Next time, we will discuss testing the (goodness of) fit of those parameterized distributions.
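As a small taste of MLE for input modeling, the sketch below fits an exponential rate to some hypothetical service-time data (the numbers are illustrative, not from the lecture). For the exponential family the maximum of the log-likelihood has a closed form, which makes it a good first example.

```python
# Hypothetical service-time observations (minutes) -- illustrative data
times = [1.2, 0.4, 2.7, 0.9, 1.5, 3.1, 0.6, 1.8]

# For an exponential distribution with rate lambda, the log-likelihood is
#   L(lambda) = n * log(lambda) - lambda * sum(x_i),
# which is maximized at lambda_hat = n / sum(x_i) = 1 / (sample mean).
n = len(times)
lam_hat = n / sum(times)
print(round(lam_hat, 4))  # -> 0.6557
```

Most other families (e.g., gamma, Weibull) need numerical maximization instead of a closed form, which is where library fitting routines come in.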
Tuesday, October 19, 2021
Lecture G2 (2021-10-19): Input Modeling, Part 2
In this lecture, we continue our discussion of input modeling in depth. We start with a more detailed example of how data collection can guide the choice of the structural features of a system. We then move to the point in the process when the structure of the model is set but the input models have to be chosen based on collected data. We cover methods for generating histograms and matching those histograms to common distributions (both discrete and continuous). We stop just before discussing Q-Q plots and P-P plots, which we will pick up next time along with discussing how to parameterize these chosen distributions.
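The histogram-matching step above can be sketched as follows: bin some data, fit a candidate distribution, and compare observed bin counts to the counts the fitted model predicts. The synthetic data, seed, and choice of an exponential candidate here are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(475)                # arbitrary seed
data = rng.exponential(scale=2.0, size=500)     # stand-in "collected" data

# Build a histogram and fit an exponential model (location fixed at 0).
counts, edges = np.histogram(data, bins=10)
loc, scale = stats.expon.fit(data, floc=0)      # MLE of the scale parameter

# Expected count in each bin = n * P(bin) under the fitted distribution
probs = np.diff(stats.expon.cdf(edges, loc=loc, scale=scale))
expected = len(data) * probs
for lo, hi, obs, exp in zip(edges, edges[1:], counts, expected):
    print(f"[{lo:5.2f}, {hi:5.2f})  observed={obs:3d}  expected={exp:6.1f}")
```

Large gaps between the observed and expected columns suggest a poor candidate family; a formal Chi-square goodness-of-fit test quantifies that comparison.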
Thursday, October 14, 2021
Lecture G1 (2021-10-14): Input Modeling, Part 1
In this lecture, we introduce the 3-lecture unit on "Input Modeling." We start with motivations from thinking about stochastic simulation models and then describe the potential problems that can occur in collecting data. We close with a set of rules that can be helpful to follow when collecting data. We will start on choosing probabilistic families, parameterizing them, and testing goodness of fit in the next lecture, extending over the lecture after that.