Archived lectures from an undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Thursday, October 31, 2019
Lecture I: Statistical Reflections (2019-10-31) – Halloween Themed
Discussion of error rates and statistical power in hypothesis testing, along with a deeper investigation into how the Student's t-test and the Chi-square test work and why they require the assumptions they do. An example paired-difference t-test with power analysis is worked through, and then the lecture closes with a discussion of the multiple comparisons problem (applied to simulation problems) and tools, such as the Bonferroni correction, that can be used to prevent "statistical fishing."
Many Halloween-inspired examples are given during the lecture, and the relationship between ghost busting and statistics is exploited.
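For readers who want to experiment with these ideas, here is a minimal Python sketch (using NumPy and SciPy, with made-up paired data; it is not the worked example from the lecture) of a paired-difference t-test repeated over several comparisons and then adjusted with a Bonferroni correction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(475)  # hypothetical data for illustration only

# Three paired comparisons (e.g., three simulation scenarios vs. a baseline)
p_values = []
for scenario in range(3):
    baseline = rng.normal(10.0, 2.0, size=20)
    variant = baseline + rng.normal(0.5, 1.0, size=20)  # paired measurements
    t_stat, p = stats.ttest_rel(variant, baseline)      # paired-difference t-test
    p_values.append(p)

# Bonferroni correction: compare each p-value against alpha / (number of tests)
alpha = 0.05
m = len(p_values)
for i, p in enumerate(p_values):
    reject = p < alpha / m
    print(f"Scenario {i}: p = {p:.4f}, reject H0 at level {alpha/m:.4f}? {reject}")
```

The Bonferroni correction simply tests each comparison at level alpha/m, which keeps the family-wise error rate at or below alpha at the cost of some statistical power.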
Tuesday, October 29, 2019
Lecture H: Output Verification, Validation, and Calibration (2019-10-29)
This lecture covers testing, validation, verification, and the general process of simulation model calibration. Specific quantitative topics include power analysis of a one-sample, two-tailed t-test as well as the application of a paired t-test for assessing the validity of a simulation model against data from a real system.
There is a point near the end of the lecture where I accidentally refer to an OC curve as plotting effect size versus statistical power. I meant to say that it plots effect size versus the false-negative rate (beta, or type-II error). This is fixed in the posted slides for the class.
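For reference, here is a minimal Python sketch (using SciPy's noncentral t distribution; the sample size and alpha are arbitrary assumptions, not values from the lecture) that traces out an OC-style curve by computing beta, the type-II error rate of a one-sample, two-tailed t-test, as a function of standardized effect size:

```python
import numpy as np
from scipy import stats

n = 25          # assumed sample size
alpha = 0.05    # assumed significance level
df = n - 1
t_crit = stats.t.ppf(1 - alpha / 2, df)    # two-tailed critical value

# OC curve: beta (type-II error rate) versus standardized effect size d
for d in np.linspace(0.1, 1.0, 10):
    ncp = d * np.sqrt(n)                   # noncentrality parameter
    # Probability of failing to reject H0 when the true effect size is d
    beta = stats.nct.cdf(t_crit, df, ncp) - stats.nct.cdf(-t_crit, df, ncp)
    print(f"d = {d:.1f}: beta = {beta:.3f}, power = {1 - beta:.3f}")
```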
Thursday, October 24, 2019
Lecture G3: Input Modeling, Part 3 (2019-10-24)
Part 3 of a 3-part lecture series on input modeling for stochastic simulation. This lecture describes point estimation of parameters by maximum likelihood as well as the use of goodness-of-fit techniques (Chi-square, Kolmogorov-Smirnov, Anderson-Darling, and Shapiro-Wilk) to evaluate those best fits. It also includes a general discussion about hypothesis testing and cautionary notes about p-values.
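As a rough sketch of that workflow (assuming SciPy and hypothetical exponential service-time data; this is not the lecture's own example), maximum-likelihood estimates can be obtained with a distribution's `fit` method and then screened with goodness-of-fit tests:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.exponential(scale=3.0, size=200)   # hypothetical service times

# Maximum-likelihood point estimate of the exponential scale (location fixed at 0)
loc, scale = stats.expon.fit(data, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
# Note: because the parameters were estimated from the same data, the reported
# p-value is optimistic; a corrected test (e.g., Lilliefors) is more appropriate.
ks_stat, p_value = stats.kstest(data, 'expon', args=(loc, scale))
print(f"MLE scale = {scale:.3f}, KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# A Shapiro-Wilk test is also available for checking normality of other inputs
w_stat, p_norm = stats.shapiro(rng.normal(size=50))
```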
Tuesday, October 22, 2019
Lecture G2: Input Modeling, Part 2 (2019-10-22)
Part 2 of a 3-part lecture series on input modeling for stochastic simulation. This lecture describes going from data to coarse logic flows and then using tools like probability plots to choose distributions for elements of those flows.
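As a minimal illustration of the probability-plot idea (assuming SciPy and Matplotlib, with made-up inter-arrival data rather than anything from the lecture), `scipy.stats.probplot` compares ordered data against the quantiles of a candidate distribution:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
interarrivals = rng.exponential(scale=2.0, size=100)   # hypothetical inter-arrival times

# Probability plot against an exponential candidate distribution
stats.probplot(interarrivals, dist=stats.expon, plot=plt)
plt.title("Probability plot: exponential candidate")
plt.show()
```

Points that hug the reference line suggest the candidate distribution is a reasonable choice; systematic curvature suggests trying a different family.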
Thursday, October 10, 2019
Lecture G1: Input Modeling, Part 1 (2019-10-10)
This lecture gives an overview of the process of input modeling and drills down into considerations that should be taken during the data collection process.
Thursday, October 3, 2019
Lecture F: Midterm Review (2019-10-03)
This lecture is intended as a midterm review, but much of the class covers inverse-transform sampling (both continuous and discrete).
During the lecture, questions were answered that involved whiteboard work, which was captured electronically in the two images below.
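As a quick sketch of the sampling method reviewed here (assuming NumPy; the exponential rate and the discrete probability mass function are arbitrary choices), inverse-transform sampling inverts the CDF of the target distribution at uniform random draws:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=5)                     # U(0, 1) draws

# Continuous case: exponential with rate lam, F^{-1}(u) = -ln(1 - u) / lam
lam = 0.5
exponential_samples = -np.log(1.0 - u) / lam

# Discrete case: invert the CDF by finding the first index where it reaches u
values = np.array([0, 1, 2, 3])
pmf = np.array([0.1, 0.4, 0.3, 0.2])
cdf = np.cumsum(pmf)
discrete_samples = values[np.searchsorted(cdf, u)]

print(exponential_samples)
print(discrete_samples)
```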
Tuesday, October 1, 2019
Lecture E2: Random-Variate Generation (2019-10-01)
In today's class, we revisited the tests of uniformity and independence necessary for random-number generation. We also started to formally introduce inverse-transform sampling. We will cover the discrete versions of inverse-transform sampling at the start of the next lecture.
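As a hedged illustration of those checks (assuming SciPy, with a NumPy generator standing in for the pseudo-random number stream under test), a chi-square test compares binned counts against their expected values, and a lag-1 autocorrelation gives a crude independence check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)
u = rng.uniform(size=1000)                 # stream of pseudo-random numbers under test

# Chi-square test of uniformity: observed bin counts vs. equal expected counts
k = 10
observed, _ = np.histogram(u, bins=k, range=(0.0, 1.0))
expected = np.full(k, len(u) / k)
chi2_stat, p_uniform = stats.chisquare(observed, expected)

# Crude independence check: lag-1 autocorrelation should be near zero
lag1 = np.corrcoef(u[:-1], u[1:])[0, 1]

print(f"chi-square p-value = {p_uniform:.3f}, lag-1 autocorrelation = {lag1:.3f}")
```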