Thursday, October 31, 2019

Lecture I: Statistical Reflections (2019-10-31) – Halloween Themed

Discussion of error rates and statistical power in hypothesis testing, along with a deeper investigation into how the Student's t-test and the Chi-square test work and why they require the assumptions they do. An example paired-difference t-test with power analysis is worked through, and the lecture then closes with a discussion of the multiple comparisons problem (applied to simulation problems) and tools, such as the Bonferroni correction, that can be used to prevent "statistical fishing."

Many Halloween-inspired examples are given during the lecture, and the relationship between ghost busting and statistics is exploited.
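To make the multiple comparisons idea concrete, here is a minimal sketch (not the lecture's example, and using made-up data) of how a Bonferroni-corrected significance level might be applied when comparing several simulated system configurations with pairwise t-tests:

```python
# Minimal sketch of a Bonferroni correction for multiple comparisons;
# the data are synthetic and not the example used in lecture.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pretend output from four simulated system configurations (e.g., mean wait times).
systems = {name: rng.normal(loc=10.0, scale=2.0, size=30) for name in "ABCD"}

# All pairwise two-sample t-tests.
names = list(systems)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]

alpha = 0.05
alpha_bonferroni = alpha / len(pairs)  # controls the family-wise error rate

for a, b in pairs:
    t_stat, p_value = stats.ttest_ind(systems[a], systems[b])
    significant = p_value < alpha_bonferroni
    print(f"{a} vs {b}: p = {p_value:.4f}, significant at corrected alpha: {significant}")
```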

Tuesday, October 29, 2019

Lecture H: Output Verification, Validation, and Calibration (2019-10-29)

This lecture covers testing, validation, verification, and the general process of simulation model calibration. Specific quantitative topics include power analysis of a one-sample, two-tailed t-test as well as the application of a paired t-test for analyzing the validity of a simulation model using data from a real system. There is a point near the end of the lecture where I accidentally refer to an OC curve as plotting effect size versus statistical power. I meant to say that it plots effect size versus the false-negative rate (beta, or type-II error). This is fixed in the posted slides for the class.
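For readers who want to try the kind of power analysis and paired-t validation discussed above, here is a rough sketch using scipy and statsmodels; the effect size, sample sizes, and data are invented for illustration rather than taken from the lecture:

```python
# Sketch: power analysis for a one-sample, two-tailed t-test and a paired
# t-test comparing simulation output to real-system data (synthetic numbers).
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Power of detecting a medium effect (Cohen's d = 0.5) with n = 25 paired differences.
power = TTestPower().power(effect_size=0.5, nobs=25, alpha=0.05, alternative="two-sided")
print(f"Power at n = 25, d = 0.5: {power:.3f}")

# Sample size needed to reach 90% power for the same effect.
n_needed = TTestPower().solve_power(effect_size=0.5, power=0.9, alpha=0.05, alternative="two-sided")
print(f"n needed for 90% power: {n_needed:.1f}")

# Paired t-test for validation: simulated vs. observed performance on matched scenarios.
rng = np.random.default_rng(0)
real = rng.normal(12.0, 1.5, size=25)
simulated = real + rng.normal(0.2, 1.0, size=25)  # simulated output with a small built-in bias
t_stat, p_value = stats.ttest_rel(simulated, real)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```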

Thursday, October 24, 2019

Lecture G3: Input Modeling, Part 3 (2019-10-24)

Part 3 of a 3-part lecture series on input modeling for stochastic simulation. This lecture describes point estimation of parameters by maximum likelihood as well as the use of goodness-of-fit techniques (Chi-square, Kolmogorov-Smirnov, Anderson-Darling, and Shapiro-Wilk) to evaluate those best fits. It also includes a general discussion about hypothesis testing and cautionary notes about p-values.
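A small sketch of what this looks like in practice is shown below, using scipy and synthetic service-time data rather than the lecture's dataset; scipy's fit() method performs maximum likelihood estimation, and kstest(), anderson(), and shapiro() provide goodness-of-fit checks:

```python
# Sketch of fitting a distribution by maximum likelihood and checking the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.gamma(shape=2.0, scale=3.0, size=200)  # pretend these are observed service times

# scipy's fit() computes maximum likelihood estimates of the parameters.
shape, loc, scale = stats.gamma.fit(data, floc=0)
print(f"MLE estimates: shape = {shape:.2f}, scale = {scale:.2f}")

# Kolmogorov-Smirnov test against the fitted gamma distribution.
# (Note: estimating the parameters from the same data makes the standard
# K-S p-value optimistic.)
ks_stat, ks_p = stats.kstest(data, "gamma", args=(shape, loc, scale))
print(f"K-S: D = {ks_stat:.3f}, p = {ks_p:.3f}")

# Anderson-Darling and Shapiro-Wilk are also available, e.g. as normality checks.
print(stats.anderson(data, dist="norm"))
print(stats.shapiro(data))
```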

Tuesday, October 22, 2019

Lecture G2: Input Modeling, Part 2 (2019-10-22)

Part 2 of a 3-part lecture series on input modeling for stochastic simulation. This lecture describes going from data to coarse logic flows and then using tools like probability plots to choose distributions for elements of those flows.
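As a rough illustration of the probability-plot idea (using synthetic interarrival data assumed to be roughly exponential, not anything from the lecture):

```python
# Minimal sketch of a probability (Q-Q) plot for eyeballing a candidate distribution.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
interarrival_times = rng.exponential(scale=4.0, size=150)  # pretend field data

# If the exponential assumption is reasonable, points fall close to the reference line.
stats.probplot(interarrival_times, dist=stats.expon, plot=plt)
plt.title("Exponential probability plot of interarrival times")
plt.show()
```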

Thursday, October 10, 2019

Lecture G1: Input Modeling, Part 1 (2019-10-10)

This lecture gives an overview of the process of input modeling and drills down into considerations that should be kept in mind during the data-collection process.

Thursday, October 3, 2019

Lecture F: Midterm Review (2019-10-03)

This lecture is intended as a midterm review, but much of the time is spent on inverse-transform sampling (both continuous and discrete).
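For reference, a minimal sketch of inverse-transform sampling in both the continuous and discrete cases (illustrative only, not taken from the whiteboard work) might look like this:

```python
# Inverse-transform sampling: continuous (exponential) and discrete (table lookup).
import numpy as np

rng = np.random.default_rng(1)
u = rng.uniform(size=10_000)

# Continuous case: exponential with rate lambda, by inverting F(x) = 1 - exp(-lambda * x).
lam = 0.5
exponential_samples = -np.log(1.0 - u) / lam

# Discrete case: invert the CDF with a table lookup.
values = np.array([0, 1, 2, 3])
probabilities = np.array([0.1, 0.4, 0.3, 0.2])
cdf = np.cumsum(probabilities)
discrete_samples = values[np.searchsorted(cdf, rng.uniform(size=10_000))]

print(exponential_samples.mean())              # should be near 1 / lambda = 2
print(np.bincount(discrete_samples) / 10_000)  # should be near the target probabilities
```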

During the lecture, questions were answered that involved whiteboard work. That whiteboard work was captured electronically in the two following images.

Tuesday, October 1, 2019

Lecture E2: Random-Variate Generation (2019-10-01)

In today's lecture, we revisited the tests of uniformity and independence necessary for random-number generation. We also started to formally introduce inverse-transform sampling. We will cover the discrete versions of inverse-transform sampling at the start of the next lecture.
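A quick sketch of what such uniformity and independence checks might look like in code, using scipy on numpy's default generator (the specific tests and sample sizes here are illustrative, not necessarily the ones used in lecture):

```python
# Uniformity and independence checks on a stream of pseudo-random numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)
u = rng.uniform(size=1_000)

# Uniformity: Kolmogorov-Smirnov test against Uniform(0, 1).
ks_stat, ks_p = stats.kstest(u, "uniform")
print(f"K-S uniformity: D = {ks_stat:.3f}, p = {ks_p:.3f}")

# Uniformity: chi-square test on counts in 10 equal-width bins.
observed, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
chi2_stat, chi2_p = stats.chisquare(observed)
print(f"Chi-square uniformity: stat = {chi2_stat:.2f}, p = {chi2_p:.3f}")

# Independence: lag-1 autocorrelation should be near zero for an independent stream.
lag1 = np.corrcoef(u[:-1], u[1:])[0, 1]
print(f"Lag-1 autocorrelation: {lag1:.4f}")
```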
