IEE 475: Simulating Stochastic Systems
Archived lectures from an undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Tuesday, December 3, 2024
In this lecture, we prepare for the final exam and give a brief review of all topics from the course. Students are encouraged to bring their own questions so that the focus of the class is on the topics that students feel they need the most help with.
Tuesday, November 26, 2024
Lecture L (2024-11-26): Course Wrap-Up
In this lecture, we wrap up the course content in IEE 475. We first do a quick overview of the four variance reduction techniques (VRTs) covered in Unit K: common random numbers (CRNs), antithetic variates (AVs), importance sampling, and control variates. We then revisit some general comments about the goal of modeling and the commonalities seen across simulation platforms (as well as the different types of simulation platforms in general).
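As a quick refresher on one of these techniques, here is a minimal Python sketch of control variates on a toy Monte Carlo problem (the integrand, sample size, and coefficient estimate are illustrative assumptions, not an example from the course): estimating E[exp(U)] for U ~ Uniform(0, 1) while exploiting the fact that E[U] = 0.5 is known exactly.

```python
# Illustrative control-variates sketch (toy problem, not from the lecture).
# Goal: estimate E[exp(U)], U ~ Uniform(0, 1), using U itself as the
# control variate because its mean E[U] = 0.5 is known exactly.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
u = rng.uniform(size=n)

y = np.exp(u)   # response whose mean we want to estimate
x = u           # control variate with known mean 0.5

# Optimal coefficient b* = Cov(Y, X) / Var(X), estimated from the sample
b_star = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)

y_cv = y - b_star * (x - 0.5)   # controlled estimator

print("crude estimate:     ", y.mean(), " sample var:", y.var(ddof=1))
print("controlled estimate:", y_cv.mean(), " sample var:", y_cv.var(ddof=1))
```

The controlled sample has a noticeably smaller variance than the crude one, so the same number of replications yields a tighter confidence interval.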
Thursday, November 21, 2024
Lecture K2 (2024-11-21): Variance Reduction Techniques, Part 2 (AVs and Importance Sampling)
In this lecture, we review four different variance reduction techniques (VRTs). Namely, we discuss common random numbers (CRNs), control variates, antithetic variates (AVs), and importance sampling. Each of these is a different approach to reducing the variance in the estimation of relative or absolute performance of a simulation model. Variance reduction is an alternative way to increase the power of a simulation study that is hopefully less costly than increasing the number of replications.
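For a concrete (toy) picture of antithetic variates, the following Python sketch estimates E[exp(U)] with U ~ Uniform(0, 1); because exp is monotone, exp(U) and exp(1 - U) are negatively correlated, so averaging them within a pair reduces the estimator's variance relative to crude Monte Carlo at the same sampling budget. The integrand and sample sizes are illustrative assumptions, not taken from the lecture.

```python
# Illustrative antithetic-variates sketch (toy problem).
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 5_000

u = rng.uniform(size=n_pairs)
y_plain = np.exp(rng.uniform(size=2 * n_pairs))    # crude MC, same total budget
y_av = 0.5 * (np.exp(u) + np.exp(1.0 - u))         # one value per antithetic pair

print("crude MC:      mean %.4f, estimator var %.2e"
      % (y_plain.mean(), y_plain.var(ddof=1) / (2 * n_pairs)))
print("antithetic MC: mean %.4f, estimator var %.2e"
      % (y_av.mean(), y_av.var(ddof=1) / n_pairs))
```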
Tuesday, November 19, 2024
Lecture K1 (2024-11-19): Variance Reduction Techniques, Part 1 (CRNs and Control Variates)
In this lecture, we start by reviewing approaches for absolute and relative performance estimation in stochastic simulation. This begins with a reminder of the use of confidence intervals for estimation of performance for a single simulation model. We then move to different ways to use confidence intervals on mean DIFFERENCES to compare two different simulation models. We then move to the ranking and selection problem for three or more different simulation models, which allows us to talk about analysis of variance (ANOVA) and post hoc tests (like the Tukey HSD or Fisher's LSD). After that review, we move on to introducing variance reduction techniques (VRTs), which reduce the size of confidence intervals by experimentally controlling/accounting for alternative sources of variance (and thus reducing the observed variance in response variables). We discuss Common Random Numbers (CRNs), which use a paired/blocked design to reduce the variance caused by different random-number streams. We start to discuss control variates (CVs), but that discussion will be picked up at the start of the next lecture.
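To make the paired/blocked idea behind CRNs concrete, here is a small illustrative Python sketch (the single-server queue, the arrival/service rates, and the replication counts are all made-up assumptions, not the course's example): both system configurations are driven by the same uniform random-number streams via inverse-transform sampling, so each replication yields a paired difference whose variance is smaller than with independent streams.

```python
# Illustrative common-random-numbers (CRN) sketch for comparing two
# configurations of a simple single-server queue (parameters are made up).
import numpy as np

def avg_wait(service_rate, u_arrive, u_serve):
    """Average waiting time via the Lindley recursion, driven by shared uniforms."""
    inter = -np.log(u_arrive)                  # interarrival times, rate 1.0
    serv = -np.log(u_serve) / service_rate     # service times
    w, total = 0.0, 0.0
    for a, s in zip(inter, serv):
        total += w
        w = max(0.0, w + s - a)
    return total / len(inter)

rng = np.random.default_rng(2024)
n_reps, n_cust = 20, 1000
diffs = []
for _ in range(n_reps):
    u_a = rng.uniform(size=n_cust)    # the SAME streams feed both systems
    u_s = rng.uniform(size=n_cust)
    diffs.append(avg_wait(1.25, u_a, u_s) - avg_wait(1.5, u_a, u_s))

diffs = np.asarray(diffs)
print("mean paired difference:", diffs.mean(), " sample std:", diffs.std(ddof=1))
```

A paired-difference confidence interval built from these differences is then centered on the mean difference, exactly as in the two-system comparisons reviewed above.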
Thursday, November 14, 2024
Lecture J4 (2024-11-14): Estimation of Relative Performance
In this lecture, we review what we have learned about one-sample confidence intervals (i.e., how to use them as graphical versions of one-sample t-tests) for absolute performance estimation in order to motivate the problem of relative performance estimation. We introduce two-sample confidence intervals (i.e., confidence intervals on DIFFERENCES based on different two-sample t-tests) that are tested against a null hypothesis of 0. This means covering confidence interval half widths for the paired-difference t-test, the equal-variance (pooled) t-test, and Welch's unequal variance t-test. Each of these different experimental conditions sets up a different standard error of the mean formula and formula for degrees of freedom that are used to define the actual confidence interval half widths (centered on the difference in sample means in the pairwise comparison of systems). We then generalize to the case of more than 2 systems, particularly for "ranking and selection (R&S)." This lets us review the multiple-comparisons problem (and Bonferroni correction) and how post hoc tests (after an ANOVA) are more statistically powerful ways to do comparisons.
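As a worked example of one of these intervals, the following Python sketch (with made-up data) computes a 95% confidence interval on a mean difference using Welch's unequal-variance formulation, where the half-width is t_{df, 1-alpha/2} * sqrt(s1^2/n1 + s2^2/n2) and df comes from the Welch-Satterthwaite equation.

```python
# Illustrative Welch (unequal-variance) confidence interval on a mean difference.
# The replication data below are simulated placeholders, not course data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y1 = rng.normal(10.0, 2.0, size=15)   # replications from "system 1" (made up)
y2 = rng.normal(9.0, 3.0, size=12)    # replications from "system 2" (made up)

n1, n2 = len(y1), len(y2)
v1, v2 = y1.var(ddof=1), y2.var(ddof=1)
se = np.sqrt(v1 / n1 + v2 / n2)
df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))

alpha = 0.05
half_width = stats.t.ppf(1 - alpha / 2, df) * se
diff = y1.mean() - y2.mean()
print(f"95% CI on difference: {diff:.3f} +/- {half_width:.3f}")
# If this interval excludes 0, the two systems differ at the 5% level.
```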
Tuesday, November 12, 2024
Lecture J3 (2024-11-12): Estimation of Absolute Performance, Part III: Non-Terminating Systems/Steady-State Simulations
In this lecture, we start by further reviewing confidence intervals (where they come from and what they mean) and prediction intervals and then use them to motivate a simpler way to determine how many replications are needed in a simulation study (focusing first on transient simulations of terminating systems). We then shift our attention to steady-state simulations of non-terminating systems and the issue of initialization bias. We discuss different methods of "warming up" a steady-state simulation to reduce initialization bias and then merge that discussion with the prior discussion on how to choose the number of replications. In the next lecture, we'll finish up with a discussion of the method of "batch means" in steady-state simulations.
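One simple version of the replication-count rule described above can be sketched in Python as follows (the pilot data, target half-width, and confidence level are illustrative assumptions): keep increasing the number of replications R until the estimated confidence-interval half-width t_{R-1, 1-alpha/2} * s / sqrt(R) drops below the desired precision epsilon.

```python
# Illustrative half-width rule for choosing the number of replications.
import numpy as np
from scipy import stats

def required_replications(pilot, epsilon, alpha=0.05):
    """Smallest R whose estimated CI half-width is <= epsilon, using the pilot variance."""
    s = np.std(pilot, ddof=1)
    r = len(pilot)
    while stats.t.ppf(1 - alpha / 2, r - 1) * s / np.sqrt(r) > epsilon:
        r += 1
    return r

rng = np.random.default_rng(7)
pilot = rng.normal(50.0, 8.0, size=10)    # 10 pilot replications (made-up data)
print("replications needed for half-width <= 2.0:",
      required_replications(pilot, epsilon=2.0))
```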
Thursday, November 7, 2024
Lecture J2 (2024-11-07): Estimation of Absolute Performance, Part II: Terminating Systems/Transient Simulations
In this lecture, we review estimating absolute performance from simulation, with a focus on choosing the number of necessary replications of transient simulations of terminating systems. The lecture starts by overviewing point estimation, bias, and different types of point estimators. This includes an overview of quantile estimation and how to use it to turn simulations into null-hypothesis-prediction generators. We then introduce interval estimation with confidence intervals and prediction intervals. Confidence intervals, which are visualizations of t-tests, provide an alternative way to choose the number of required replications without doing a formal power analysis.
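As a small illustration of the point and interval estimators mentioned above, here is a Python sketch on made-up simulation output: an empirical 95th percentile used as a null-hypothesis prediction bound, plus a t-based prediction interval for a single future replication. The distribution and sample size are illustrative assumptions, not from the lecture.

```python
# Illustrative quantile estimation and prediction interval on simulated output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = rng.gamma(shape=2.0, scale=5.0, size=200)   # 200 simulated responses (made up)

# Empirical 95th percentile, usable as a null-hypothesis prediction bound
# for a future observation of the real system.
q95 = np.quantile(y, 0.95)

# A t-based 95% prediction interval for a single future replication.
n, m, s = len(y), y.mean(), y.std(ddof=1)
hw = stats.t.ppf(0.975, n - 1) * s * np.sqrt(1 + 1 / n)
print(f"95th percentile: {q95:.2f}")
print(f"95% prediction interval: ({m - hw:.2f}, {m + hw:.2f})")
```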