In this wrap-up lecture, we finish the treatment of Variance Reduction Techniques (VRTs) for stochastic simulation. We cover (or review) Common Random Numbers (CRNs), Control Variates (CVs), Antithetic Variates (AVs), and Importance Sampling. The lecture ends with some brief big-picture comments about the lifelong learning process of simulation modeling.
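Of the techniques above, importance sampling is perhaps the easiest to see in a few lines of code. The sketch below is illustrative only (the rare-event setup and the N(4, 1) proposal are my own choices, not course material): to estimate the tail probability P(Z > 4) for a standard normal, we sample from a proposal centered on the rare region and reweight each draw by the likelihood ratio between the target and proposal densities.

```python
# Importance-sampling sketch: estimate P(Z > 4) for Z ~ N(0, 1).
# Naive Monte Carlo almost never lands in the tail; sampling from a
# proposal N(4, 1) and reweighting by the likelihood ratio f(x)/g(x)
# concentrates simulation effort where the rare event happens.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(loc=4.0, scale=1.0, size=n)              # draws from proposal g
f = np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)          # target density N(0, 1)
g = np.exp(-(x - 4.0)**2 / 2) / math.sqrt(2 * math.pi)  # proposal density N(4, 1)
estimate = np.mean((x > 4.0) * (f / g))

true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # exact tail probability, for comparison
print(estimate, true_p)
```

A naive estimator with the same number of draws would see only a handful of tail hits (P ≈ 3.2e-5), whereas the reweighted estimator is accurate to within a few percent.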
Archived lectures from an undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Tuesday, November 24, 2020
Thursday, November 19, 2020
Lecture K2 (2020-11-19): Variance Reduction Techniques, Part 2 - AVs and Importance Sampling
In this lecture, we review different forms of Variance Reduction Techniques (VRTs) for stochastic simulation, which attempt to re-design simulation experiments to control for sources of variance and thus increase statistical power when making an estimate with a small number of replications. We start with common random numbers (CRNs) and Control Variates. We then pivot to discussing Antithetic Variates. We had also planned to cover Importance Sampling, but due to time constraints that topic is saved for the next lecture period.
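For a concrete feel for the antithetic-variates idea, here is a minimal sketch (the integrand exp(U) and the sample sizes are my own illustrative choices, not course data): each uniform draw u is paired with its antithetic partner 1 − u, and because exp is monotone the pair is negatively correlated, shrinking the variance of the pair-mean estimator relative to independent draws.

```python
# Antithetic-variates sketch: estimate E[exp(U)], U ~ Uniform(0, 1),
# whose true value is e - 1 ≈ 1.71828. Pairing each draw u with 1 - u
# induces negative correlation within each pair, so the mean of the
# pair means has much lower variance than the crude estimator.
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 50_000

u = rng.uniform(size=n_pairs)
crude = np.exp(rng.uniform(size=2 * n_pairs))  # same budget: 2n independent draws
pairs = (np.exp(u) + np.exp(1.0 - u)) / 2.0    # antithetic pair means

print(pairs.mean(), crude.mean())              # both near e - 1
print(pairs.var(ddof=1), crude.var(ddof=1))    # pair-mean variance is far smaller
```

Here the per-pair variance is roughly 0.004 versus roughly 0.24 per independent draw, so even accounting for the two samples per pair, the antithetic estimator wins handily.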
Tuesday, November 17, 2020
Lecture K1 (2020-11-17): Variance Reduction Techniques, Part 1 - CRN's and Control Variates
This lecture primarily finishes the coverage of estimation of relative performance by walking through the three different 2-sample mean tests (paired-difference t-test, pooled-variance t-test, and Welch's unpooled-variance t-test) and the assumptions required to use them. Confidence intervals for each of the mean differences are defined, requiring formulas for standard error of the mean and degrees of freedom for each of the three experimental cases. We also briefly discuss how to extend this to more than 2 systems (with ANOVAs and post-hoc tests), but due to time we shift into an introduction of the related topic of Variance Reduction Techniques (VRTs). We introduce common random numbers (CRNs), but the full discussion of CRNs and Control Variates will be saved until the next lecture.
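The three tests above map directly onto SciPy calls, as in the sketch below (the data are made up solely to show the API, not course data; pairing two independent samples as if they came from a CRN design is also an assumption for illustration):

```python
# Sketch of the three 2-sample mean tests: paired-difference,
# pooled-variance, and Welch's unpooled-variance t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
a = rng.normal(10.0, 2.0, size=30)  # replications of system A (illustrative)
b = rng.normal(11.0, 2.0, size=30)  # replications of system B (illustrative)

paired = stats.ttest_rel(a, b)                   # paired-difference t-test (e.g., under CRN)
pooled = stats.ttest_ind(a, b, equal_var=True)   # pooled-variance t-test
welch = stats.ttest_ind(a, b, equal_var=False)   # Welch's unpooled-variance t-test
print(paired.pvalue, pooled.pvalue, welch.pvalue)
```

A small aside: with equal sample sizes, the pooled and Welch statistics coincide and only the degrees of freedom differ, which is one reason Welch's test is a safe default.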
Friday, November 13, 2020
Lecture J4 (2020-11-12): Estimation of Relative Performance
In this lecture, we move from estimation of absolute performance from simulation studies to estimation of relative performance. We start with connecting confidence intervals with linear regression, as an alternative application of one-sample confidence intervals. We review the use of one-sample confidence intervals for relative performance estimation, and then we pivot to discussing the visualization of 2-sample tests with confidence intervals.
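The interval-based view of a 2-sample comparison mentioned above can be sketched in a few lines (the data and the pooled-degrees-of-freedom choice here are my own illustrative assumptions):

```python
# Sketch: a 95% confidence interval for a difference in means. If the
# interval excludes 0, the two systems differ at the 5% level; the CI
# also shows the magnitude of the difference, unlike a bare p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
a = rng.normal(10.0, 2.0, size=25)  # system A replications (illustrative)
b = rng.normal(12.0, 2.0, size=25)  # system B replications (illustrative)

diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
t_crit = stats.t.ppf(0.975, df=len(a) + len(b) - 2)  # pooled df, for simplicity
ci = (diff - t_crit * se, diff + t_crit * se)
print(ci)
```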
Wednesday, November 11, 2020
Lecture J3 (2020-11-10): Estimation of Absolute Performance, Part 3
This lecture continues to discuss issues related to estimating absolute performance from transient and steady-state simulations (of terminating and non-terminating systems, respectively). We continue to emphasize the importance and utility of interval estimations (over point estimates). We then move on to discuss experimental methodologies useful for steady-state simulations, particularly related to eliminating estimator bias and reducing computational time.
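The two steady-state methodologies mentioned above, warm-up deletion (to reduce initialization bias) and batch means (to get nearly independent observations from one long run), can be sketched together; the AR(1) output series, warm-up length, and batch count below are all my own illustrative assumptions:

```python
# Batch-means sketch for a steady-state output series: delete a warm-up
# prefix, split the remainder into batches, and treat the batch means
# as (approximately) independent replications.
import numpy as np

rng = np.random.default_rng(5)

# Fake autocorrelated output: AR(1) around a steady-state mean of 10,
# deliberately started far from steady state (at 0) to create bias.
y = np.empty(10_000)
y[0] = 0.0
for t in range(1, len(y)):
    y[t] = 10.0 + 0.9 * (y[t - 1] - 10.0) + rng.normal()

warmup = 1_000                            # deleted to reduce initialization bias
batches = np.array_split(y[warmup:], 30)  # 30 batches of 300 observations each
batch_means = np.array([b.mean() for b in batches])
est = batch_means.mean()
sem = batch_means.std(ddof=1) / np.sqrt(len(batch_means))
print(est, sem)
```

The batch-mean SEM accounts (approximately) for the autocorrelation that a naive SEM over raw observations would badly understate.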
Thursday, November 5, 2020
Lecture J2 (2020-11-05): Estimation of Absolute Performance, Part 2
In this lecture, we continue our discussion of the use of performance estimation strategies for absolute performance (particularly in the case of transient simulation models of terminating systems). We review the sources of variation within and across replications in a simulation study, followed by a definition of common point estimators (for mean as well as quantile estimation), and then we define measures of estimator variance (e.g., standard error of the mean, SEM). That allows us to introduce interval estimation, particularly confidence interval estimation, which represents the interval of hypotheses that would fail to be rejected by a one-sample t-test. We discuss interval estimation in terms of quantile estimation as well. Finally, we conclude with a method of using constraints on interval half width to guide how many simulation replications are needed for a given simulation study.
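The half-width-driven replication count at the end of that summary can be sketched as follows (the pilot data, target half-width, and the common normal-approximation rule n* ≈ (z·s/h*)² are my own illustrative assumptions):

```python
# Sketch: use a pilot sample's CI half-width, h = t * s / sqrt(n), to
# decide how many replications are needed to hit a target half-width h*.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pilot = rng.normal(50.0, 5.0, size=20)  # pilot replications (illustrative)

n = len(pilot)
s = pilot.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)
half_width = t_crit * s / np.sqrt(n)    # 95% CI half-width from the pilot

target = 1.0                            # desired half-width (assumption)
z = stats.norm.ppf(0.975)
n_needed = int(np.ceil((z * s / target) ** 2))  # normal-approximation rule
print(half_width, n_needed)
```

In practice one iterates: run the extra replications, re-estimate s, and check whether the achieved half-width meets the target.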
Tuesday, November 3, 2020
Lecture J1 (2020-11-03): Estimation of Absolute Performance, Part 1
In this lecture, we continue to discuss hypothesis testing -- introducing parametric, non-parametric, exact, and non-exact tests and reviewing the assumptions behind many popular parametric tests (like the t-test and ANOVA) and non-exact tests (like the Chi-square test). We then move to discuss the multiple-comparisons problem ("fishing") and the Bonferroni correction. We end with an introduction to the estimation of absolute performance in simulated systems.
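The Bonferroni correction mentioned above is simple enough to show directly (the p-values below are hypothetical, chosen only to illustrate the mechanics):

```python
# Bonferroni-correction sketch: when testing m hypotheses while
# controlling the family-wise error rate at alpha, each individual
# test is run at level alpha / m.
m = 5          # number of comparisons (illustrative)
alpha = 0.05   # desired family-wise error rate
per_test_alpha = alpha / m  # each test now uses 0.01

p_values = [0.004, 0.020, 0.009, 0.300, 0.012]   # hypothetical p-values
rejected = [p < per_test_alpha for p in p_values]
print(per_test_alpha, rejected)
```

Note that 0.020 and 0.012 would be "significant" at the uncorrected 0.05 level but survive neither the correction nor, therefore, the family-wise guarantee; that is exactly the "fishing" the correction guards against.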