In this lecture, we review what we have learned about one-sample confidence intervals (i.e., how to use them as graphical versions of one-sample t-tests) for absolute performance estimation in order to motivate the problem of relative performance estimation. We introduce two-sample confidence intervals (i.e., confidence intervals on DIFFERENCES based on different two-sample t-tests) that are tested against a null hypothesis of 0. This means covering confidence-interval half widths for the paired-difference t-test, the equal-variance (pooled) t-test, and Welch's unequal-variance t-test. Each of these experimental conditions leads to a different standard-error formula and a different degrees-of-freedom formula, which together define the confidence-interval half width (centered on the difference in sample means for the pairwise comparison of systems). We then generalize to the case of more than two systems, particularly for "ranking and selection (R&S)." This lets us review the multiple-comparisons problem (and the Bonferroni correction) and how post hoc tests (after an ANOVA) provide more statistically powerful ways to do comparisons.
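The three half-width formulas from the lecture can be sketched in Python. This is a minimal illustration, not course material: the data, sample sizes, and seed below are made up, and it assumes NumPy and SciPy are available. Each function returns a confidence interval on the difference in means; if 0 falls outside the interval, the corresponding two-sample t-test would reject the null hypothesis of no difference.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(475)

# Hypothetical performance observations from two simulated systems
# (10 replications each); purely illustrative data.
x = rng.normal(loc=50.0, scale=4.0, size=10)
y = rng.normal(loc=46.0, scale=6.0, size=10)


def welch_ci(x, y, alpha=0.05):
    """CI on mean(x) - mean(y) using Welch's unequal-variance t-test."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch–Satterthwaite approximate degrees of freedom
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
    )
    half_width = stats.t.ppf(1 - alpha / 2, df) * se
    center = np.mean(x) - np.mean(y)
    return center - half_width, center + half_width


def pooled_ci(x, y, alpha=0.05):
    """CI on mean(x) - mean(y) using the equal-variance (pooled) t-test."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (
        nx + ny - 2
    )
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    half_width = stats.t.ppf(1 - alpha / 2, nx + ny - 2) * se
    center = np.mean(x) - np.mean(y)
    return center - half_width, center + half_width


def paired_ci(x, y, alpha=0.05):
    """CI on mean(x - y) using the paired-difference t-test.

    Requires equal sample sizes with replications paired across systems
    (e.g., via common random numbers).
    """
    d = np.asarray(x) - np.asarray(y)
    n = len(d)
    se = np.std(d, ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(1 - alpha / 2, n - 1) * se
    return d.mean() - half_width, d.mean() + half_width


for name, (lo, hi) in [("Welch", welch_ci(x, y)),
                       ("Pooled", pooled_ci(x, y)),
                       ("Paired", paired_ci(x, y))]:
    verdict = ("0 outside CI -> significant difference"
               if lo > 0 or hi < 0
               else "0 inside CI -> no significant difference")
    print(f"{name:6s}: ({lo:.2f}, {hi:.2f})  {verdict}")
```

For more than two systems, the same intervals apply pairwise; with k systems there are k(k-1)/2 comparisons, so a Bonferroni correction would replace `alpha` with `alpha / (k * (k - 1) / 2)` in each call, at the cost of wider (less powerful) intervals than an ANOVA followed by a post hoc test.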
Archived lectures from undergraduate course on stochastic simulation given at Arizona State University by Ted Pavlic
Thursday, November 10, 2022
Lecture J4 (2022-11-10): Estimation of Relative Performance
Labels: podcast
Location: Tempe, AZ, USA