One-Way Analysis of Variance (ANOVA)#
The one-way ANOVA (Analysis of Variance) tests whether the means of three or more independent groups differ significantly. It is the extension of the independent samples t-test to more than two groups.
When to Use#
Use the one-way ANOVA when:
- You want to compare the means of three or more independent groups
- The dependent variable is metric (continuous)
- The data in each group are approximately normally distributed
- The variances across groups are approximately equal (homoscedasticity)
Important: The ANOVA only tests whether any difference exists between the groups (omnibus test). It does not indicate which groups differ. Post-hoc tests are required for this (e.g., Tukey HSD, Bonferroni).
Assumptions#
- Independence of observations (between and within groups)
- Metric scale of the dependent variable
- Normal distribution in each group (Shapiro-Wilk test per group)
- Homogeneity of variance (Levene's test); if violated, use Welch's ANOVA
Note: The ANOVA is relatively robust against mild violations of the normality assumption, especially with large and equal sample sizes. For severe violations, the Kruskal-Wallis test is the appropriate nonparametric alternative.
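The assumption checks above can be sketched with SciPy; the samples below are simulated illustration data, and the group parameters are arbitrary. SciPy itself does not ship Welch's ANOVA, so that step is noted in a comment only:

```python
# Sketch of the assumption checks: Shapiro-Wilk per group, Levene's test,
# and the Kruskal-Wallis fallback. The three samples are made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(50, 8, 30), rng.normal(54, 8, 30), rng.normal(49, 8, 30)]

# Normality per group (Shapiro-Wilk); p < .05 suggests a violation
for i, g in enumerate(groups, start=1):
    w, p_sw = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk W = {w:.3f}, p = {p_sw:.3f}")

# Homogeneity of variance (Levene's test)
stat, p_levene = stats.levene(*groups)
print(f"Levene: W = {stat:.3f}, p = {p_levene:.3f}")

# If Levene is significant, switch to Welch's ANOVA (available in
# statsmodels or pingouin); for severe non-normality, use the
# nonparametric Kruskal-Wallis test instead:
h, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.3f}, p = {p_kw:.3f}")
```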
Formula#
The test statistic is based on the ratio of variance between groups to variance within groups:

$$F = \frac{MS_{\text{between}}}{MS_{\text{within}}}$$

The mean squares are calculated from the sums of squares:

$$MS_{\text{between}} = \frac{SS_{\text{between}}}{k - 1} = \frac{\sum_{j=1}^{k} n_j (\bar{x}_j - \bar{x})^2}{k - 1}$$

$$MS_{\text{within}} = \frac{SS_{\text{within}}}{N - k} = \frac{\sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2}{N - k}$$

where:
- $k$ is the number of groups
- $n_j$ is the size of the j-th group
- $N$ is the total sample size
- $\bar{x}_j$ is the mean of the j-th group
- $\bar{x}$ is the grand mean
Under the null hypothesis, the test statistic follows an F-distribution with $k - 1$ and $N - k$ degrees of freedom.
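As a sketch of the computation, the F statistic can be built from the sums of squares by hand and cross-checked against `scipy.stats.f_oneway`; the score lists are made-up illustration data:

```python
# Sketch: computing the one-way ANOVA F statistic from the sums of
# squares, then cross-checking against scipy.stats.f_oneway.
from scipy import stats

groups = [
    [23.0, 25.0, 21.0, 27.0, 24.0],   # group 1
    [30.0, 28.0, 32.0, 29.0, 31.0],   # group 2
    [26.0, 24.0, 28.0, 25.0, 27.0],   # group 3
]

k = len(groups)                      # number of groups
N = sum(len(g) for g in groups)      # total sample size
grand_mean = sum(sum(g) for g in groups) / N

# SS_between: weighted squared deviations of group means from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# SS_within: squared deviations of observations from their own group mean
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)
ms_within = ss_within / (N - k)
F = ms_between / ms_within
p = stats.f.sf(F, k - 1, N - k)      # right-tail p-value of the F-distribution

# Cross-check with SciPy's built-in one-way ANOVA
F_scipy, p_scipy = stats.f_oneway(*groups)
print(f"manual: F = {F:.3f}, p = {p:.4f}")
print(f"scipy:  F = {F_scipy:.3f}, p = {p_scipy:.4f}")
```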
Example#
Practical Example: Comparing Teaching Methods
An education researcher wants to compare three different teaching methods. They randomly assign 90 students to three groups:
- Group 1 (n=30): Traditional lecture
- Group 2 (n=30): Problem-based learning
- Group 3 (n=30): E-learning
At the end of the semester, all students take the same exam. The one-way ANOVA tests whether the mean exam scores of the three groups differ significantly.
If the result is significant (p < .05), post-hoc tests (e.g., Tukey HSD) follow to determine which specific groups differ from each other.
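A minimal sketch of this example in Python: the exam scores are simulated under assumed group means (not real data), and `scipy.stats.tukey_hsd` requires SciPy 1.8 or later:

```python
# Sketch of the teaching-methods example: omnibus one-way ANOVA followed
# by Tukey's HSD post-hoc test. Scores are simulated, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
lecture = rng.normal(65, 10, 30)   # Group 1: traditional lecture
pbl     = rng.normal(80, 10, 30)   # Group 2: problem-based learning
elearn  = rng.normal(72, 10, 30)   # Group 3: e-learning

# Omnibus test: do the three group means differ at all?
F, p = stats.f_oneway(lecture, pbl, elearn)
print(f"ANOVA: F(2, 87) = {F:.2f}, p = {p:.4f}")

# Only if the omnibus test is significant: which pairs differ?
if p < 0.05:
    res = stats.tukey_hsd(lecture, pbl, elearn)
    print(res)   # pairwise mean differences with adjusted p-values
```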
Effect Size#
Eta-squared ($\eta^2$) as a measure of effect size:

$$\eta^2 = \frac{SS_{\text{between}}}{SS_{\text{total}}}$$

Partial eta-squared ($\eta_p^2$) is commonly reported in practice:

$$\eta_p^2 = \frac{SS_{\text{between}}}{SS_{\text{between}} + SS_{\text{within}}}$$

In a one-way ANOVA the two coincide, since $SS_{\text{total}} = SS_{\text{between}} + SS_{\text{within}}$.
| Effect Size | η² |
|---|---|
| Small | 0.01 |
| Medium | 0.06 |
| Large | 0.14 |
Tip: Alternatively, omega-squared ($\omega^2$) can be reported, which provides a less biased estimate of the population effect size:

$$\omega^2 = \frac{SS_{\text{between}} - (k - 1)\,MS_{\text{within}}}{SS_{\text{total}} + MS_{\text{within}}}$$
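Both effect sizes can be sketched directly from the sums of squares; the score lists below are made-up illustration data:

```python
# Sketch: eta-squared and omega-squared from the ANOVA sums of squares.
groups = [
    [23.0, 25.0, 21.0, 27.0, 24.0],
    [30.0, 28.0, 32.0, 29.0, 31.0],
    [26.0, 24.0, 28.0, 25.0, 27.0],
]

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / N

ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
ss_total = ss_between + ss_within
ms_within = ss_within / (N - k)

eta_sq = ss_between / ss_total
# Omega-squared corrects eta-squared's upward bias in small samples
omega_sq = (ss_between - (k - 1) * ms_within) / (ss_total + ms_within)

print(f"eta^2 = {eta_sq:.3f}, omega^2 = {omega_sq:.3f}")
```

As expected, $\omega^2$ comes out somewhat smaller than $\eta^2$ on the same data.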
Further Reading#
- Fisher, R. A. (1925). Statistical Methods for Research Workers. Oliver and Boyd.
- Field, A. (2018). Discovering Statistics Using IBM SPSS Statistics (5th ed.). SAGE.
- Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum Associates.