
One-Way Analysis of Variance (ANOVA)

Compare means of three or more independent groups


The one-way ANOVA (Analysis of Variance) tests whether the means of three or more independent groups differ significantly. It is the extension of the independent samples t-test to more than two groups.

When to Use

Use the one-way ANOVA when:

  • You want to compare three or more independent groups on one dependent variable
  • The dependent variable is metric (continuous)
  • The data in all groups are approximately normally distributed
  • The variances across groups are approximately equal (homoscedasticity)

Important: The ANOVA only tests whether any difference exists between the groups (omnibus test). It does not indicate which groups differ. Post-hoc tests are required for this (e.g., Tukey HSD, Bonferroni).
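As an illustration, the omnibus test can be run with SciPy's `scipy.stats.f_oneway`. The three score arrays below are invented example data, not taken from any real study:

```python
import numpy as np
from scipy import stats

# Invented scores for three independent groups (illustration only)
group1 = [82, 75, 68, 90, 77, 73]
group2 = [88, 92, 79, 85, 95, 83]
group3 = [70, 65, 74, 60, 72, 68]

# Omnibus test: does at least one group mean differ from the others?
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant result here says only that *some* difference exists; which pairs of groups differ must then be checked with post-hoc tests.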

Assumptions

  • Independence of observations (between and within groups)
  • Metric scale of the dependent variable
  • Normal distribution in each group (Shapiro-Wilk test per group)
  • Homogeneity of variance (Levene's test) – if violated: Welch's ANOVA

Note: The ANOVA is relatively robust against mild violations of the normality assumption, especially with large and equal sample sizes. For severe violations, the Kruskal-Wallis test is the appropriate nonparametric alternative.
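A sketch of these assumption checks with SciPy (`shapiro`, `levene`, `kruskal`); the groups here are simulated normal data, so this only demonstrates the workflow. Welch's ANOVA itself is not part of SciPy; third-party packages such as pingouin provide it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Three simulated groups with equal spread (illustration only)
groups = [rng.normal(loc=m, scale=5, size=30) for m in (50, 52, 58)]

# Normality per group (Shapiro-Wilk): p > .05 -> no evidence against normality
for i, g in enumerate(groups, start=1):
    w, p_sw = stats.shapiro(g)
    print(f"Group {i}: W = {w:.3f}, p = {p_sw:.3f}")

# Homogeneity of variance (Levene's test)
stat, p = stats.levene(*groups)
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")

# Nonparametric fallback for severe violations: Kruskal-Wallis
h, p_kw = stats.kruskal(*groups)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p_kw:.4f}")
```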

Formula

The test statistic is based on the ratio of variance between groups to variance within groups:

F = \frac{MS_{between}}{MS_{within}}

The mean squares are calculated from the sums of squares:

MS_{between} = \frac{SS_{between}}{df_{between}} = \frac{\sum_{j=1}^{k} n_j (\bar{X}_j - \bar{X})^2}{k - 1}

MS_{within} = \frac{SS_{within}}{df_{within}} = \frac{\sum_{j=1}^{k} \sum_{i=1}^{n_j} (X_{ij} - \bar{X}_j)^2}{N - k}

where:

  • k is the number of groups
  • n_j is the size of the j-th group
  • N is the total sample size
  • \bar{X}_j is the mean of the j-th group
  • \bar{X} is the grand mean

The test statistic follows an F-distribution with df_1 = k - 1 and df_2 = N - k degrees of freedom.
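The formulas above can be verified numerically: compute the sums of squares by hand and compare the resulting F with SciPy's built-in test. The three small groups are made-up numbers chosen so the arithmetic is easy to follow.

```python
import numpy as np
from scipy import stats

# Three tiny illustrative groups (k = 3, N = 9)
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 10.0, 11.0])]

k = len(groups)
N = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# SS_between: weighted squared deviations of group means from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SS_within: squared deviations of observations from their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)   # df1 = k - 1
ms_within = ss_within / (N - k)     # df2 = N - k
F = ms_between / ms_within          # -> 19.0 for these data

# Cross-check against SciPy
print(F, stats.f_oneway(*groups).statistic)
```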

Example

Practical Example: Comparing Teaching Methods

An education researcher wants to compare three different teaching methods. They randomly assign 90 students to three groups:

  • Group 1 (n=30): Traditional lecture
  • Group 2 (n=30): Problem-based learning
  • Group 3 (n=30): E-learning

At the end of the semester, all students take the same exam. The one-way ANOVA tests whether the mean exam scores of the three groups differ significantly.

If the result is significant (p < .05), post-hoc tests (e.g., Tukey HSD) follow to determine which specific groups differ from each other.
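This two-step procedure can be sketched with SciPy, using `scipy.stats.tukey_hsd` (available since SciPy 1.8) for the post-hoc comparisons. The exam-score arrays are simulated stand-ins for the study's data, not real results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated exam scores, n = 30 per group (illustration only)
lecture = rng.normal(70, 8, 30)   # Group 1: traditional lecture
pbl     = rng.normal(78, 8, 30)   # Group 2: problem-based learning
elearn  = rng.normal(72, 8, 30)   # Group 3: e-learning

# Step 1: omnibus ANOVA
f_stat, p = stats.f_oneway(lecture, pbl, elearn)
print(f"F = {f_stat:.2f}, p = {p:.4f}")

# Step 2: only if significant, ask WHICH pairs of groups differ
if p < 0.05:
    res = stats.tukey_hsd(lecture, pbl, elearn)
    print(res)   # pairwise mean differences with adjusted p-values and CIs
```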

Effect Size

Eta-squared (\eta^2) as a measure of effect size:

\eta^2 = \frac{SS_{between}}{SS_{total}}

Partial eta-squared is commonly reported in practice:

\eta_p^2 = \frac{SS_{between}}{SS_{between} + SS_{within}}

  Effect size    η²
  Small          0.01
  Medium         0.06
  Large          0.14

Tip: Alternatively, omega-squared (\omega^2) can be reported, which provides a less biased estimate of the population effect size.
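Eta-squared falls straight out of the sums of squares; the snippet below reuses the small illustrative groups from the formula section. Note that in a one-way design \eta^2 and \eta_p^2 coincide, because SS_total = SS_between + SS_within.

```python
import numpy as np

# Same tiny illustrative groups as in the formula example
groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 10.0, 11.0])]

grand_mean = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_total = ss_between + ss_within

# Proportion of total variance explained by group membership
eta_sq = ss_between / ss_total   # 38 / 44, roughly 0.86 -> "large"
print(f"eta^2 = {eta_sq:.3f}")
```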

Further Reading

  • Fisher, R. A. (1925). Statistical Methods for Research Workers. Oliver and Boyd.
  • Field, A. (2018). Discovering Statistics Using IBM SPSS Statistics (5th ed.). SAGE.
  • Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Lawrence Erlbaum Associates.