# One-Sample t-Test
The one-sample t-test assesses whether the mean of a sample differs significantly from a known or hypothesized reference value μ₀.
## When to Use
Use the one-sample t-test when:
- You want to compare the mean of a single sample to a known reference value
- The dependent variable is metric (continuous)
- The data are approximately normally distributed
Typical research questions:
- Does the average IQ of a class differ significantly from 100?
- Is the mean production time different from the target value?
## Assumptions
- Independence of observations
- Metric scale of the dependent variable
- Normal distribution of the data (can be checked with the Shapiro-Wilk test)
- The reference value μ₀ is known or theoretically justified
Note: If the normality assumption is violated (e.g., a small sample with a skewed distribution), the one-sample version of the Wilcoxon signed-rank test (comparing the median to a reference value) is the appropriate nonparametric alternative.
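This decision can be sketched in Python with `scipy.stats`: check normality with the Shapiro-Wilk test, then fall back to the Wilcoxon signed-rank test if normality is rejected. The sample data and the reference value `mu0` below are purely illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical small, right-skewed sample (illustrative data only)
sample = rng.exponential(scale=2.0, size=20)
mu0 = 1.5  # assumed reference value for illustration

# Shapiro-Wilk test of the normality assumption
_, p_normal = stats.shapiro(sample)

if p_normal < 0.05:
    # Normality rejected: one-sample Wilcoxon signed-rank test
    # on the differences from the reference value
    result = stats.wilcoxon(sample - mu0)
else:
    result = stats.ttest_1samp(sample, popmean=mu0)
```

Both branches return an object with a `pvalue` attribute, so the downstream decision logic stays the same either way.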
## Formula
The test statistic is calculated as:

$$t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$$

where:
- $\bar{x}$ is the sample mean
- $\mu_0$ is the hypothetical reference value (population mean under $H_0$)
- $s$ is the sample standard deviation
- $n$ is the sample size

The test statistic follows a t-distribution with $n - 1$ degrees of freedom.
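The formula above can be translated directly into a few lines of standard-library Python; the measurement values here are hypothetical and serve only to illustrate the computation.

```python
import math
import statistics

# Hypothetical measurements (illustrative data only)
sample = [498.2, 501.1, 499.5, 500.8, 497.9, 502.3, 499.0, 500.4]
mu0 = 500.0                      # reference value under H0

n = len(sample)
xbar = statistics.fmean(sample)  # sample mean
s = statistics.stdev(sample)     # sample SD with n - 1 in the denominator

# t = (mean - mu0) / (s / sqrt(n)), with n - 1 degrees of freedom
t = (xbar - mu0) / (s / math.sqrt(n))
df = n - 1
```

Note that `statistics.stdev` already uses the $n - 1$ denominator required for the sample standard deviation.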
## Example
Practical Example: Fill Volume of Beverage Bottles
A quality manager wants to check whether the mean fill volume of a bottling line meets the target value of 500 ml. They take a random sample of 30 bottles and measure their fill volume.
- Sample: n = 30 bottles
- Reference value μ₀: 500 ml (target value)
- Research question: Does the mean fill volume differ significantly from 500 ml?
The one-sample t-test tests the null hypothesis $H_0\colon \mu = 500$ ml against the alternative hypothesis $H_1\colon \mu \neq 500$ ml (two-tailed).
## Effect Size
Cohen's d as a measure of effect size:

$$d = \frac{|\bar{x} - \mu_0|}{s}$$
| Effect Size | Cohen's d |
|---|---|
| Small | 0.2 |
| Medium | 0.5 |
| Large | 0.8 |
Tip: The effect size indicates how far the sample mean deviates from the reference value in units of the standard deviation. It is independent of sample size and facilitates comparability across studies.
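A minimal helper for Cohen's d in the one-sample design, using only the standard library; the input values are again hypothetical.

```python
import statistics

def cohens_d_one_sample(sample, mu0):
    """Cohen's d for a one-sample design: |mean - mu0| / s."""
    xbar = statistics.fmean(sample)
    s = statistics.stdev(sample)  # sample SD, n - 1 denominator
    return abs(xbar - mu0) / s

# Hypothetical data: d depends only on the distance of the mean
# from the reference value in units of the standard deviation
d = cohens_d_one_sample([498, 501, 499, 500, 497, 502], 500.0)
```

Because the sample size does not appear in the formula, d stays comparable across studies of different sizes, unlike the t statistic or the p-value.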
## Further Reading
- Student (1908). The probable error of a mean. Biometrika, 6(1), 1–25.
- Field, A. (2018). Discovering Statistics Using IBM SPSS Statistics (5th ed.). SAGE.