StatsCalculators.com

Dunnett's Test

Created: December 4, 2024
Last Updated: August 24, 2025

Dunnett's test is a specialized post-hoc analysis used after a significant ANOVA to compare multiple treatment groups against a single control group. Unlike other post-hoc tests that compare all possible group pairs, Dunnett's test focuses specifically on treatment-to-control comparisons while controlling the familywise error rate, making it ideal for experimental research designs.

What You'll Get:

  • Complete Analysis: ANOVA F-test plus individual treatment vs. control comparisons
  • Statistical Tables: Mean differences, t-statistics, and adjusted p-values for each comparison
  • Visual Comparisons: Forest plots showing effect sizes and group distribution charts
  • Group Statistics: Detailed descriptive statistics for control and all treatment groups
  • Clear Interpretation: Which treatments significantly differ from your control group
  • APA-Style Report: Publication-ready results formatted for academic writing

💡 Pro Tip: Use Dunnett's test when you have a clear control group. If you need to compare all groups to each other, use our Tukey's HSD Test Calculator instead. Always confirm a significant overall ANOVA first!

Ready to compare your treatments to control? Try our sample dataset to see Dunnett's test in action, or upload your experimental data to discover which treatments show significant effects compared to your control condition.

Learn More

Dunnett's Test

Definition

Dunnett's Test is a multiple comparison procedure used to compare several treatments against a single control group. It maintains the family-wise error rate while providing more statistical power than methods that compare all pairs.

Formula

Test Statistic:

t = \frac{\bar{X}_i - \bar{X}_c}{SE}

Where:

  • \bar{X}_i = mean of treatment group i
  • \bar{X}_c = mean of the control group
  • n_i = sample size of treatment group i
  • n_c = sample size of the control group
  • SE = standard error of the difference in means

The standard error of the difference is calculated as:

SE = s \times \sqrt{\frac{1}{n_i} + \frac{1}{n_c}}

Where s^2 is the pooled variance, combining the variances of all groups:

s^2 = \frac{\sum_{i=1}^{k}(n_i - 1)s_i^2}{\sum_{i=1}^{k}(n_i - 1)}

In ANOVA, the pooled variance corresponds to the within-group variance:

s^2 = MS_{\text{Within}} = \frac{SS_{\text{Within}}}{df_{\text{Within}}}
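
To make these pieces concrete, here is a minimal NumPy sketch that pools the within-group variances and builds the standard error and test statistic for one treatment-vs-control comparison. The group values are hypothetical placeholders, not data from any particular study, and the critical value must still come from Dunnett's distribution.

import numpy as np

# Hypothetical placeholder data: one control group and two treatment groups
control = np.array([5.1, 4.9, 5.0, 5.2])
treat_1 = np.array([5.6, 5.4, 5.5, 5.7])
treat_2 = np.array([4.6, 4.8, 4.7, 4.5])
groups = [control, treat_1, treat_2]

# Pooled variance: sum((n_i - 1) * s_i^2) / sum(n_i - 1)
pooled_var = (sum((len(g) - 1) * g.var(ddof=1) for g in groups)
              / sum(len(g) - 1 for g in groups))
s = pooled_var ** 0.5

# Standard error of the difference between treatment 1 and the control mean
se = s * np.sqrt(1 / len(treat_1) + 1 / len(control))

# Dunnett test statistic for treatment 1 vs. control
t_1 = (treat_1.mean() - control.mean()) / se
print(round(pooled_var, 4), round(se, 4), round(t_1, 3))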

Key Assumptions

Independence: Observations must be independent
Normality: Data within each group should be normally distributed
Homogeneity of Variance: Groups should have equal variances
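
These assumptions can be screened before interpreting the test. The sketch below uses SciPy's shapiro and levene functions on the data from the worked example that follows; treat it as a quick diagnostic rather than a definitive check, especially with small samples.

from scipy import stats

# Groups from the practical example below
control = [8.5, 7.8, 8.0, 8.2, 7.9]
treat_a = [9.1, 9.3, 8.9, 9.0, 9.2]
treat_b = [7.2, 7.5, 7.0, 7.4, 7.3]
treat_c = [10.1, 10.3, 10.0, 10.2, 9.9]
groups = {"Control": control, "Treatment A": treat_a,
          "Treatment B": treat_b, "Treatment C": treat_c}

# Normality within each group (Shapiro-Wilk; low power with small samples)
for name, values in groups.items():
    stat, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Homogeneity of variance across groups (Levene's test)
stat, p = stats.levene(control, treat_a, treat_b, treat_c)
print(f"Levene's test p = {p:.3f}")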

Practical Example

Step 1: State the Data
| Group       | Data                      | N | Mean  | SD   |
|-------------|---------------------------|---|-------|------|
| Control     | 8.5, 7.8, 8, 8.2, 7.9     | 5 | 8.08  | 0.27 |
| Treatment A | 9.1, 9.3, 8.9, 9, 9.2     | 5 | 9.10  | 0.16 |
| Treatment B | 7.2, 7.5, 7, 7.4, 7.3     | 5 | 7.28  | 0.19 |
| Treatment C | 10.1, 10.3, 10, 10.2, 9.9 | 5 | 10.10 | 0.16 |
Step 2: State Hypotheses

For each treatment group i vs control:

  • H_0: \mu_i = \mu_c
  • H_a: \mu_i \neq \mu_c
  • \alpha = 0.05
Step 3: Calculate SS and MSE
  • SS_{\text{Between}} = 22.532 with df = 3
  • SS_{\text{Within}} = 0.656 with df = 16
  • SS_{\text{Total}} = SS_{\text{Between}} + SS_{\text{Within}} = 23.188
  • MS_{E} = \frac{0.656}{16} = 0.041
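
The sums of squares and mean square error above can be reproduced directly from the raw data in Step 1; here is a short NumPy sketch of that calculation.

import numpy as np

# Group data from Step 1
groups = [
    np.array([8.5, 7.8, 8.0, 8.2, 7.9]),     # Control
    np.array([9.1, 9.3, 8.9, 9.0, 9.2]),     # Treatment A
    np.array([7.2, 7.5, 7.0, 7.4, 7.3]),     # Treatment B
    np.array([10.1, 10.3, 10.0, 10.2, 9.9]), # Treatment C
]
all_values = np.concatenate(groups)
grand_mean = all_values.mean()

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_within = len(all_values) - len(groups)   # 20 - 4 = 16
mse = ss_within / df_within

print(ss_between, ss_within, mse)  # ≈ 22.532, 0.656, 0.041
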
Step 4: Calculate Test Statistics

For each treatment vs control:

  • Treatment A: t = \frac{9.10 - 8.08}{\sqrt{\frac{2(0.041)}{5}}} = 7.965
  • Treatment B: t = \frac{7.28 - 8.08}{\sqrt{\frac{2(0.041)}{5}}} = -6.247
  • Treatment C: t = \frac{10.10 - 8.08}{\sqrt{\frac{2(0.041)}{5}}} = 15.774
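
A compact check of these statistics, using the group means from Step 1 and the MS_E value from Step 3:

import numpy as np

means = {"Control": 8.08, "Treatment A": 9.10,
         "Treatment B": 7.28, "Treatment C": 10.10}
mse, n = 0.041, 5                    # MS_E and per-group sample size from Step 3

se = np.sqrt(mse * (1 / n + 1 / n))  # standard error of each difference
for name in ["Treatment A", "Treatment B", "Treatment C"]:
    t = (means[name] - means["Control"]) / se
    print(f"{name}: t = {t:.3f}")    # ≈ 7.965, -6.247, 15.774
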
Step 5: Compare with Critical Value

Critical value for \alpha = 0.05, k = 3 treatments, df = 16: t_{crit} = 2.74

Compare |t| with the critical value:

  • Treatment A: |7.965| > 2.74 (Significant)
  • Treatment B: |-6.247| > 2.74 (Significant)
  • Treatment C: |15.774| > 2.74 (Significant)
Step 6: Draw Conclusions

All treatments show significant differences from the control group (p < 0.05):

  • Treatment A significantly increases response
  • Treatment B significantly decreases response
  • Treatment C shows the largest significant increase

Code Examples

R
library(multcomp)
library(tidyverse)
# Data preparation
df <- tibble(
  Group = rep(c("Control", "Treatment A", "Treatment B", "Treatment C"), each = 5),
  Response = c(8.5, 7.8, 8, 8.2, 7.9, 
               9.1, 9.3, 8.9, 9, 9.2, 
               7.2, 7.5, 7, 7.4, 7.3, 
               10.1, 10.3, 10, 10.2, 9.9)
)

# The multcomp package requires the Group variable to be a factor
df$Group <- as.factor(df$Group)

# Perform one-way ANOVA
model <- aov(Response ~ Group, data = df)

# Perform Dunnett's test
dunnett_test <- glht(model, linfct = mcp(Group = "Dunnett"))
summary(dunnett_test)
Python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import pandas as pd

# Create example data
data = pd.DataFrame({
    'Response': [8.5, 7.8, 8.0, 8.2, 7.9,  # Control
                9.1, 9.3, 8.9, 9.0, 9.2,   # Treatment A
                7.2, 7.5, 7.0, 7.4, 7.3,   # Treatment B
                10.1, 10.3, 10.0, 10.2, 9.9], # Treatment C
    'Group': np.repeat(['Control', 'Treatment A', 
                       'Treatment B', 'Treatment C'], 5)
})

# Perform multiple pairwise comparisons using Tukey's test
# (Note: Dunnett's test compares treatments only against a control group,
# but statsmodels does not provide a direct implementation of Dunnett's test;
# see the SciPy-based sketch below for a direct alternative.)
results = pairwise_tukeyhsd(data['Response'], 
                           data['Group'])

print(results)
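
If SciPy 1.11 or newer is available, scipy.stats.dunnett carries out the treatment-vs-control comparisons directly and returns p-values adjusted for the family of comparisons. A minimal sketch with the same example data:

import numpy as np
from scipy.stats import dunnett  # requires SciPy >= 1.11

control = np.array([8.5, 7.8, 8.0, 8.2, 7.9])
treat_a = np.array([9.1, 9.3, 8.9, 9.0, 9.2])
treat_b = np.array([7.2, 7.5, 7.0, 7.4, 7.3])
treat_c = np.array([10.1, 10.3, 10.0, 10.2, 9.9])

# Each treatment is compared against the control group only
result = dunnett(treat_a, treat_b, treat_c, control=control)
print(result.statistic)  # Dunnett t statistics for A, B, C vs. control
print(result.pvalue)     # p-values adjusted for the three comparisons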

Alternative Tests

  • Tukey's HSD: When comparing all groups to each other
  • Bonferroni: When a simpler but more conservative approach is needed
  • Games-Howell: When variances are unequal

Verification