Dunnett's test is a specialized post-hoc analysis used after a significant ANOVA to compare multiple treatment groups against a single control group. Unlike other post-hoc tests that compare all possible group pairs, Dunnett's test focuses specifically on treatment-to-control comparisons while controlling the familywise error rate, making it ideal for experimental research designs.
What You'll Get:
- Complete Analysis: ANOVA F-test plus individual treatment vs. control comparisons
- Statistical Tables: Mean differences, t-statistics, and adjusted p-values for each comparison
- Visual Comparisons: Forest plots showing effect sizes and group distribution charts
- Group Statistics: Detailed descriptive statistics for control and all treatment groups
- Clear Interpretation: Which treatments significantly differ from your control group
- APA-Style Report: Publication-ready results formatted for academic writing
💡 Pro Tip: Use Dunnett's test when you have a clear control group. If you need to compare all groups to each other, use our Tukey's HSD Test Calculator instead. Always run a significant ANOVA first!
Ready to compare your treatments to control? Try our sample dataset to see Dunnett's test in action, or upload your experimental data to discover which treatments show significant effects compared to your control condition.
Learn More
Dunnett's Test
Definition
Dunnett's Test is a multiple comparison procedure used to compare several treatments against a single control group. It controls the family-wise error rate while providing more statistical power than methods that compare all possible pairs.
Formula
Test Statistic:

$$t_i = \frac{\bar{X}_i - \bar{X}_c}{SE}$$

Where:
- $\bar{X}_i$ = mean of treatment group $i$
- $\bar{X}_c$ = mean of the control group
- $n_i$ = sample size of treatment group $i$
- $n_c$ = sample size of the control group
- $SE$ = standard error of the difference between the treatment and control means

The standard error is calculated as:

$$SE = \sqrt{s_p^2 \left( \frac{1}{n_i} + \frac{1}{n_c} \right)}$$

Where $s_p^2$ is the pooled variance, combining the variances of all $k$ groups:

$$s_p^2 = \frac{\sum_{j=1}^{k} (n_j - 1)\, s_j^2}{\sum_{j=1}^{k} (n_j - 1)}$$

In ANOVA, the pooled variance corresponds to the within-group variance:

$$s_p^2 = MS_{within} = \frac{SS_{within}}{N - k}$$

where $N$ is the total sample size. Each $t_i$ is then compared against a critical value from Dunnett's distribution, which adjusts for the multiple treatment-vs-control comparisons.
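Read together, these formulas reduce to a few lines of arithmetic. Below is a minimal Python sketch, assuming the pooled within-group variance (MSE) has already been obtained from a one-way ANOVA; the function name `dunnett_t` is an illustrative choice, not a library routine:

```python
import numpy as np

def dunnett_t(treatment, control, ms_within):
    """Unadjusted Dunnett t statistic for one treatment-vs-control comparison.

    ms_within is the pooled within-group variance (MSE) from the one-way ANOVA.
    Significance must still be judged against Dunnett's critical value or
    adjusted p-values, not an ordinary t distribution.
    """
    treatment = np.asarray(treatment, dtype=float)
    control = np.asarray(control, dtype=float)
    se = np.sqrt(ms_within * (1 / treatment.size + 1 / control.size))
    return (treatment.mean() - control.mean()) / se
```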
Key Assumptions
- Independence: observations are independent within and between groups
- Normality: the response in each group is approximately normally distributed
- Homogeneity of variance: all groups share a common variance (the pooled variance used above)
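These assumptions can be screened quickly before running the test. A short sketch using SciPy's Shapiro-Wilk and Levene tests, applied to the example data from the section below:

```python
from scipy import stats

groups = {
    "Control":     [8.5, 7.8, 8.0, 8.2, 7.9],
    "Treatment A": [9.1, 9.3, 8.9, 9.0, 9.2],
    "Treatment B": [7.2, 7.5, 7.0, 7.4, 7.3],
    "Treatment C": [10.1, 10.3, 10.0, 10.2, 9.9],
}

# Normality within each group (Shapiro-Wilk)
for name, values in groups.items():
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(values).pvalue, 3))

# Homogeneity of variance across groups (Levene's test)
print("Levene p =", round(stats.levene(*groups.values()).pvalue, 3))
```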
Practical Example
Step 1: State the Data
| Group | Data | N | Mean | SD |
|---|---|---|---|---|
| Control | 8.5, 7.8, 8.0, 8.2, 7.9 | 5 | 8.08 | 0.27 |
| Treatment A | 9.1, 9.3, 8.9, 9.0, 9.2 | 5 | 9.10 | 0.16 |
| Treatment B | 7.2, 7.5, 7.0, 7.4, 7.3 | 5 | 7.28 | 0.19 |
| Treatment C | 10.1, 10.3, 10.0, 10.2, 9.9 | 5 | 10.10 | 0.16 |
Step 2: State Hypotheses
For each treatment group $i$ vs. control:
- $H_0: \mu_i = \mu_c$ (the treatment mean equals the control mean)
- $H_1: \mu_i \neq \mu_c$ (the treatment mean differs from the control mean)
Step 3: Calculate SS and MSE
- $SS_{within} = \sum_j (n_j - 1)\, s_j^2 = 0.656$ with $df_{within} = N - k = 20 - 4 = 16$
- $MSE = SS_{within} / df_{within} = 0.656 / 16 = 0.041$
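A quick check of these quantities in Python (plain NumPy, using the raw data from Step 1):

```python
import numpy as np

groups = [
    np.array([8.5, 7.8, 8.0, 8.2, 7.9]),      # Control
    np.array([9.1, 9.3, 8.9, 9.0, 9.2]),      # Treatment A
    np.array([7.2, 7.5, 7.0, 7.4, 7.3]),      # Treatment B
    np.array([10.1, 10.3, 10.0, 10.2, 9.9]),  # Treatment C
]

# Within-group sum of squares and its degrees of freedom
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = sum(g.size - 1 for g in groups)   # 20 - 4 = 16
mse = ss_within / df_within

print(ss_within, df_within, mse)  # approximately 0.656, 16, 0.041
```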
Step 4: Calculate Test Statistics
For each treatment vs. control, using $SE = \sqrt{MSE\left(\frac{1}{n_i} + \frac{1}{n_c}\right)} = \sqrt{0.041 \times \left(\frac{1}{5} + \frac{1}{5}\right)} \approx 0.128$:
- Treatment A: $t = \frac{9.10 - 8.08}{SE} \approx 7.96$
- Treatment B: $t = \frac{7.28 - 8.08}{SE} \approx -6.25$
- Treatment C: $t = \frac{10.10 - 8.08}{SE} \approx 15.77$
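The same arithmetic in Python, starting from the group means and the MSE obtained in Step 3:

```python
import numpy as np

control_mean = 8.08
treatment_means = {"Treatment A": 9.10, "Treatment B": 7.28, "Treatment C": 10.10}
n = 5        # observations per group
mse = 0.041  # MS_within from Step 3

# Standard error of each treatment-vs-control difference
se = np.sqrt(mse * (1 / n + 1 / n))

for name, mean in treatment_means.items():
    t = (mean - control_mean) / se
    print(f"{name}: t = {t:.2f}")
# Treatment A: t = 7.96, Treatment B: t = -6.25, Treatment C: t = 15.77
```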
Step 5: Compare with Critical Value
Critical value for $\alpha = 0.05$ (two-sided), $k = 3$ treatment comparisons, and $df = 16$: $d_{crit} \approx 2.59$ (from Dunnett's table).
Compare each $|t|$ with the critical value:
- Treatment A: $|7.96| > 2.59$ (Significant)
- Treatment B: $|-6.25| > 2.59$ (Significant)
- Treatment C: $|15.77| > 2.59$ (Significant)
Step 6: Draw Conclusions
All treatments show significant differences from the control group ($p < 0.05$):
- Treatment A significantly increases response
- Treatment B significantly decreases response
- Treatment C shows the largest significant increase
Code Examples
R

```r
library(multcomp)
library(tidyverse)

# Data preparation
df <- tibble(
  Group = rep(c("Control", "Treatment A", "Treatment B", "Treatment C"), each = 5),
  Response = c(8.5, 7.8, 8.0, 8.2, 7.9,
               9.1, 9.3, 8.9, 9.0, 9.2,
               7.2, 7.5, 7.0, 7.4, 7.3,
               10.1, 10.3, 10.0, 10.2, 9.9)
)

# The multcomp package requires the Group variable to be a factor.
# "Dunnett" contrasts compare every level against the first (reference) level;
# here "Control" comes first alphabetically, so it is the reference. Use relevel() otherwise.
df$Group <- as.factor(df$Group)

# Fit the one-way ANOVA
model <- aov(Response ~ Group, data = df)

# Perform Dunnett's test
dunnett_test <- glht(model, linfct = mcp(Group = "Dunnett"))
summary(dunnett_test)
```
Python

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Create example data
data = pd.DataFrame({
    'Response': [8.5, 7.8, 8.0, 8.2, 7.9,      # Control
                 9.1, 9.3, 8.9, 9.0, 9.2,      # Treatment A
                 7.2, 7.5, 7.0, 7.4, 7.3,      # Treatment B
                 10.1, 10.3, 10.0, 10.2, 9.9], # Treatment C
    'Group': np.repeat(['Control', 'Treatment A',
                        'Treatment B', 'Treatment C'], 5)
})

# Perform multiple pairwise comparisons using Tukey's test
# (Note: Dunnett's test focuses only on comparisons against a control group,
# but statsmodels does not provide a direct implementation of Dunnett's test.)
results = pairwise_tukeyhsd(data['Response'], data['Group'])
print(results)
```
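For a direct Dunnett's test in Python, SciPy 1.11 and newer provide scipy.stats.dunnett. A minimal sketch on the same example data, assuming a sufficiently recent SciPy installation:

```python
import numpy as np
from scipy import stats  # requires SciPy >= 1.11 for stats.dunnett

control = np.array([8.5, 7.8, 8.0, 8.2, 7.9])
treat_a = np.array([9.1, 9.3, 8.9, 9.0, 9.2])
treat_b = np.array([7.2, 7.5, 7.0, 7.4, 7.3])
treat_c = np.array([10.1, 10.3, 10.0, 10.2, 9.9])

# Each treatment sample is compared against the control sample;
# the reported p-values are adjusted for the family of comparisons.
res = stats.dunnett(treat_a, treat_b, treat_c, control=control)
print(res.statistic)              # one t statistic per treatment group
print(res.pvalue)                 # Dunnett-adjusted p-values
print(res.confidence_interval())  # simultaneous confidence intervals for the mean differences
```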
Alternative Tests
- Tukey's HSD: When comparing all groups to each other
- Bonferroni: When a simpler but more conservative approach is needed
- Games-Howell: When variances are unequal