This calculator helps you compute the probabilities of a negative binomial distribution given the number of successes needed (r) and the probability of success (p). You can find the probability of achieving a certain number of failures before the r-th success. The distribution chart shows the probability mass function (PMF) of the negative binomial distribution.
The negative binomial distribution, also known as the Pascal distribution, is a discrete probability distribution that models the number of failures in a sequence of independent Bernoulli trials before a specified number of successes occurs. Each trial has the same probability of success (p).
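The process is easy to sanity-check by simulation: run Bernoulli(p) trials until the r-th success and count the failures along the way. This is a minimal sketch using illustrative values r = 3 and p = 0.4 (the same values used in the code examples later in this page); `sample_failures` is a helper name chosen here, not part of any library.

```python
import random

def sample_failures(r, p, rng):
    """Count failures before the r-th success in repeated Bernoulli(p) trials."""
    successes = failures = 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

rng = random.Random(42)
r, p = 3, 0.4
draws = [sample_failures(r, p, rng) for _ in range(20000)]

empirical_mean = sum(draws) / len(draws)
print(f"empirical mean: {empirical_mean:.3f}, theoretical: {r * (1 - p) / p:.3f}")
```

With 20,000 draws the empirical mean should land very close to the theoretical r(1-p)/p = 4.5.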
Combining these probabilities gives us the negative binomial formula:

P(X = k) = C(k + r - 1, k) * p^r * (1 - p)^k

Where:
- X is the number of failures observed before the r-th success
- k is a specific number of failures (k = 0, 1, 2, ...)
- r is the required number of successes
- p is the probability of success on each trial
- C(k + r - 1, k) is the number of ways to arrange the k failures among the first k + r - 1 trials
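The PMF, P(X = k) = C(k + r - 1, k) p^r (1 - p)^k, can be evaluated with nothing more than a binomial coefficient from the standard library. The values below (k = 7 failures, r = 3, p = 0.4) are illustrative and match the library calls shown later on this page.

```python
from math import comb

r, p, k = 3, 0.4, 7

# P(X = k) = C(k + r - 1, k) * p^r * (1 - p)^k
pmf = comb(k + r - 1, k) * p**r * (1 - p)**k
print(f"P(X = {k}) = {pmf:.8f}")  # 0.06449725
```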
"I need to make 5 sales this week. My success rate is 20%. What's the probability of getting exactly 8 rejections before my 5th sale?"
→ Fixed successes (5), calculating P(X = 8 failures)
"I will make exactly 50 calls today. My success rate is 20%. What's the probability of making exactly 12 sales?"
→ Fixed trials (50), calculating P(X = 12 successes)
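The two sales questions can be answered side by side. This sketch uses only the standard library, writing out both closed-form PMFs directly; `nbinom_pmf` and `binom_pmf` are helper names chosen here, not library functions.

```python
from math import comb

def nbinom_pmf(k, r, p):
    """P(exactly k failures before the r-th success)."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def binom_pmf(k, n, p):
    """P(exactly k successes in n fixed trials)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Negative binomial: 8 rejections before the 5th sale, 20% success rate
print(f"P(8 rejections before 5th sale): {nbinom_pmf(8, 5, 0.2):.4f}")

# Binomial: exactly 12 sales in 50 fixed calls, 20% success rate
print(f"P(12 sales in 50 calls): {binom_pmf(12, 50, 0.2):.4f}")
```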
"We need 10 good components for assembly. With 90% quality rate, what's the probability of encountering exactly 3 defective items?"
→ Fixed good items needed (10), calculating P(X = 3 defects)
"We will test exactly 100 components today. With 90% quality rate, what's the probability that exactly 88 will pass inspection?"
→ Fixed tests (100), calculating P(X = 88 passes)
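The same split applies to the quality-control pair; here a "success" is a good component, so p = 0.9. A stdlib-only sketch of both calculations:

```python
from math import comb

p = 0.9  # probability a component is good

# Negative binomial: exactly 3 defects before the 10th good component
r, k = 10, 3
p_defects = comb(k + r - 1, k) * p**r * (1 - p)**k
print(f"P(3 defects before 10th good part): {p_defects:.4f}")

# Binomial: exactly 88 passes in 100 fixed tests
n, k = 100, 88
p_passes = comb(n, k) * p**k * (1 - p)**(n - k)
print(f"P(88 passes in 100 tests): {p_passes:.4f}")
```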
"Clinical trial needs 50 qualified patients. If 15% qualify, what's the probability of screening exactly 280 people before finding our 50th qualified patient?"
→ Fixed qualified patients (50), calculating P(X = 280 screenings)
"We will screen exactly 500 people. If 15% qualify, what's the probability that exactly 68 will be eligible for the study?"
→ Fixed screenings (500), calculating P(X = 68 qualified)
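One subtlety in the clinical example: "exactly 280 screenings until the 50th qualified patient" is phrased in total trials, while the calculator's X counts failures, so it corresponds to 280 - 50 = 230 non-qualifying screenings. A stdlib-only sketch (assuming that reading of the question):

```python
from math import comb

def nbinom_pmf(k, r, p):
    """P(exactly k failures before the r-th success)."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

# 280 total screenings to reach the 50th qualified patient
# = 230 non-qualifying screenings (failures) before the 50th success
p_screen = nbinom_pmf(230, 50, 0.15)
print(f"P(280 screenings to reach 50 qualified): {p_screen:.4f}")

# Binomial: exactly 68 qualified among a fixed 500 screenings
p_qual = comb(500, 68) * 0.15**68 * 0.85**432
print(f"P(68 qualified in 500 screenings): {p_qual:.4f}")
```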
Key Decision Factor: Ask yourself: "Am I stopping after a fixed number of trials (Binomial), or continuing until I achieve a target number of successes (Negative Binomial)?"
library(tidyverse)
r <- 3 # number of successes needed
p <- 0.4 # probability of success on each trial
# P(X = 7)
prob_exact <- dnbinom(7, size = r, prob = p) # 0.06449725
print(str_glue("P(X = 7): {prob_exact}"))
# P(X <= 9)
prob_cumulative <- pnbinom(9, size = r, prob = p) # 0.9165567
print(str_glue("P(X <= 9): {prob_cumulative}"))
# mean and variance
mean <- r * (1-p)/p
variance <- r * (1-p)/(p^2)
print(str_glue("Mean: {mean}")) # 4.5
print(str_glue("Variance: {variance}")) # 11.25
# plot
x <- 0:20
pmf <- dnbinom(x, size = r, prob = p)
pmf_df <- data.frame(x = x, pmf = pmf)
ggplot(pmf_df, aes(x = x, y = pmf)) +
geom_col() +
labs(
x = "Number of failures before r successes",
y = "Probability",
title = "Negative Binomial Distribution PMF"
) +
theme_minimal()

import scipy.stats as stats
import numpy as np
import matplotlib.pyplot as plt
r = 3 # number of successes needed
p = 0.4 # probability of success on each trial
# P(X = 7)
prob_exact = stats.nbinom.pmf(7, r, p)
print(f"P(X = 7): {prob_exact:.4f}")
# P(X <= 9)
prob_cumulative = stats.nbinom.cdf(9, r, p)
print(f"P(X <= 9): {prob_cumulative:.4f}")
# mean and variance
mean = r * (1 - p) / p  # scipy's nbinom counts failures, so the mean is r(1-p)/p = 4.5
variance = r * (1-p)/(p**2)
print(f"Mean: {mean:.4f}")
print(f"Variance: {variance:.4f}")
# plot
x = np.arange(0, 21)
pmf = stats.nbinom.pmf(x, r, p)
plt.figure(figsize=(10, 6))
plt.vlines(x, 0, pmf, colors="b", lw=2)
plt.plot(x, pmf, "bo", ms=8)
plt.xlabel("Number of failures before r successes")
plt.ylabel("Probability")
plt.title("Negative Binomial Distribution PMF")
plt.grid(True)
plt.show()