Explore the sampling distribution of p̂. Calculate mean, standard error, probabilities, and quantiles. Visualize the bell curve with normal approximation conditions.
When you draw a random sample from a population with true proportion p, the sample proportion p̂ changes from sample to sample. The sampling distribution of p̂ describes that variation and is approximately normal when the sample is large enough.
This calculator lets you enter p and n, then shows the mean, standard error, quantiles, and tail probabilities for the resulting distribution. You can also ask how unusual an observed p̂ would be if the population proportion really were p.
That makes it useful for survey planning, confidence intervals, and hypothesis tests about proportions.
The sampling distribution is the bridge between a population proportion and the sample proportions you actually observe. Showing the distribution, the standard error, and the approximation checks together makes it easier to judge whether a sample result is plausible or unusual.
Sampling Distribution of p̂:
Mean: μₚ̂ = p
Standard Error: SE = √(p(1−p)/n)
With FPC: SE_adj = SE × √((N−n)/(N−1))
Normal Approximation Conditions: np ≥ 10 and n(1−p) ≥ 10
Z-score for an observed p̂: z = (p̂ − p) / SE
Tail probability: P(p̂ < x) = Φ((x − p) / SE)
Result: P(p̂ > 0.53) = 0.0287
With p = 0.5 and n = 1,000, the standard error is 0.0158. An observed p̂ = 0.53 has z = 1.90, giving P(p̂ > 0.53) = 0.029. There's about a 2.9% chance of seeing 53% or more purely by sampling variability if the true proportion is 50%.
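The worked example above can be reproduced in a few lines of Python. This is a standalone sketch, not the calculator's internals; the `phi` helper is just the standard normal CDF built from `math.erf`:

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p, n = 0.5, 1000
se = math.sqrt(p * (1 - p) / n)   # ≈ 0.0158
z = (0.53 - p) / se               # ≈ 1.90
tail = 1 - phi(z)                 # P(p̂ > 0.53) ≈ 0.029
print(round(se, 4), round(z, 2), round(tail, 3))
```

The small difference between 0.0287 and 0.029 is just rounding of z before looking up the tail probability.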
Each observation is a Bernoulli trial with success probability p, so the number of successes follows a Binomial(n, p) distribution and p̂ = count/n. As n grows, that distribution becomes more nearly normal, which is why the normal approximation is so common in introductory inference.
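To see that convergence concretely, one can compare the exact binomial tail with a continuity-corrected normal approximation. The function names below (`binom_tail`, `normal_tail`) are illustrative, not part of any library:

```python
import math

def binom_tail(n, p, k):
    # Exact P(X >= k) for X ~ Binomial(n, p)
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_tail(n, p, k):
    # Normal approximation with continuity correction
    mu, sd = n * p, math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sd
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

# P(p̂ >= 0.53) with n = 1000, p = 0.5, i.e. at least 530 successes
print(binom_tail(1000, 0.5, 530))   # exact
print(normal_tail(1000, 0.5, 530))  # very close for n this large
```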
The standard error SE = √(p(1−p)/n) is largest at p = 0.5. That is why sample-size planning often uses p = 0.5 when you do not know the population proportion in advance: it gives the widest, most conservative precision estimate.
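That conservative planning rule can be sketched as a small helper; the `sample_size` function and its defaults are an assumption for illustration, not the calculator's code:

```python
import math

def sample_size(margin, p=0.5, z=1.96):
    # Smallest n so that z * sqrt(p(1-p)/n) <= margin;
    # p = 0.5 is the conservative (worst-case) choice
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.03))  # 1068: the classic n for a ±3-point margin at 95%
print(sample_size(0.05))  # 385
```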
The distribution helps you answer practical questions about how much a sample proportion can move around, how large a sample you need for a desired margin of error, and when exact binomial probabilities are a better choice than the normal approximation.
The sampling distribution of p̂ is the probability distribution of sample proportions across all possible samples of size n from the population. By the Central Limit Theorem, it's approximately N(p, p(1−p)/n) for large samples.
The mean is p because p̂ is an unbiased estimator of p: on average, across all possible samples, the sample proportion equals the population proportion. Individual samples vary, but the expected value is p.
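A quick Monte Carlo check makes both the unbiasedness and the standard error tangible. This is an illustrative simulation (fixed seed for reproducibility), not how the calculator computes anything:

```python
import random
import statistics

random.seed(0)
p, n, reps = 0.5, 1000, 2000

# Draw many samples of size n and record each sample proportion
p_hats = [sum(random.random() < p for _ in range(n)) / n for _ in range(reps)]

print(round(statistics.mean(p_hats), 3))   # ≈ p = 0.5 (unbiased)
print(round(statistics.stdev(p_hats), 4))  # ≈ theoretical SE = 0.0158
```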
The standard error SE = √(p(1−p)/n) determines the spread. It depends on the population proportion (maximum spread at p = 0.5) and on sample size (larger n means less spread). Population size matters only through the finite population correction (FPC).
When np < 10 or n(1−p) < 10, the sampling distribution is noticeably skewed and the normal approximation is poor. For example, with p = 0.01 and n = 100, np = 1 (too small). You'd need n ≥ 1,000 for this proportion.
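The rule of thumb is easy to encode. The `normal_ok` helper below is an illustrative sketch of the check, not part of the calculator:

```python
def normal_ok(n, p):
    # Rule of thumb: both expected counts must be at least 10
    return n * p >= 10 and n * (1 - p) >= 10

print(normal_ok(100, 0.01))   # False: np = 1, use exact binomial instead
print(normal_ok(1000, 0.01))  # True: np = 10 and n(1-p) = 990
```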
A 95% confidence interval for p is p̂ ± 1.96×SE. This comes directly from the sampling distribution: 95% of sample proportions fall within 1.96 standard errors of the mean.
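As a sketch, the Wald interval follows directly from that formula (the `wald_ci` name is mine; it plugs the observed p̂ into the SE):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    # 95% Wald interval: p_hat ± z * sqrt(p_hat(1-p_hat)/n)
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

lo, hi = wald_ci(0.53, 1000)
print(round(lo, 3), round(hi, 3))  # ≈ 0.499 to 0.561
```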
When sampling without replacement from a finite population of size N, the variability of p̂ is slightly less than with replacement. The factor √((N−n)/(N−1)) adjusts the SE downward. It's important when n is a substantial fraction of N.
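The adjustment above can be sketched as follows; the function and the example population sizes are illustrative assumptions:

```python
import math

def se_with_fpc(p, n, N=None):
    # Standard error, optionally adjusted for a finite population of size N
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return se

# Sampling 400 from a population of 2,000: a 20% sampling fraction
print(se_with_fpc(0.5, 400))        # 0.025, without FPC
print(se_with_fpc(0.5, 400, 2000))  # ≈ 0.0224, with FPC
```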