Compute normal distribution probabilities, z-scores, and sampling distributions with visual comparison of population vs sample mean curves and confidence intervals.
The normal distribution and sampling calculator works with the Gaussian model for both individual observations and sample means. It computes probabilities, z-scores, and sample-distribution values, then shows how the two curves compare.
The normal model is used for heights, weights, measurement error, test scores, and many other approximately bell-shaped datasets. Its sampling distribution is what makes confidence intervals and hypothesis tests possible.
Enter the population mean and standard deviation, then explore point probabilities, interval probabilities, confidence intervals, and how sample size changes the spread of sample means.
This page is useful when you need both the probability of a single value and the probability of an average from repeated sampling. Those are related, but they are not the same calculation.
Seeing the population curve and the sampling curve together makes the central limit theorem easier to interpret and shows why larger samples produce tighter estimates.
PDF: f(x) = (1/(σ√(2π))) exp(−(x−μ)²/(2σ²)). z = (x − μ)/σ. SE = σ/√n. CI: X̄ ± z*·SE. P(X ≤ x) = Φ(z).
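The formulas above translate directly into a few lines of code. This is a minimal sketch using only Python's standard library (`math.erf` for the normal CDF); the function names are my own, not part of the calculator:

```python
from math import sqrt, pi, exp, erf

def normal_pdf(x, mu, sigma):
    # f(x) = (1/(σ√(2π))) · exp(−(x−μ)²/(2σ²))
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def z_score(x, mu, sigma):
    # z = (x − μ)/σ
    return (x - mu) / sigma

def normal_cdf(x, mu, sigma):
    # P(X ≤ x) = Φ(z), computed via the error function: Φ(z) = (1 + erf(z/√2))/2
    return 0.5 * (1 + erf(z_score(x, mu, sigma) / sqrt(2)))

def standard_error(sigma, n):
    # SE = σ/√n, the standard deviation of the sampling distribution of X̄
    return sigma / sqrt(n)
```

With μ = 100, σ = 15, `normal_cdf(120, 100, 15)` returns about 0.9088, matching the example result below.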
Result: Individual: P(X ≤ 120) = 90.88%, z = 1.33. Sample mean: P(X̄ ≤ 120) ≈ 100.00%, z = 6.67
With μ = 100, σ = 15: a single observation of 120 has z = 1.33 (91st percentile). But a sample mean of 120 from n = 25 has z = 6.67 (virtually impossible) because SE = 15/√25 = 3.
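The two z-scores in this example can be reproduced in a few lines; this sketch assumes the same μ = 100, σ = 15, x = 120, n = 25 as the text, and `phi` is a hypothetical helper for the standard normal CDF:

```python
from math import sqrt, erf

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma, x, n = 100, 15, 120, 25

z_ind = (x - mu) / sigma      # z for a single observation: 20/15 ≈ 1.33
se = sigma / sqrt(n)          # SE = 15/√25 = 3.0
z_mean = (x - mu) / se        # z for the sample mean: 20/3 ≈ 6.67

print(f"individual z = {z_ind:.2f}, percentile = {phi(z_ind):.4f}")
print(f"sample-mean z = {z_mean:.2f}, percentile = {phi(z_mean):.10f}")
```

The same 20-point gap from the mean is 1.33 standard errors for one observation but 6.67 for an average of 25, which is why the second probability is effectively 100%.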
The CLT states that X̄ is approximately N(μ, σ²/n) regardless of the population distribution, provided n is sufficiently large. This calculator demonstrates the effect: as you increase n, the sampling distribution narrows dramatically, showing why large samples give precise estimates.
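The narrowing is easy to verify by simulation. This sketch (my own illustration, not the calculator's code) draws repeated samples from a deliberately skewed, non-normal population and checks that the spread of the sample means tracks σ/√n:

```python
import random
from statistics import mean, stdev

random.seed(42)

# Exponential population with mean 1 and standard deviation 1 —
# deliberately skewed and non-normal, to show the CLT at work.
def draw():
    return random.expovariate(1.0)

sds = {}
for n in (5, 25, 100):
    # 2000 simulated sample means of size n each
    sample_means = [mean(draw() for _ in range(n)) for _ in range(2000)]
    sds[n] = stdev(sample_means)
    print(f"n={n:3d}  observed SE={sds[n]:.3f}  theory σ/√n={1 / n ** 0.5:.3f}")
```

The observed standard deviation of the sample means shrinks as 1/√n, matching the theoretical SE, even though the underlying population is far from bell-shaped.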
Six Sigma methodology uses the normal distribution to set quality standards. A "six sigma" process is quoted as having a defect rate of 3.4 per million opportunities; strictly, 6 standard deviations would imply about 2 defects per billion, and the 3.4 ppm figure builds in an assumed long-term 1.5σ shift of the process mean. Control charts use z-scores to detect when a process has shifted.
In hypothesis testing, z-scores convert to p-values via the normal CDF. A two-tailed p-value is 2×P(Z > |z|). This calculator provides the building blocks for understanding t-tests, z-tests, and the foundation of statistical inference.
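The z-to-p conversion can be done with the complementary error function; this is a minimal sketch, with `two_tailed_p` as a hypothetical helper name:

```python
from math import sqrt, erfc

def two_tailed_p(z):
    # P(Z > |z|) = erfc(|z|/√2) / 2, so the two-tailed
    # p-value 2·P(Z > |z|) is simply erfc(|z|/√2)
    return erfc(abs(z) / sqrt(2))

print(two_tailed_p(1.96))  # ≈ 0.05, the classic 5% threshold
print(two_tailed_p(6.67))  # vanishingly small, as in the sample-mean example
```

Using `erfc` rather than `1 - Φ(z)` avoids catastrophic cancellation for large z, where the p-value is tiny.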
A z-score measures how many standard deviations a value is from the mean: z = (x − μ)/σ. A z-score of 2 means the value is 2 standard deviations above the mean, which is in the top ~2.3%.
Standard error (SE) is the standard deviation of the sampling distribution of x̄: SE = σ/√n. It measures how much sample means vary from sample to sample. Larger samples → smaller SE → more precise estimates.
Larger samples average out individual variation. A sample of 100 is very unlikely to have a mean far from μ, even though individual values might be spread out. The SE decreases as 1/√n.
A 95% CI means: if you repeated the sampling process, 95% of the resulting intervals would contain the true mean. It's x̄ ± z*·SE, where z* = 1.96 for 95% confidence.
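The interval formula is a one-liner; this sketch assumes σ is known (the z-interval described above) and uses the example's μ-scale numbers:

```python
from math import sqrt

def confidence_interval(xbar, sigma, n, z_star=1.96):
    # x̄ ± z*·SE, with SE = σ/√n; z* = 1.96 gives 95% confidence
    se = sigma / sqrt(n)
    return xbar - z_star * se, xbar + z_star * se

lo, hi = confidence_interval(100, 15, 25)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")  # → (94.12, 105.88)
```

With x̄ = 100, σ = 15, n = 25, the SE is 3, so the interval extends 1.96 × 3 = 5.88 either side of the sample mean. When σ is unknown and estimated from the sample, a t* multiplier replaces z*.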
When the data is continuous, symmetric, and bell-shaped. Many natural measurements (heights, errors, averages) are approximately normal. The CLT also makes it appropriate for sample means regardless of the population shape.
For individual probabilities, consider a different distribution that matches the data shape. For means, the central limit theorem still pushes the sampling distribution toward normality once the sample is large enough.