Calculate standard error for means, proportions, differences, or raw data. Includes margin of error, finite population correction, and SE vs sample size comparison.
The standard error (SE) measures the precision of a sample statistic as an estimate of a population value. It describes how much a mean, proportion, or difference would vary from sample to sample if you repeated the same study design many times.
This calculator covers several common cases: the SE of a mean, a proportion, a difference between means, a difference between proportions, and SE derived directly from raw data. It also reports the corresponding margin of error and can apply a finite population correction when the sample is a large share of the full population.
That makes it useful for confidence intervals, hypothesis tests, survey interpretation, and planning how large a sample you need to reach a given level of precision.
Standard error sits underneath most introductory inference, but the formula changes with the estimator and the study design. It is easy to mix up standard deviation, standard error, and margin of error if you are moving quickly between notes, software, and hand calculations.
Keeping the estimator choice, confidence level, and finite-population option together helps make the result easier to interpret and easier to explain in a report or study plan.
SE of Mean: SE = s / √n
SE of Proportion: SE = √(p̂(1−p̂)/n)
SE of Difference of Means: SE = √(s₁²/n₁ + s₂²/n₂)
SE of Difference of Proportions: SE = √(p̂₁(1−p̂₁)/n₁ + p̂₂(1−p̂₂)/n₂)
Margin of Error: MOE = z* × SE
With FPC: SE_adj = SE × √((N−n)/(N−1))
Key relationship: SE ∝ 1/√n
Result: SE = 2.1909, MOE = ±4.29
With s = 12 and n = 30, the standard error of the mean is 12/√30 = 2.19. At 95% confidence (z* = 1.96), the margin of error is ±4.29. In other words, in about 95% of repeated samples the sample mean would fall within roughly 4.3 units of the true population mean.
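The worked example above can be reproduced in a few lines. This is a minimal Python sketch; the function names are illustrative, and 1.96 is the usual two-sided 95% critical value:

```python
import math

def se_mean(s, n):
    """Standard error of the mean: SE = s / sqrt(n)."""
    return s / math.sqrt(n)

def margin_of_error(se, z_star=1.96):
    """Margin of error at the given critical value (default: two-sided 95% z*)."""
    return z_star * se

se = se_mean(12, 30)        # ≈ 2.1909
moe = margin_of_error(se)   # ≈ 4.29
print(round(se, 4), round(moe, 2))
```

For other confidence levels, only z* changes (e.g. 1.645 for 90%, 2.576 for 99%).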
Nearly every inferential procedure in statistics uses the standard error. Confidence intervals are point estimate ± z* × SE. Test statistics are (estimate − null) / SE. Power analysis uses SE to determine the sample size needed to detect an effect. Understanding SE is understanding the precision of your data.
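Both patterns can be written directly from those definitions. A sketch in Python, where the null value of 45 is purely illustrative:

```python
def confidence_interval(estimate, se, z_star=1.96):
    """Point estimate ± z* × SE."""
    moe = z_star * se
    return (estimate - moe, estimate + moe)

def z_statistic(estimate, null_value, se):
    """Test statistic: (estimate − null) / SE."""
    return (estimate - null_value) / se

lo, hi = confidence_interval(50, 2.19)   # 95% CI around a sample mean of 50
z = z_statistic(50, 45, 2.19)            # testing H0: mean = 45
print(lo, hi, z)
```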
The inverse square root relationship between SE and n has profound practical implications. Going from ±10% precision to ±5% requires 4× the data; reaching ±1% requires 100× the starting sample. This diminishing-returns curve is why most studies settle for "good enough" precision rather than pursuing perfection: the required sample size grows with the square of the desired precision gain.
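Solving MOE = z* × s/√n for n makes the quadratic cost explicit. A sketch, reusing the s = 12 example from above:

```python
import math

def n_for_target_moe(s, target_moe, z_star=1.96):
    """Smallest n satisfying z* * s / sqrt(n) <= target_moe,
    i.e. n >= (z* * s / target_moe)**2, rounded up."""
    return math.ceil((z_star * s / target_moe) ** 2)

print(n_for_target_moe(12, 4))   # baseline target
print(n_for_target_moe(12, 2))   # halving the MOE ~quadruples n
print(n_for_target_moe(12, 1))   # quartering it ~16x the baseline
```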
Beyond simple means and proportions, standard errors exist for regression coefficients, correlation coefficients, percentiles, and virtually any sample statistic. When analytical formulas aren't available, bootstrap methods estimate SE by resampling from the data. The concept extends to any statistic that varies across samples.
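The bootstrap idea is simple to sketch: resample the data with replacement many times, compute the statistic on each resample, and take the standard deviation of those replicates as the SE estimate. A minimal version in Python (the sample values and resample count are arbitrary illustrations):

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    """Estimate the SE of any statistic by resampling with replacement."""
    rng = random.Random(seed)
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return statistics.stdev(reps)

sample = [4.1, 5.6, 3.8, 6.2, 5.0, 4.7, 5.9, 4.4]
print(bootstrap_se(sample))  # close to the analytic s / sqrt(n) for the mean
```

For the mean, this should land near the analytic s/√n; its value is that the same function works for medians, correlations, or any other statistic with no closed-form SE.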
Standard deviation (SD) measures variability of individual observations within a sample. Standard error (SE) measures variability of the sample statistic (like the mean) across different samples. SE = SD/√n, so it's always smaller than SD for n > 1.
With more observations, extreme values average out and the sample statistic becomes more stable. The Central Limit Theorem guarantees this: the variance of the sample mean is σ²/n, giving SE = σ/√n.
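This stabilizing effect is easy to check by simulation: draw many samples, record each sample mean, and measure the spread of those means. A sketch with assumed parameters (σ = 10, n = 25), where the spread should come out near σ/√n = 2:

```python
import random
import statistics

def simulated_se_of_mean(sigma=10.0, n=25, reps=5000, seed=1):
    """Empirical SD of the sample mean across many simulated samples."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.gauss(0, sigma) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

print(simulated_se_of_mean())  # should be close to 10 / sqrt(25) = 2.0
```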
Confidence intervals are generally preferred for scientific communication because they're more intuitive (the range of plausible values). SE is useful for technical audiences and as an input to meta-analyses. Some journals require one or the other.
When comparing two independent estimates, the SE of their difference combines both uncertainties: SE_diff = √(SE₁² + SE₂²). This applies to both mean differences and proportion differences.
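That combination rule is one line of code. A sketch with illustrative SE values:

```python
import math

def se_difference(se1, se2):
    """SE of the difference of two independent estimates:
    SE_diff = sqrt(SE1^2 + SE2^2)."""
    return math.sqrt(se1**2 + se2**2)

# Two independent group estimates with SEs of 2.0 and 1.5:
print(se_difference(2.0, 1.5))  # 2.5
```

Note the independence assumption: for paired or otherwise correlated estimates, a covariance term would need to be subtracted.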
Compute the sample standard deviation (s), then divide by √n. The raw data mode in this calculator does this automatically, computing s from your data and then SE = s/√n.
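The raw-data calculation can be sketched with the standard library; `statistics.stdev` uses the sample (n−1) denominator, matching the usual definition of s (the data values are illustrative):

```python
import math
import statistics

def se_from_raw_data(data):
    """Sample standard deviation (n-1 denominator) divided by sqrt(n)."""
    return statistics.stdev(data) / math.sqrt(len(data))

values = [12, 15, 11, 14, 13, 16, 10, 13]
print(se_from_raw_data(values))  # s = 2 here, so SE = 2 / sqrt(8) ≈ 0.7071
```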
Meta-analyses weight studies by the inverse of their squared SE. Studies with smaller SE (larger samples, less variability) get more weight because they provide more precise estimates of the effect.
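Inverse-variance weighting can be sketched directly: each study's weight is 1/SE², normalized so the weights sum to 1. The SE values below are illustrative:

```python
def inverse_variance_weights(ses):
    """Meta-analysis weights: w_i = 1 / SE_i^2, normalized to sum to 1."""
    raw = [1.0 / se**2 for se in ses]
    total = sum(raw)
    return [w / total for w in raw]

# A study with half the SE gets four times the weight:
print(inverse_variance_weights([1.0, 2.0]))  # [0.8, 0.2]
```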