Calculate the coefficient of variation (CV) for one or two datasets, with comparison mode, CV gauge, reference table, and detailed calculation steps.
The coefficient of variation (CV) expresses standard deviation as a percentage of the mean, which makes it a useful way to compare relative variability across datasets with different units or scales.
This calculator supports both single-dataset analysis and side-by-side comparison. It reports the CV along with the mean, standard deviation, variance, standard error, and relative range, then places the result on a gauge so you can judge whether the spread is low, moderate, or high relative to the average.
It is most useful when raw standard deviations are hard to compare directly, such as when one dataset is measured in grams and another in dollars, or when two processes have very different average levels.
Raw standard deviation answers "how spread out is this dataset?" but not "how large is that spread relative to the mean?" CV fills that gap, which is why it is used in laboratory precision work, manufacturing repeatability, and cross-scale comparisons.
Seeing CV beside the underlying mean and standard deviation helps you tell whether a dataset is truly unstable or whether it only looks noisy because its values are large in absolute terms.
CV = (σ / |μ|) × 100%, where σ is the standard deviation and μ is the mean. Sample standard deviation: s = √[Σ(xᵢ − x̄)² / (n − 1)]. Relative Range = (Range / |Mean|) × 100%.
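The formulas above translate directly into code. This is a minimal sketch using only the standard library; the function names (`coefficient_of_variation`, `relative_range`) are illustrative, not part of any particular package:

```python
import math

def coefficient_of_variation(data, sample=True):
    """CV as a percentage: (SD / |mean|) * 100."""
    n = len(data)
    mean = sum(data) / n
    if mean == 0:
        raise ValueError("CV is undefined when the mean is zero")
    # Sample SD uses n - 1 in the denominator; population SD uses n.
    denom = n - 1 if sample else n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / denom)
    return sd / abs(mean) * 100

def relative_range(data):
    """Relative range as a percentage: (range / |mean|) * 100."""
    mean = sum(data) / len(data)
    return (max(data) - min(data)) / abs(mean) * 100
```

For example, `[2, 4, 4, 4, 5, 5, 7, 9]` has mean 5 and population SD 2, so its population CV is exactly 40%.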
Result: CV = 9.39%
Mean = 83.5, Sample SD = 7.84. CV = 7.84 / 83.5 × 100 = 9.39%. This falls in the "low variability" band (<10%), indicating consistent exam scores. In educational assessment, this suggests a well-designed test with appropriate difficulty spread.
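The worked result can be verified in a couple of lines (values taken from the example above):

```python
mean, sample_sd = 83.5, 7.84  # summary statistics from the exam-score example
cv = sample_sd / abs(mean) * 100
print(f"CV = {cv:.2f}%")  # prints "CV = 9.39%"
```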
Standard deviation tells you "how spread out data is in the original units." CV tells you "how spread out data is relative to the average." Use SD when the scale is fixed and understood. Use CV when comparing variability across different scales, units, or magnitudes—for example, comparing measurement precision of height (in cm) versus weight (in kg).
In manufacturing and laboratory settings, CV is the primary acceptance criterion for method validation. Typical thresholds: intra-assay repeatability CV < 5%, inter-assay reproducibility CV < 10%, analytical methods CV < 15%. The FDA, USP, and ISO 17025 all reference CV-based acceptance criteria.
CV is undefined when the mean is zero and misleading when the mean is near zero. For such data, use the Quartile Coefficient of Dispersion (QCD = IQR / (Q1 + Q3)) or the Median Absolute Deviation (MAD). For non-ratio scales (temperature in °C, dates), CV is inappropriate — use absolute measures of spread instead.
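Both robust alternatives are easy to compute with the standard library. A sketch, assuming Python 3.8+ for `statistics.quantiles`; note that the quartile method (exclusive by default) can shift QCD slightly for small samples:

```python
import statistics

def qcd(data):
    """Quartile Coefficient of Dispersion: (Q3 - Q1) / (Q3 + Q1)."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # exclusive method by default
    return (q3 - q1) / (q3 + q1)

def mad(data):
    """Median Absolute Deviation: median of |x - median(data)|."""
    med = statistics.median(data)
    return statistics.median(abs(x - med) for x in data)
```

For `[1, 2, 3, 4, 5, 6, 7]`, Q1 = 2 and Q3 = 6, giving QCD = 4/8 = 0.5; the MAD is 2. Neither blows up when the mean crosses zero, which is exactly the failure mode described above.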
It depends on context. In lab work, CV < 5% is excellent and < 10% is acceptable. In manufacturing quality control, CV < 10% is typical. In social science, CV < 25% is common. In financial returns, CV can exceed 50%. Always compare to standards for your specific domain.
Standard deviation measures absolute spread and depends on the units and scale. CV is a dimensionless percentage, so it works for comparing variability between measurements with different units (e.g., heights in cm vs. weights in kg) or different scales (e.g., a test out of 10 vs. one out of 100).
CV requires a meaningful ratio scale where zero represents "nothing." Data with negative values (like temperatures in Celsius, profit/loss, or z-scores) can produce misleading CVs because the mean might be near zero. Use standard deviation for such data instead.
They're the same thing. Relative Standard Deviation (RSD) is the term used in chemistry and analytical science; Coefficient of Variation (CV) is the general statistics term. Both equal (SD / Mean) × 100%. Some fields report RSD without the × 100 (as a proportion).
Use sample (n−1 denominator) when your data is a subset of a larger group — almost always the case in practice. Use population (n denominator) only when you have data for the entire group of interest.
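The two denominators correspond to `statistics.stdev` (n − 1) and `statistics.pstdev` (n) in Python's standard library; the sample CV is always the larger of the two:

```python
import statistics

data = [4.1, 4.3, 4.0, 4.4, 4.2]
mean = statistics.mean(data)

cv_sample = statistics.stdev(data) / abs(mean) * 100       # n - 1 denominator
cv_population = statistics.pstdev(data) / abs(mean) * 100  # n denominator
```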
Yes. CV > 100% means the standard deviation exceeds the mean, indicating extremely high variability. This can happen with right-skewed distributions (like income data) or when the mean is close to zero. It's unusual but not inherently wrong.
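A toy right-skewed dataset makes this concrete. The numbers below are invented purely for illustration (a handful of modest incomes plus one large outlier):

```python
import statistics

incomes = [20, 25, 30, 35, 40, 500]  # hypothetical, heavily right-skewed
mean = statistics.mean(incomes)
cv = statistics.stdev(incomes) / abs(mean) * 100
# The single outlier inflates the SD past the mean, so cv lands well above 100%.
```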