Calculate post-test probability from sensitivity, specificity, and prevalence with likelihood ratios, confusion matrix, sequential testing, and PPV/NPV sensitivity analysis.
The post-test probability calculator shows how a diagnostic test result changes the probability of disease. Starting from a pre-test probability, it combines sensitivity and specificity to estimate the probability after a positive or negative result.
This is a direct application of Bayes' theorem to medical testing. It shows why positive predictive value and negative predictive value depend heavily on prevalence: the same test can look strong in one setting and weak in another.
Enter the test characteristics and prevalence to see the confusion matrix, likelihood ratios, sequential-testing effect, and how predictive values change across different prevalence levels.
This calculator is useful when the question is not just whether a test is positive, but what that result actually means in context. Prevalence can dominate the answer, so the same sensitivity and specificity can produce very different post-test probabilities.
Showing the confusion matrix, likelihood ratios, and predictive values together makes it easier to see whether a test result is genuinely informative or just looks persuasive on the surface.
LR+ = Sensitivity / (1 − Specificity). LR− = (1 − Sensitivity) / Specificity. Post-test odds = Pre-test odds × LR. PPV = (Sens×Prev) / (Sens×Prev + (1−Spec)×(1−Prev)).
Result (sensitivity 90%, specificity 95%, prevalence 5%): PPV = 48.6%, NPV = 99.4%, LR+ = 18
At 5% prevalence, a positive result raises the probability of disease from 5% to 48.6%, while a negative result lowers it to 0.55%. An LR+ of 18 indicates a strongly informative test: each positive result multiplies the odds of disease by 18.
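The formulas above reproduce this result directly. A minimal sketch in Python (the inputs of 90% sensitivity and 95% specificity are inferred from LR+ = 18 together with the 5% prevalence figure):

```python
# Core formulas from above; sens/spec are inferred from the example
# (LR+ = 18 at 5% prevalence implies sensitivity 90%, specificity 95%).

def lr_pos(sens, spec):
    return sens / (1 - spec)

def ppv(sens, spec, prev):
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec, prev = 0.90, 0.95, 0.05
print(round(lr_pos(sens, spec), 1))                  # 18.0
print(round(ppv(sens, spec, prev) * 100, 1))         # 48.6
print(round(npv(sens, spec, prev) * 100, 1))         # 99.4
print(round((1 - npv(sens, spec, prev)) * 100, 2))   # 0.55
```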
Mass screening for rare diseases suffers from the base rate fallacy. Even with a 99% sensitive and 99% specific test, screening for a disease with 0.1% prevalence produces 10× more false positives than true positives (PPV ≈ 9%). This is why targeted testing based on clinical risk factors is preferred over universal screening.
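The screening arithmetic behind those numbers can be checked in a few lines (a sketch using the 99%/99% test and 0.1% prevalence stated above):

```python
# Screening 100,000 people with a 99% sensitive, 99% specific test
# for a disease at 0.1% prevalence.
n, prev, sens, spec = 100_000, 0.001, 0.99, 0.99

diseased = n * prev                 # 100 people actually have the disease
tp = sens * diseased                # 99 true positives
fp = (1 - spec) * (n - diseased)    # 999 false positives
ppv = tp / (tp + fp)

print(round(fp / tp, 1))            # 10.1 -> ~10x more false than true positives
print(round(ppv * 100, 1))          # 9.0 -> PPV of about 9%
```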
The ROC curve plots sensitivity vs (1−specificity) across all possible test thresholds. The area under the ROC curve (AUC) summarizes overall test performance. This calculator evaluates a single point on the ROC curve — the chosen threshold that determines the specific sensitivity/specificity trade-off.
Sequential (serial) testing, where a positive result is confirmed with a second test, increases specificity at each step but may miss cases. Parallel testing, where either positive counts as an overall positive, increases sensitivity instead. The optimal strategy depends on the relative costs of false positives and false negatives in the clinical context.
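Under the (strong) assumption that the two test applications are conditionally independent, the combined characteristics of each strategy can be sketched as follows, using illustrative single-test values:

```python
sens, spec = 0.90, 0.95   # illustrative single-test characteristics

# Serial: call positive only if both tests are positive.
serial_sens = sens ** 2                  # 0.81   (lower: cases can be missed)
serial_spec = 1 - (1 - spec) ** 2        # 0.9975 (higher)

# Parallel: call positive if either test is positive.
parallel_sens = 1 - (1 - sens) ** 2      # 0.99   (higher)
parallel_spec = spec ** 2                # 0.9025 (lower)
```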
At 1% prevalence, 99 of every 100 people are disease-free. Even a 95% specific test falsely flags 5% of those 99, about 5 people, while there is only about 1 true case to catch. Most positives are therefore false, hence the low PPV.
Sensitivity asks: of those WITH disease, what fraction tests positive? PPV asks: of those who TEST positive, what fraction has disease? Sensitivity is a test property; PPV depends on prevalence.
LR transforms pre-test odds to post-test odds. Convert probability to odds (p/(1−p)), multiply by LR, convert back. LR+ applies to positive results, LR− to negative results.
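The three-step conversion can be written as a small helper (a sketch; `post_test_prob` is a name introduced here, not part of the calculator):

```python
def post_test_prob(pretest_prob, lr):
    odds = pretest_prob / (1 - pretest_prob)   # step 1: probability -> odds
    post_odds = odds * lr                      # step 2: multiply by LR
    return post_odds / (1 + post_odds)         # step 3: odds -> probability

# Values from the example above (sens 90%, spec 95%, prevalence 5%):
lr_plus = 0.90 / (1 - 0.95)          # 18
lr_minus = (1 - 0.90) / 0.95         # ~0.105

print(round(post_test_prob(0.05, lr_plus) * 100, 1))    # 48.6
print(round(post_test_prob(0.05, lr_minus) * 100, 2))   # 0.55
```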
Yes! The same framework applies to any binary classifier: spam detection, quality control, fraud detection. Sensitivity = recall, PPV = precision in machine learning terminology.
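The mapping is direct. With hypothetical confusion-matrix counts for 2,000 cases at 5% prevalence and a 90%/95% classifier:

```python
tp, fp, fn, tn = 90, 95, 10, 1805   # hypothetical counts for illustration

recall = tp / (tp + fn)        # 0.90   -> identical to sensitivity
precision = tp / (tp + fp)     # ~0.486 -> identical to PPV
specificity = tn / (tn + fp)   # 0.95
```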
If the two tests are conditionally independent, measuring different signals, sequential testing multiplies their likelihood ratios. If the same test is simply repeated, the results are correlated, the independence assumption breaks down, and the combined evidence is weaker than the product of the LRs.
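Under that independence assumption, chaining two odds updates is the same as multiplying the LRs first. A sketch (the second test's LR+ of 6 is a hypothetical value):

```python
def update(prob, lr):
    odds = prob / (1 - prob)
    odds *= lr
    return odds / (1 + odds)

p0 = 0.05
p1 = update(p0, 18)        # after the first positive (LR+ = 18)
p2 = update(p1, 6)         # after a second, independent positive (LR+ = 6)

# Identical to a single update with the product of the LRs:
print(abs(p2 - update(p0, 18 * 6)) < 1e-9)   # True
print(round(p2, 3))                          # 0.85
```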
The Fagan nomogram is a graphical tool connecting pre-test probability, likelihood ratio, and post-test probability on three aligned scales. Drawing a straight line from the pre-test probability through the LR reads off the post-test probability. This calculator provides the same information numerically.