A confidence interval is a sample estimate plus or minus a margin of error that gives a likely range for the true population value.
A confidence interval turns one sample result into a range. That range helps you show how much uncertainty sits around your estimate, whether you’re working with a mean, a percentage, a survey result, or a test score. If you only report one number, readers can miss how shaky or steady that number is. The interval gives that missing context.
The good news is that the process is clean once you know the parts. You need a sample statistic, a standard error, a confidence level, and the right critical value. Put those together, and you can build a confidence interval by hand or check whether your calculator or software gave you a sensible result.
What A Confidence Interval Tells You
A confidence interval gives a range of plausible values for a population number. If your sample mean is 72 and your margin of error is 4, your interval runs from 68 to 76. That does not mean the true value moves around inside that range. It means your sample method produced a range that is built to catch the true value a set share of the time over many repeated samples.
That last line matters. A 95% confidence interval does not mean there is a 95% chance the true value is inside this one finished interval. The sample has already been drawn. The interval either covers the truth or it does not. The 95% part refers to the method. Over many random samples, about 95% of those intervals would contain the population value, as explained in NIST’s confidence limits note.
That’s why confidence intervals are used so often in research, polling, quality control, and public health. They show both the estimate and the uncertainty around it in one line.
The Four Pieces You Need Before You Start
Sample Statistic
This is the number you already got from your sample. For a mean, use the sample mean, written as x̄. For a proportion, use the sample proportion, written as p̂. If 54 out of 120 people said yes, then p̂ = 54/120 = 0.45.
Standard Error
The standard error tells you how much your sample statistic tends to vary from sample to sample. It is not the same as the standard deviation. Standard deviation describes spread in the raw data. Standard error describes spread in the estimate.
For a mean, the standard error is:
SE = s / √n
Here, s is the sample standard deviation and n is the sample size.
For a proportion, the standard error is:
SE = √[ p̂(1 - p̂) / n ]
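Both standard error formulas are one-liners in code. Here is a minimal Python sketch; the function names `se_mean` and `se_prop` are illustrative, not from any library:

```python
import math

def se_mean(s, n):
    """Standard error of a sample mean: s / sqrt(n)."""
    return s / math.sqrt(n)

def se_prop(p_hat, n):
    """Standard error of a sample proportion: sqrt(p_hat(1 - p_hat) / n)."""
    return math.sqrt(p_hat * (1 - p_hat) / n)

# Example from the text: 54 yes answers out of 120 gives p_hat = 0.45.
p_hat = 54 / 120
print(se_mean(10, 25))                 # 2.0
print(round(se_prop(p_hat, 120), 4))   # 0.0454
```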
Confidence Level
The confidence level is usually 90%, 95%, or 99%. A higher confidence level gives a wider interval. That happens because you are asking for a range that catches the true value more often.
Critical Value
The critical value is the multiplier that turns the standard error into a margin of error. For many 95% intervals, you’ll see 1.96 used as the z critical value. When you work with a mean and the population standard deviation is not known, the t distribution is used instead. NIST’s confidence interval approach page lays out that setup for means and one-sided or two-sided intervals.
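Two-sided z critical values come from the standard normal distribution, which Python's standard library exposes. This sketch uses `statistics.NormalDist`; t critical values are not in the standard library, so in practice you would look them up in a table or use a package such as SciPy:

```python
from statistics import NormalDist

def z_critical(confidence):
    """Two-sided z critical value for a confidence level like 0.95."""
    alpha = 1 - confidence
    return NormalDist().inv_cdf(1 - alpha / 2)

for level in (0.90, 0.95, 0.99):
    # Yields roughly 1.645, 1.96, and 2.576, the familiar table values.
    print(level, round(z_critical(level), 3))
```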
How To Calculate A Confidence Interval For Means And Proportions
The overall pattern stays the same:
- Find your sample statistic.
- Find the standard error.
- Choose the confidence level.
- Get the matching critical value.
- Compute the margin of error: critical value × standard error.
- Build the interval: estimate ± margin of error.
That’s the whole structure. The only thing that changes is the formula for the standard error and whether you use a z value or a t value.
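Because only the standard error and critical value change, the whole recipe fits in one small function. This is a sketch, not a library routine, and the demo numbers are arbitrary:

```python
def confidence_interval(estimate, standard_error, critical_value):
    """Build (lower, upper) as estimate ± critical_value × standard_error."""
    margin = critical_value * standard_error
    return estimate - margin, estimate + margin

# Arbitrary demo: estimate 50, SE 1.5, 95% z value 1.96.
lo, hi = confidence_interval(50, 1.5, 1.96)
print(round(lo, 2), round(hi, 2))   # 47.06 52.94
```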
Worked Example For A Mean
Say a teacher samples 25 quiz scores. The sample mean is 78, and the sample standard deviation is 10. You want a 95% confidence interval for the class mean.
Start with the standard error:
SE = 10 / √25 = 10 / 5 = 2
Since the population standard deviation is not known, use a t critical value with 24 degrees of freedom. For 95%, that value is about 2.064.
Now find the margin of error:
ME = 2.064 × 2 = 4.128
Build the interval:
78 ± 4.128
So the confidence interval is:
(73.872, 82.128)
Rounded to one decimal place, that’s 73.9 to 82.1.
You can report it like this: “The 95% confidence interval for the mean quiz score is 73.9 to 82.1.” Clean, direct, and easy to read.
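The same worked example can be checked in a few lines of Python. The t critical value is hardcoded from a t table (df = 24, 95%), since the standard library has no t distribution:

```python
import math

x_bar, s, n = 78, 10, 25
se = s / math.sqrt(n)            # 10 / 5 = 2.0
t_crit = 2.064                   # t critical value for 95%, df = 24 (from a table)
margin = t_crit * se             # 4.128
lower, upper = x_bar - margin, x_bar + margin
print(round(lower, 1), round(upper, 1))   # 73.9 82.1
```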
| Situation | Formula | What To Use |
|---|---|---|
| Mean, population SD known | x̄ ± z × (σ/√n) | Use z critical value |
| Mean, population SD unknown | x̄ ± t × (s/√n) | Use t critical value |
| Proportion | p̂ ± z × √[p̂(1-p̂)/n] | Use z critical value |
| 90% interval | Critical value is smaller | Narrower range |
| 95% interval | Common default in reports | Middle-width range |
| 99% interval | Critical value is larger | Wider range |
| Larger sample size | Reduces standard error | Narrower range |
| More variable data | Raises standard error | Wider range |
When To Use Z And When To Use T
This is where many people slip. If you’re building a confidence interval for a mean and the population standard deviation is known, use z. In classwork, that case appears often. In real data work, it’s rare. Most of the time, you do not know the population standard deviation, so you use t.
The t distribution looks like the normal curve but has heavier tails. That makes the interval a bit wider, which fits the extra uncertainty from estimating the standard deviation with sample data. As sample size grows, t and z get closer.
For a proportion, z is the usual choice. The method works best when the sample is random and large enough that both np̂ and n(1-p̂) are not tiny. The CDC’s NHANES reliability notes point out that the common Wald interval for proportions is easy to compute but can misbehave in some cases, especially with small samples or proportions near 0 or 1.
Worked Example For A Proportion
Say 180 people are asked whether they prefer online billing, and 126 say yes. You want a 95% confidence interval for the population proportion.
First find the sample proportion:
p̂ = 126 / 180 = 0.70
Next find the standard error:
SE = √[0.70 × 0.30 / 180] = √(0.21 / 180) = √0.001167 ≈ 0.0342
For a 95% interval, use z = 1.96.
Now the margin of error:
ME = 1.96 × 0.0342 ≈ 0.0670
Build the interval:
0.70 ± 0.0670
That gives:
(0.633, 0.767)
Written as percentages, the 95% confidence interval is 63.3% to 76.7%.
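Here is the proportion example replayed in Python as a quick check of the arithmetic; computing the standard error without intermediate rounding gives the same endpoints to three decimal places:

```python
import math

p_hat = 126 / 180                          # 0.70
se = math.sqrt(p_hat * (1 - p_hat) / 180)  # about 0.0342
z = 1.96                                   # z critical value for 95%
margin = z * se
lower, upper = p_hat - margin, p_hat + margin
print(round(lower, 3), round(upper, 3))    # 0.633 0.767
```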
This is also a good spot to read the interval in plain language. You are not saying that 70% is the exact population result. You are saying your sample puts the likely population percentage in a range from about 63% to 77%.
The width of that range tells a story. Narrow intervals suggest more precision. Wide intervals suggest less precision, often due to smaller samples or noisier data. The CDC’s confidence interval notes make that point clearly when describing why interval width changes.
| Confidence Level | Common Critical Value | Effect On Interval |
|---|---|---|
| 90% | z ≈ 1.645 | Narrower |
| 95% | z ≈ 1.96 | Middle width |
| 99% | z ≈ 2.576 | Wider |
| Small sample mean | t > z | Usually wider |
Common Mistakes That Throw Off The Interval
Mixing Up Standard Deviation And Standard Error
This is the big one. Standard deviation measures spread in the data. Standard error measures spread in the estimate. Use the wrong one, and the interval can be way off.
Using Z For Every Mean
If the population standard deviation is not known, t is the safer pick for a mean. Many students reach for 1.96 out of habit. That works in some settings, not all.
Skipping The Random Sample Check
The formulas rest on a sampling process that gives the data a fair shot at representing the population. If the sample is biased, a neat interval does not fix that problem.
Reading 95% The Wrong Way
A 95% interval is about the method, not a probability statement about the finished interval. This wording trips people up all the time, so it’s worth slowing down and getting it right.
Forgetting That Bigger Samples Shrink The Margin Of Error
If you double or triple sample size, the interval usually gets tighter. That’s one reason polls with tiny samples can look shaky even when the point estimate sounds neat and tidy.
How To Report A Confidence Interval Clearly
Once you’ve done the math, report the estimate, the interval, the confidence level, and the unit. Say what the number refers to. Do not just drop two endpoints on the page and move on.
Here are clean reporting styles:
- “The sample mean was 78, with a 95% confidence interval from 73.9 to 82.1.”
- “The estimated proportion was 70.0%, with a 95% confidence interval from 63.3% to 76.7%.”
- “Average wait time was 14.2 minutes (95% CI, 12.8 to 15.6).”
If you’re writing for a broad audience, one extra sentence helps: “The interval shows the range of values that fit the sample data under this method.” That keeps the wording honest without sounding stiff.
A Fast Way To Check Whether Your Answer Makes Sense
Before you trust the final interval, give it a quick sense check. The midpoint should be your sample estimate. The margin above and below should match. A 99% interval should be wider than a 95% interval from the same data. A larger sample should usually pull the interval inward. If any of those checks fail, stop and retrace the formula.
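The first two checks above, midpoint equals the estimate and equal margins on both sides, are easy to automate. A minimal sketch, with an illustrative function name of my own:

```python
def sanity_check(estimate, lower, upper, tol=1e-9):
    """Return True if the interval is centered on the estimate with equal margins."""
    midpoint_ok = abs((lower + upper) / 2 - estimate) < tol
    symmetric_ok = abs((estimate - lower) - (upper - estimate)) < tol
    return midpoint_ok and symmetric_ok

print(sanity_check(78, 73.872, 82.128))   # True: the worked mean example
print(sanity_check(78, 70.0, 82.0))       # False: midpoint is 76, not 78
```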
Confidence intervals can look technical on the page, yet the logic is plain: start with a sample estimate, measure how much it can wobble, then build a range around it. Once that pattern clicks, the whole topic gets lighter.
References & Sources
- NIST. “1.3.5.2. Confidence Limits for the Mean.” Defines confidence intervals and explains the long-run meaning of a 95% confidence method.
- NIST. “7.2.2.1. Confidence Interval Approach.” Shows the general confidence interval formulas for means and one-sided or two-sided intervals.
- CDC. “NHANES Tutorials – Reliability of Estimates Module.” Explains confidence intervals for proportions and notes limits of common default methods.
- CDC. “Confidence Intervals | U.S. Cancer Statistics.” Explains how interval width reflects precision and why wider intervals point to more uncertainty.