Sample size determination

Fundamentals - Sizing Up a Study

  • Statistical Power (1-β): Probability of detecting a true effect if it exists. Conventionally set to ≥80%.
  • Errors in Hypothesis Testing:
    • Type I Error (α): False positive. Incorrectly rejecting a true null hypothesis. Threshold: $p < 0.05$.
    • Type II Error (β): False negative. Incorrectly failing to reject a false null hypothesis.
  • Core Determinants of Sample Size:
    • Effect Size: Magnitude of the difference to be detected. ↑ effect size → ↓ required sample size.
    • Precision: A narrower confidence interval requires ↑ sample size.

Power, Effect Size, and Hypothesis Testing Distributions

⭐ To detect a smaller effect size, a much larger sample size is required to maintain the same statistical power.

  • Practical & Ethical Constraints: Limited resources (funding, time) and the ethical need to not expose excessive participants to potential harm or ineffective treatment constrain sample size.
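
The link between sample size, power, and Type II error can also be seen by simulation. Below is a minimal sketch (the true difference, standard deviation, and per-group sizes are hypothetical numbers chosen for illustration) that estimates how often a two-sample t-test detects a real difference:

```python
# Estimate power by simulation: the fraction of repeated experiments in
# which a two-sample t-test detects a true difference between groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
true_diff, sigma, alpha = 5, 10, 0.05

def detection_rate(n_per_group, trials=2000):
    hits = 0
    for _ in range(trials):
        a = rng.normal(0, sigma, n_per_group)
        b = rng.normal(true_diff, sigma, n_per_group)
        hits += ttest_ind(a, b).pvalue < alpha
    return hits / trials

print(detection_rate(15))   # underpowered: misses the effect often (high Type II error rate)
print(detection_rate(63))   # adequately sized: detection rate near the conventional 80% power
```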

Key Inputs - The Power Players

  • Power ($1-β$): The probability of correctly detecting a true effect (rejecting a false null hypothesis). 📌 You need POWER to detect a difference.

    • Typically set at 80% (or 0.8).
    • Higher power requires a larger sample size.
    • Associated with Z-score $Z_β$.
  • Significance Level ($α$): The probability of a Type I error (incorrectly rejecting a true null hypothesis).

    • Typically set at 0.05.
    • Lower $α$ requires a larger sample size.
    • Associated with Z-score $Z_α$.
  • Effect Size: The magnitude of the difference you want to detect.

    • A smaller effect size requires a larger sample size to detect.
  • Variability: The spread of the data, measured by standard deviation ($σ$).

    • Higher variability requires a larger sample size.

⭐ As power increases, the required sample size increases; as effect size decreases, the required sample size also increases (illustrated in the sketch below).
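
A minimal sketch of these relationships, using statsmodels' power calculator for a two-sample t-test (assuming statsmodels is installed; the power and effect-size values below are illustrative choices, with effect size expressed as Cohen's d):

```python
# Required sample size per group for a two-sample t-test at several
# power / effect-size settings (alpha fixed at 0.05, two-sided).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for power in (0.80, 0.90):
    for d in (0.50, 0.25):          # Cohen's d = (mu1 - mu2) / sigma
        n = analysis.solve_power(effect_size=d, alpha=0.05,
                                 power=power, alternative='two-sided')
        print(f"power={power:.0%}, d={d:.2f} -> n per group = {n:.0f}")

# Raising power from 80% to 90% increases n; halving the effect size
# roughly quadruples it.
```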

The Formula - Crunching the Numbers

  • Core Equation (for two means):
    • $n = \frac{2(Z_{\alpha/2} + Z_{\beta})^2 \sigma^2}{(\mu_1 - \mu_2)^2}$
    • $n$ = sample size per group.
    • $\sigma$ = standard deviation (variability).
    • $\mu_1 - \mu_2$ = expected difference between group means (effect size).
    • $Z_{\alpha/2}$ = critical value for alpha (e.g., 1.96 for a two-sided $\alpha = 0.05$, i.e., 95% CI).
    • $Z_{\beta}$ = critical value for power (e.g., 0.84 for 80% power).

⭐ Power is the probability of correctly rejecting a false null hypothesis (1 - β). Conventionally set at 80%.
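
As a quick check of the core equation, here is a minimal sketch in Python (SciPy supplies the normal quantiles $Z_{\alpha/2}$ and $Z_{\beta}$); the values of $\sigma$ and $\mu_1 - \mu_2$ are hypothetical numbers chosen for illustration:

```python
# Sample size per group for comparing two means:
# n = 2 * (Z_{alpha/2} + Z_beta)^2 * sigma^2 / (mu1 - mu2)^2
from math import ceil
from scipy.stats import norm

def n_per_group(sigma, diff, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05 (two-sided)
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / diff ** 2)

# Example: sigma = 10, expected difference mu1 - mu2 = 5
print(n_per_group(sigma=10, diff=5))    # 63 participants per group
```

The result is rounded up to the next whole participant per group, since a fraction of a participant cannot be enrolled.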

  • Key Relationships:
    • ↑ Power (1−β) → ↑ required sample size.
    • ↓ Significance level (α, e.g., 0.05 → 0.01) → ↑ required sample size.
    • ↓ Effect size (μ₁ − μ₂) → ↑ required sample size.
    • ↑ Variability (σ) → ↑ required sample size.

High‑Yield Points - ⚡ Biggest Takeaways

  • Sample size is set to ensure a study can detect a true effect if one exists.
  • It's primarily determined by power (1-β), significance level (α), effect size, and population variability.
  • A larger sample is required for higher power, a stricter alpha (e.g., 0.01), a smaller effect size, or greater variability.
  • An inadequate sample size leads to an underpowered study, increasing the risk of a Type II error (false negative).

Practice Questions: Sample size determination

Test your understanding with these related questions

You are reading through a recent article that reports significant decreases in all-cause mortality for patients with malignant melanoma following treatment with a novel biological infusion. Which of the following choices refers to the probability that a study will find a statistically significant difference when one truly does exist?

Flashcards: Sample size determination

_____ risk reduction is the proportion of risk reduction attributable to an intervention compared to a control

Relative
