Reporting standards in medical journals US Medical PG Practice Questions and MCQs
Practice US Medical PG questions for Reporting standards in medical journals. These multiple choice questions (MCQs) cover important concepts and help you prepare for your exams.
Question 1: A study is funded by the tobacco industry to examine the association between smoking and lung cancer. They design a study with a prospective cohort of 1,000 smokers between the ages of 20-30. The length of the study is five years. After the study period ends, they conclude that there is no relationship between smoking and lung cancer. Which of the following study features is the most likely reason for the failure of the study to note an association between tobacco use and cancer?
- A. Late-look bias
- B. Latency period (Correct Answer)
- C. Confounding
- D. Effect modification
- E. Pygmalion effect
Explanation: ***Latency period***
- **Lung cancer** typically has a **long latency period**, often **20-30+ years**, between initial exposure to tobacco carcinogens and the development of clinically detectable disease.
- A **five-year study duration** in young smokers (ages 20-30) is **far too short** to observe the development of lung cancer, which explains the false negative finding.
- This represents a **fundamental flaw in study design** rather than a bias—the biological timeline of disease development was not adequately considered.
*Late-look bias*
- **Late-look bias** occurs when a study enrolls participants who have already survived the early high-risk period of a disease, leading to **underestimation of true mortality or incidence**.
- Also called **survival bias**, it involves studying a population that has already been "selected" by survival.
- This is not applicable here, as the study simply ended before sufficient time elapsed for disease to develop.
*Confounding*
- **Confounding** occurs when a third variable is associated with both the exposure and outcome, distorting the apparent relationship between them.
- While confounding can affect study results, it would not completely eliminate the detection of a strong, well-established association like smoking and lung cancer in a properly conducted prospective cohort study.
- The issue here is temporal (insufficient follow-up time), not the presence of an unmeasured confounder.
*Effect modification*
- **Effect modification** (also called interaction) occurs when the magnitude of an association between exposure and outcome differs across levels of a third variable.
- This represents a **true biological phenomenon**, not a study design flaw or bias.
- It would not explain the complete failure to detect any association.
*Pygmalion effect*
- The **Pygmalion effect** (observer-expectancy effect) refers to a psychological phenomenon where higher expectations lead to improved performance in the observed subjects.
- This concept is relevant to **behavioral and educational research**, not to objective epidemiological studies of disease incidence.
- It has no relevance to the biological relationship between carcinogen exposure and cancer development.
Question 2: A group of 100 medical students took an end-of-year exam. The mean score on the exam was 70%, with a standard deviation of 25%. The professor states that a student's score must be within the 95% confidence interval of the mean to pass the exam. Which of the following is the minimum score a student can have to pass the exam?
- A. 45%
- B. 63.75%
- C. 67.5%
- D. 20%
- E. 65% (Correct Answer)
Explanation: ***65%***
- To find the **95% confidence interval (CI) of the mean**, we use the formula: Mean ± (Z-score × Standard Error). For a 95% CI, the Z-score is approximately **1.96**.
- The **Standard Error (SE)** is calculated as SD/√n, where n is the sample size (100 students). So, SE = 25%/√100 = 25%/10 = **2.5%**.
- The 95% CI is 70% ± (1.96 × 2.5%) = 70% ± 4.9%, giving a lower bound of 70% - 4.9% = **65.1%**. Using the common approximation Z ≈ 2, the lower bound is 70% - (2 × 2.5%) = exactly **65%**, which is the closest answer choice and the minimum passing score.
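The arithmetic above can be checked in a few lines of Python, using the figures given in the question and Z = 1.96:

```python
import math

mean = 70.0   # mean exam score (%)
sd = 25.0     # standard deviation (%)
n = 100       # number of students
z = 1.96      # Z-score for a 95% confidence interval

se = sd / math.sqrt(n)   # standard error = 25 / 10 = 2.5
lower = mean - z * se    # 70 - 4.9 = 65.1
upper = mean + z * se    # 70 + 4.9 = 74.9

print(f"SE = {se}, 95% CI = ({lower:.1f}%, {upper:.1f}%)")
```

Note that the CI narrows with the square root of the sample size: with 400 students instead of 100, the SE would halve to 1.25% and the interval would shrink accordingly.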
*45%*
- This value is significantly lower than the calculated lower bound of the 95% confidence interval (approximately 65.1%).
- It would represent a score far outside the defined passing range.
*63.75%*
- This value falls below the calculated lower bound of the 95% confidence interval (approximately 65.1%).
- While close, this score would not meet the professor's criterion for passing.
*67.5%*
- This value is within the 95% confidence interval (65.1% to 74.9%) but is **not the minimum score**.
- Lower scores within the interval would still qualify as passing.
*20%*
- This score is extremely low and falls significantly outside the 95% confidence interval for a mean of 70%.
- It would indicate performance far below the defined passing threshold.
Question 3: You are conducting a study comparing the efficacy of two different statin medications. Two groups are placed on different statin medications, statin A and statin B. Baseline LDL levels are drawn for each group and are subsequently measured every 3 months for 1 year. Average baseline LDL levels for each group were identical. The group receiving statin A exhibited an 11 mg/dL greater reduction in LDL in comparison to the statin B group. Your statistical analysis reports a p-value of 0.052. Which of the following best describes the meaning of this p-value?
- A. There is a 95% chance that the difference in reduction of LDL observed reflects a real difference between the two groups
- B. Though A is more effective than B, there is a 5% chance the difference in reduction of LDL between the two groups is due to chance
- C. If 100 permutations of this experiment were conducted, 5 of them would show similar results to those described above
- D. This is a statistically significant result
- E. There is a 5.2% chance of observing a difference in reduction of LDL of 11 mg/dL or greater even if the two medications have identical effects (Correct Answer)
Explanation: ***There is a 5.2% chance of observing a difference in reduction of LDL of 11 mg/dL or greater even if the two medications have identical effects***
- The **p-value** represents the probability of observing results as extreme as, or more extreme than, the observed data, assuming the **null hypothesis** is true (i.e., there is no true difference between the groups).
- A p-value of 0.052 means there's approximately a **5.2% chance** that the observed 11 mg/dL difference (or a more substantial difference) occurred due to **random variation**, even if both statins were equally effective.
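One way to internalize this definition is a simulation under the null hypothesis: generate many trials in which the two statins truly have identical effects, and count how often sampling variation alone produces a mean difference of 11 mg/dL or more. The group size and standard deviation below are illustrative assumptions, not values from the question:

```python
import random

random.seed(42)

N_SIMS = 2000         # number of simulated trials
N_PER_GROUP = 50      # hypothetical patients per arm (assumption)
SD = 30.0             # hypothetical SD of LDL reduction, mg/dL (assumption)
OBSERVED_DIFF = 11.0  # difference reported in the study

def mean_reduction():
    # Both arms are drawn from the SAME distribution: the null is true.
    return sum(random.gauss(40.0, SD) for _ in range(N_PER_GROUP)) / N_PER_GROUP

# Fraction of null trials showing a difference at least as extreme as observed
extreme = sum(
    1 for _ in range(N_SIMS)
    if abs(mean_reduction() - mean_reduction()) >= OBSERVED_DIFF
)
p_sim = extreme / N_SIMS
print(f"Simulated two-sided p = {p_sim:.3f}")
```

Under these assumed numbers, the standard error of the difference is 30 × √(2/50) = 6 mg/dL, so an 11 mg/dL difference corresponds to z ≈ 1.83 and a two-sided p near 0.07; the simulation should land in that neighborhood. The point is the logic, not the exact value.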
*There is a 95% chance that the difference in reduction of LDL observed reflects a real difference between the two groups*
- This statement is an incorrect interpretation of the p-value; it confuses the p-value with the **probability that the alternative hypothesis is true**.
- A p-value does not directly tell us the probability that the observed difference is "real" or due to the intervention being studied.
*Though A is more effective than B, there is a 5% chance the difference in reduction of LDL between the two groups is due to chance*
- This statement implies that Statin A is more effective, which cannot be concluded with a p-value of 0.052 if the significance level (alpha) was set at 0.05.
- While it's true there's a chance the difference is due to chance, claiming A is "more effective" based on this p-value before statistical significance is usually declared is misleading.
*If 100 permutations of this experiment were conducted, 5 of them would show similar results to those described above*
- This is an incorrect interpretation. The p-value does not predict the outcome of repeated experiments in this manner.
- It refers to the **probability under the null hypothesis in a single experiment**, not the frequency of results across multiple hypothetical repetitions.
*This is a statistically significant result*
- A p-value of 0.052 is generally considered **not statistically significant** if the conventional alpha level (significance level) is set at 0.05 (or 5%).
- For a result to be statistically significant at alpha = 0.05, the p-value must be **less than 0.05**.
Question 4: A researcher is conducting a study to compare fracture risk in male patients above the age of 65 who received annual DEXA screening to peers who did not receive screening. He conducts a randomized controlled trial in 900 patients, with half of participants assigned to each experimental group. The researcher ultimately finds similar rates of fractures in the two groups. He then notices that he had forgotten to include 400 patients in his analysis. Including the additional participants in his analysis would most likely affect the study's results in which of the following ways?
- A. Wider confidence intervals of results
- B. Increased probability of committing a type II error
- C. Decreased significance level of results
- D. Increased external validity of results
- E. Increased probability of rejecting the null hypothesis when it is truly false (Correct Answer)
Explanation: ***Increased probability of rejecting the null hypothesis when it is truly false***
- Including more participants increases the **statistical power** of the study, making it more likely to detect a true effect if one exists.
- A higher sample size provides a more precise estimate of the population parameters, leading to a greater ability to **reject a false null hypothesis**.
*Wider confidence intervals of results*
- A larger sample size generally leads to **narrower confidence intervals**, as it reduces the standard error of the estimate.
- Narrower confidence intervals indicate **greater precision** in the estimation of the true population parameter.
*Increased probability of committing a type II error*
- A **Type II error** (false negative) occurs when a study fails to reject a false null hypothesis.
- Increasing the sample size typically **reduces the probability of a Type II error** because it increases statistical power.
*Decreased significance level of results*
- The **significance level (alpha)** is a pre-determined threshold set by the researcher before the study begins, typically 0.05.
- It is independent of sample size and represents the **acceptable probability of committing a Type I error** (false positive).
*Increased external validity of results*
- **External validity** refers to the generalizability of findings to other populations, settings, or times.
- While a larger sample size can enhance the representativeness of the study population, external validity is primarily determined by the **sampling method** and the study's design context, not just sample size alone.
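The effect of sample size on power can be demonstrated by simulation. The fracture rates below (12% vs. 8%) are invented for illustration, and the split assumes 450 patients per arm before and 650 per arm after the forgotten patients are added — one reading of the vignette. The code counts how often a simple two-proportion z-test reaches significance at each sample size:

```python
import math
import random

random.seed(0)

def significant(n, p1, p2, z_crit=1.96):
    """Simulate one trial of n patients per arm; two-proportion z-test."""
    x1 = sum(random.random() < p1 for _ in range(n))
    x2 = sum(random.random() < p2 for _ in range(n))
    pooled = (x1 + x2) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    return se > 0 and abs(x1 / n - x2 / n) / se > z_crit

def power(n, sims=1000):
    # Power = fraction of simulated trials that detect the true difference
    return sum(significant(n, 0.12, 0.08) for _ in range(sims)) / sims

small = power(450)  # 450 per arm (the 900 analyzed patients)
large = power(650)  # 650 per arm (after adding the forgotten 400)
print(f"power @ 450/arm = {small:.2f}, power @ 650/arm = {large:.2f}")
```

With a real underlying difference, the larger analysis rejects the null more often — exactly the "increased probability of rejecting the null hypothesis when it is truly false" the correct answer describes.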
Question 5: An investigator is measuring the blood calcium level in a sample of female cross country runners and a control group of sedentary females. If she would like to compare the means of the two groups, which statistical test should she use?
- A. Chi-square test
- B. Linear regression
- C. t-test (Correct Answer)
- D. ANOVA (Analysis of Variance)
- E. F-test
Explanation: ***t-test***
- A **t-test** is appropriate for comparing the means of two independent groups, such as the blood calcium levels between runners and sedentary females.
- It assesses whether the observed difference between the two sample means is statistically significant or occurred by chance.
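A minimal sketch of the computation, using hypothetical calcium values (the data below are invented for illustration). Welch's form of the two-sample t statistic is shown because it does not assume equal variances:

```python
import math
import statistics as st

# Hypothetical serum calcium levels (mg/dL) — illustrative values only.
runners   = [9.1, 8.9, 9.4, 9.0, 8.8, 9.2, 9.3, 8.7]
sedentary = [9.5, 9.6, 9.3, 9.8, 9.4, 9.7, 9.5, 9.6]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    se = math.sqrt(st.variance(a) / len(a) + st.variance(b) / len(b))
    return (st.mean(a) - st.mean(b)) / se

t = welch_t(runners, sedentary)
print(f"t = {t:.2f}")  # a large |t| suggests the group means truly differ
```

In practice the whole test, including the p-value, is one call to `scipy.stats.ttest_ind(runners, sedentary, equal_var=False)`.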
*Chi-square test*
- The **chi-square test** is used to analyze categorical data to determine if there is a significant association between two variables.
- It is not suitable for comparing continuous variables like blood calcium levels.
*Linear regression*
- **Linear regression** is used to model the relationship between a dependent variable (outcome) and one or more independent variables (predictors).
- It aims to predict the value of a variable based on the value of another, rather than comparing means between groups.
*ANOVA (Analysis of Variance)*
- **ANOVA** is used to compare the means of **three or more independent groups**.
- With exactly two groups, ANOVA is mathematically equivalent to the t-test (F = t²), but the t-test is the standard, more direct choice for a two-group comparison.
*F-test*
- The **F-test** is primarily used to compare the variances of two populations or to assess the overall significance of a regression model.
- While it is the basis for ANOVA, it is not the direct test for comparing the means of two groups.
Question 6: You are reading through a recent article that reports significant decreases in all-cause mortality for patients with malignant melanoma following treatment with a novel biological infusion. Which of the following choices refers to the probability that a study will find a statistically significant difference when one truly does exist?
- A. Type II error
- B. Type I error
- C. Confidence interval
- D. p-value
- E. Power (Correct Answer)
Explanation: ***Power***
- **Power** is the probability that a study will correctly reject the null hypothesis when it is, in fact, false (i.e., will find a statistically significant difference when one truly exists).
- A study with high power minimizes the risk of a **Type II error** (failing to detect a real effect).
*Type II error*
- A **Type II error** (or **beta error**) occurs when a study fails to reject a false null hypothesis, meaning it concludes there is no significant difference when one actually exists.
- This is the **opposite** of what the question describes, which asks for the probability of *finding* a difference.
*Type I error*
- A **Type I error** (or **alpha error**) occurs when a study incorrectly rejects a true null hypothesis, concluding there is a significant difference when one does not actually exist.
- This relates to the **p-value** and the level of statistical significance (e.g., p < 0.05).
*Confidence interval*
- A **confidence interval** provides a range of values within which the true population parameter is likely to lie with a certain degree of confidence (e.g., 95%).
- It does not directly represent the probability of finding a statistically significant difference when one truly exists.
*p-value*
- The **p-value** is the probability of observing data as extreme as, or more extreme than, that obtained in the study, assuming the null hypothesis is true.
- It is used to determine statistical significance, but it is not the probability of detecting a true effect.
Question 7: A biostatistician is processing data for a large clinical trial she is working on. The study is analyzing the use of a novel pharmaceutical compound for the treatment of anorexia after chemotherapy, with the outcome of interest being the change in weight while taking the drug. While most participants remained about the same weight or continued to lose weight while on chemotherapy, there were smaller groups of individuals who responded very positively to the orexic agent. As a result, the data had a strong positive skew. The biostatistician wishes to report the measures of central tendency for this project. Just by understanding the skew in the data, which of the following can be expected for this data set?
- A. Mean = median = mode
- B. Mean < median < mode
- C. Mean > median > mode (Correct Answer)
- D. Mean > median = mode
- E. Mean < median = mode
Explanation: ***Mean > median > mode***
- In a dataset with a **strong positive skew**, the tail of the distribution is on the right, pulled by a few **unusually large values**.
- These extreme high values disproportionately influence the **mean**, pulling it to the right (higher value), while the **median** (middle value) is less affected, and the **mode** (most frequent value) is often located at the peak of the distribution towards the left.
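The ordering is easy to verify on a toy dataset. The values below are invented, but they mimic the vignette: most participants cluster at small weight changes while a few strong responders form a long right tail:

```python
import statistics as st

# Hypothetical weight changes: most values cluster low,
# a few large responders drag the tail to the right (positive skew).
weight_change = [1, 2, 2, 2, 3, 3, 4, 10, 50]

mean = st.mean(weight_change)      # pulled right by the outliers
median = st.median(weight_change)  # middle value, robust to outliers
mode = st.mode(weight_change)      # most frequent value, at the peak

print(f"mean={mean:.1f}, median={median}, mode={mode}")
assert mean > median > mode        # the positive-skew signature
```

Here the two outliers (10 and 50) inflate the mean to roughly 8.6 while the median stays at 3 and the mode at 2 — the classic positive-skew ordering.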
*Mean = median = mode*
- This relationship between the measures of central tendency is characteristic of a **perfectly symmetrical distribution**, such as a **normal distribution**, where there is no skew.
- In a symmetrical distribution, the mean, median, and mode are all located at the exact center of the data.
*Mean < median < mode*
- This order is typical for a dataset with a **negative skew**, where the tail is on the left due to a few **unusually small values**.
- In a negatively skewed distribution, the mean is pulled to the left (lower value) by the small values, making it less than the median and mode.
*Mean > median = mode*
- This configuration is generally not characteristic of standard skewed distributions and would imply a specific, less common bimodal or complex distribution shape where the mode coincides with the median, but the mean is pulled higher.
- While theoretically possible, it doesn't describe a typical positively skewed distribution where the mode is usually the lowest of the three.
*Mean < median = mode*
- This relationship would suggest a negatively skewed distribution where the median and mode are equal, but the mean is pulled to the left (lower value) by a leftward tail.
- Again, this is a less typical representation of a standard negatively skewed distribution, which often follows the Mean < Median < Mode pattern.
Question 8: A research group wants to assess the safety and toxicity profile of a new drug. A clinical trial is conducted with 20 volunteers to estimate the maximum tolerated dose and monitor the apparent toxicity of the drug. The study design is best described as which of the following phases of a clinical trial?
- A. Phase 0
- B. Phase III
- C. Phase V
- D. Phase II
- E. Phase I (Correct Answer)
Explanation: ***Phase I***
- **Phase I clinical trials** involve a small group of healthy volunteers (typically 20-100) to primarily assess **drug safety**, determine a safe dosage range, and identify side effects.
- The main goal is to establish the **maximum tolerated dose (MTD)** and evaluate the drug's pharmacokinetic and pharmacodynamic profiles.
*Phase 0*
- **Phase 0 trials** are exploratory studies conducted in a very small number of subjects (10-15) to gather preliminary data on a drug's **pharmacodynamics and pharmacokinetics** in humans.
- They involve microdoses, not intended to have therapeutic effects, and thus cannot determine toxicity or MTD.
*Phase III*
- **Phase III trials** are large-scale studies involving hundreds to thousands of patients to confirm the drug's **efficacy**, monitor side effects, compare it to standard treatments, and collect information that will allow the drug to be used safely.
- These trials are conducted after safety and initial efficacy have been established in earlier phases.
*Phase V*
- "Phase V" is not a standard, recognized phase in the traditional clinical trial classification (Phase 0, I, II, III, IV).
- This term might be used in some non-standard research contexts or for post-marketing studies that go beyond Phase IV surveillance, but it is not a formal phase for initial drug development.
*Phase II*
- **Phase II trials** involve several hundred patients with the condition the drug is intended to treat, focusing on **drug efficacy** and further evaluating safety.
- While safety is still monitored, the primary objective shifts to determining if the drug works for its intended purpose and at what dose.
Question 9: A pharmaceutical corporation is developing a research study to evaluate a novel blood test to screen for breast cancer. They enrolled 800 patients in the study, half of which have breast cancer. The remaining enrolled patients are age-matched controls who do not have the disease. Of those in the diseased arm, 330 are found positive for the test. Of the patients in the control arm, only 30 are found positive. What is this test’s sensitivity?
- A. 330 / (330 + 30)
- B. 330 / (330 + 70) (Correct Answer)
- C. 370 / (30 + 370)
- D. 370 / (70 + 370)
- E. 330 / (400 + 400)
Explanation: ***330 / (330 + 70)***
- **Sensitivity** measures the proportion of actual **positives** that are correctly identified as such.
- In this study, there are **400 diseased patients** (half of 800). Of these, 330 tested positive (true positives), meaning 70 tested negative (false negatives). So sensitivity is **330 / (330 + 70)**.
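The full 2×2 table from the vignette can be laid out in a few lines, which also makes the distractor formulas easy to identify:

```python
# 2x2 table from the vignette: 400 diseased patients, 400 controls.
TP = 330        # diseased, test positive (true positives)
FN = 400 - TP   # diseased, test negative -> 70 false negatives
FP = 30         # controls, test positive (false positives)
TN = 400 - FP   # controls, test negative -> 370 true negatives

sensitivity = TP / (TP + FN)  # 330/400 = 0.825 (the correct answer)
specificity = TN / (TN + FP)  # 370/400 = 0.925 (distractor C)
ppv = TP / (TP + FP)          # 330/360 (distractor A)
npv = TN / (TN + FN)          # 370/440 (distractor D)

print(f"Sens={sensitivity:.3f}  Spec={specificity:.3f}  "
      f"PPV={ppv:.3f}  NPV={npv:.3f}")
```

Note that sensitivity and specificity use only the disease-status column totals, so they do not depend on the 50% prevalence built into this study, whereas PPV and NPV do.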
*330 / (330 + 30)*
- This calculation represents the **positive predictive value**, which is the probability that subjects with a positive screening test truly have the disease. It uses **true positives / (true positives + false positives)**.
- It does not correctly calculate **sensitivity**, which requires knowing the total number of diseased individuals.
*370 / (30 + 370)*
- This expression calculates **specificity**: the proportion of actual negatives correctly identified, i.e., **true negatives / (true negatives + false positives)** = 370 / (370 + 30).
- With 30 false positives among the 400 controls, there are 370 true negatives, so this is a valid specificity calculation — but the question asks for **sensitivity**.
*370 / (70 + 370)*
- This formula calculates the **negative predictive value (NPV)**: **true negatives / (true negatives + false negatives)** = 370 / (370 + 70).
- The NPV is the probability that a patient with a negative test is truly disease-free; it is not the test's **sensitivity**.
*330 / (400 + 400)*
- This calculation attempts to divide true positives by the total study population (800 patients).
- This metric represents the **prevalence of true positives within the entire study cohort**, not the test's **sensitivity**.
Question 10: Two research groups independently study the same genetic variant's association with diabetes. Study A (n=5,000) reports OR=1.25, 95% CI: 1.05-1.48, p=0.01. Study B (n=50,000) reports OR=1.08, 95% CI: 1.02-1.14, p=0.006. Both studies are methodologically sound. Synthesize these findings to determine the most likely true effect and evaluate implications for clinical and research interpretation.
- A. Study B is definitive because of its larger sample size and should replace Study A's findings
- B. The study with the lower p-value (Study B) is automatically more reliable
- C. The studies are contradictory and no conclusions can be drawn
- D. Study A is correct because it was published first
- E. The true effect is likely modest (closer to Study B's estimate); Study A likely overestimated due to smaller sample size, but both show statistical significance with clinically marginal effects (Correct Answer)
Explanation: ***The true effect is likely modest (closer to Study B's estimate); Study A likely overestimated due to smaller sample size, but both show statistical significance with clinically marginal effects***
- Study B has significantly higher **statistical power** and **precision** (narrower 95% CI) due to its larger sample size, making its **odds ratio (OR)** estimate more reliable.
- Smaller initial studies often exhibit the **Winner's Curse**, where effect sizes are **overestimated** to reach the threshold for statistical significance.
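The Winner's Curse can be seen directly in a simulation: when the true effect is small and each study is underpowered, only inflated estimates cross the significance threshold, so "published" (significant) results systematically overstate the effect. The effect size, SD, and sample size below are arbitrary illustrations, not values from the two studies:

```python
import math
import random

random.seed(1)

TRUE_EFFECT = 0.2  # small true mean difference (arbitrary units)
SD = 1.0
N = 50             # per-arm sample size -> an underpowered study
SE = SD * math.sqrt(2 / N)  # standard error of the estimated difference

published = []
for _ in range(5000):
    # One simulated study's estimate of the true difference
    est = random.gauss(TRUE_EFFECT, SE)
    if est / SE > 1.96:       # "published" only if significantly positive
        published.append(est)

inflated = sum(published) / len(published)
print(f"true effect = {TRUE_EFFECT}, mean published estimate = {inflated:.2f}")
```

Every estimate that clears the threshold must exceed 1.96 × SE ≈ 0.39, nearly double the true effect of 0.2 — which is why small significant studies like Study A tend to shrink toward the truth on large-scale replication.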
*Study A is correct because it was published first*
- **Publication order** does not determine the scientific validity or accuracy of genetic association studies.
- Early studies are more prone to **random error** and inflated effect sizes compared to later, larger-scale replications.
*Study B is definitive because of its larger sample size and should replace Study A's findings*
- While Study B is more **precise**, both studies are directionally consistent and both show **statistical significance** (p < 0.05).
- Scientific evidence is **cumulative**; Study B refines and confirms the existence of an association rather than declaring Study A's findings as entirely false.
*The studies are contradictory and no conclusions can be drawn*
- The studies are not contradictory because both **confidence intervals** show an OR > 1.0, and both reach **statistical significance**.
- Both groups found the same **direction of effect**, suggesting a real albeit modest genetic association with diabetes.
*The study with the lower p-value (Study B) is automatically more reliable*
- Reliability depends on **methodological rigor** and **precision**, whereas the p-value is heavily influenced by **sample size**.
- A lower p-value indicates stronger evidence against the **null hypothesis** but does not inherently mean the study is free from bias or more reliable in its effect estimate.