The central value of a set of 180 values can be obtained by which of the following?
What is the correct formula for calculating the positive predictive value (PPV) of a screening test?
The fundamental equilibrium principle of population genetics was given by?
Which scale is also known as the Cumulative Scale?
In the context of hypothesis testing, what does statistical power refer to?
What is the effect of increasing the confidence level in hypothesis testing?
A scatter diagram is drawn to study the relationship between two quantitative variables. What does it primarily illustrate?
What correlation coefficient indicates the strongest positive correlation between two variables?
Which of the following describes the purpose of ICD-10 codes?
Which of the following is true about Simple Random Sampling?
Explanation: ***2nd quartile***
- The **2nd quartile** is equivalent to the **median**, which represents the central value of a dataset.
- For 180 values, the median (Q2) lies midway between the 90th and 91st ordered values, dividing the data into two equal halves.

*2nd tertile*
- The 2nd tertile divides the data into three equal parts; it is the value below which two-thirds of the data lie, not the central value.
- For 180 values, the 2nd tertile falls around the 120th ordered value (2/3 of 180), well away from the center.

*80th percentile*
- The 80th percentile indicates that 80% of the data falls below this value.
- It marks a specific position in the upper portion of the data, not the central tendency.

*9th decile*
- The 9th decile represents the value below which 90% of the data falls.
- This value lies very high in the dataset and does not represent the central value.
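To make these positions concrete, here is a minimal Python sketch (using NumPy and a hypothetical ordered dataset of 180 values, chosen purely for illustration) showing where the median, 2nd tertile, 80th percentile, and 9th decile fall.

```python
import numpy as np

# Hypothetical dataset: the ordered values 1, 2, ..., 180 (illustration only)
values = np.arange(1, 181)

median = np.median(values)                      # Q2: midway between the 90th and 91st values -> 90.5
second_tertile = np.percentile(values, 200 / 3) # ~66.7th percentile, near the 120th ordered value
p80 = np.percentile(values, 80)                 # 80th percentile, in the upper part of the data
ninth_decile = np.percentile(values, 90)        # 9th decile: 90% of values lie below it

print(median, second_tertile, p80, ninth_decile)
```

Only the median sits at the center of the dataset; the other three quantiles mark positions in its upper portion.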
Explanation: ***True positives / (True positives + False positives)***
- **Positive predictive value (PPV)** is the probability that a patient who tests positive actually has the disease.
- It is calculated by dividing the number of **true positives** (correctly identified positive cases) by the total number of positive test results (**true positives + false positives**).

*True positives / (True positives + False negatives)*
- This formula represents the **sensitivity** of a test: the proportion of actual positive cases that are correctly identified.
- Sensitivity measures the ability of a test to correctly identify individuals with the disease.

*False positives / (False positives + True negatives)*
- This formula represents **1 - specificity**, the **false positive rate**.
- **Specificity** is the proportion of actual negative cases that are correctly identified as negative.

*True negatives / (True negatives + False negatives)*
- This formula represents the **negative predictive value (NPV)**: the probability that a patient who tests negative truly does not have the disease.
- NPV is calculated by dividing the number of **true negatives** (correctly identified negative cases) by the total number of negative test results (**true negatives + false negatives**).
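A short Python sketch, using hypothetical 2×2 table counts (the numbers are illustrative, not from any real study), shows how the four measures are computed from the same table.

```python
# Hypothetical counts from a screening test 2x2 table (illustration only)
tp, fp, fn, tn = 80, 20, 10, 890

sensitivity = tp / (tp + fn)   # proportion of diseased people who test positive
specificity = tn / (tn + fp)   # proportion of non-diseased people who test negative
ppv = tp / (tp + fp)           # probability that a positive result means disease
npv = tn / (tn + fn)           # probability that a negative result means no disease

print(f"Sensitivity={sensitivity:.2f}, Specificity={specificity:.2f}, "
      f"PPV={ppv:.2f}, NPV={npv:.2f}")
```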
Explanation: ***Hardy Weinberg***
- The **Hardy-Weinberg principle** describes the conditions under which allele and genotype frequencies in a population remain constant from generation to generation.
- It established the baseline for recognizing when evolutionary forces such as **mutation**, **selection**, **gene flow**, and **genetic drift** are acting on a population.

*Sewall Wright*
- Sewall Wright is known for his work on **genetic drift**, particularly the concepts of **effective population size** and the **shifting balance theory** of evolution.
- Although fundamental to population genetics, his contributions did not establish the foundational equilibrium principle.

*J. B. S. Haldane*
- J. B. S. Haldane made significant contributions to the **mathematical theory of natural selection** and was a pioneer in developing population genetics as a field.
- He focused on the dynamics of evolution under selection rather than on the foundational equilibrium state.

*R. A. Fisher*
- R. A. Fisher was a key figure in modern statistics and population genetics, known for **Fisher's fundamental theorem of natural selection** and the theory of the **evolution of dominance**.
- His work built upon the Hardy-Weinberg equilibrium, explaining how selection drives evolutionary change.
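Under Hardy-Weinberg equilibrium, allele frequencies p and q (with p + q = 1) give expected genotype frequencies p², 2pq, and q². A minimal sketch with a hypothetical allele frequency (p = 0.7 is an assumed value for illustration):

```python
# Hypothetical allele frequencies: p = freq(A), q = freq(a)
p = 0.7
q = 1 - p

# Expected genotype frequencies under Hardy-Weinberg equilibrium: p^2 + 2pq + q^2 = 1
freq_AA = p ** 2
freq_Aa = 2 * p * q
freq_aa = q ** 2

print(freq_AA, freq_Aa, freq_aa)          # 0.49, 0.42, 0.09
print(freq_AA + freq_Aa + freq_aa)        # sums to 1
```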
Explanation: ***Guttman Scale***
- The **Guttman scale**, also known as the **cumulative scale**, is designed so that if an individual agrees with a more extreme statement, they will also agree with all less extreme statements.
- It measures a **single, unidimensional trait**, and responses are ordered cumulatively.
- This cumulative property is what gives it the name "Cumulative Scale."

*Visual Analog Scale*
- The **Visual Analog Scale (VAS)** is a psychometric response scale used to measure subjective characteristics or attitudes that cannot be measured directly.
- It typically presents a **continuous line** on which patients mark their current state; it is most commonly used for pain assessment.
- It is not a cumulative scale.

*Thurstone Scale*
- A **Thurstone scale** uses a panel of judges to assign numeric values to attitude statements based on their perceived intensity or favorability.
- It aims to create an **interval scale** in which the distance between categories is assumed to be equal.
- It does not have cumulative properties.

*Semantic Differential Scale*
- The **Semantic Differential Scale** measures the connotative meaning of concepts or objects.
- It asks respondents to rate a concept on a series of **bipolar adjective pairs** (e.g., good-bad, strong-weak).
- It is used to assess perceptions and attitudes rather than cumulative agreement.
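The cumulative property can be illustrated with a small sketch: for a hypothetical set of items ordered from least to most extreme, a 0/1 response pattern fits a Guttman (cumulative) scale only if agreement with any item implies agreement with every easier item. The helper function below is hypothetical, written for this example.

```python
def is_guttman_pattern(responses):
    """Return True if a 0/1 response vector (items ordered least -> most extreme)
    is cumulative, i.e. all agreements come before all disagreements."""
    seen_disagree = False
    for r in responses:
        if r == 0:
            seen_disagree = True
        elif seen_disagree:   # agreeing with a harder item after refusing an easier one
            return False
    return True

print(is_guttman_pattern([1, 1, 1, 0, 0]))  # True: cumulative pattern
print(is_guttman_pattern([1, 0, 1, 0, 0]))  # False: breaks the cumulative ordering
```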
Explanation: ***The probability of correctly rejecting a false null hypothesis.***
- **Statistical power** is the probability that a statistical test will **correctly detect an effect** when a true effect is present.
- It represents a study's ability to **avoid a Type II error (β)** (failing to reject a false null hypothesis) and is calculated as **1 - β**.
- Higher statistical power means a greater ability to detect a true effect when it exists.

*The probability of failing to reject a true null hypothesis.*
- This describes the **complement of the Type I error rate (1 - α)**: the probability of correctly retaining a true null hypothesis.
- It is a correct decision in hypothesis testing but is **not the definition of statistical power**.
- It relates to the specificity of the test when the null hypothesis is true.

*The probability of incorrectly rejecting a true null hypothesis.*
- This describes the **Type I error (α)**, also known as a **false positive**.
- It is the significance level of the test, typically set at 0.05 or 0.01.
- It is an error, not a measure of power: concluding there is an effect when none exists.

*The probability of incorrectly rejecting a false null hypothesis.*
- This statement is **logically contradictory** and conceptually impossible.
- If the null hypothesis is false, rejecting it is the **correct decision**, not an incorrect one.
- The probability of **failing to reject a false null hypothesis** is the **Type II error (β)**, and power = 1 - β.
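A minimal simulation sketch makes "power = 1 - β" tangible. It assumes a two-sample t-test with a hypothetical effect size, sample size, and α (all chosen for illustration), and estimates power as the proportion of simulated studies that correctly reject the false null hypothesis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scenario: true mean difference of 0.5 SD, n = 50 per group, alpha = 0.05
n, effect, alpha, n_sims = 50, 0.5, 0.05, 5000
rejections = 0
for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)        # control group
    b = rng.normal(effect, 1.0, n)     # treatment group (the null hypothesis is truly false)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:                      # correct rejection of the false null hypothesis
        rejections += 1

power = rejections / n_sims            # estimated power = 1 - beta
print(f"Estimated power ≈ {power:.2f}")
```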
Explanation: ***Increased significance threshold affects results***
- Increasing the **confidence level** (e.g., from 95% to 99%) means demanding greater certainty that a result is not due to random chance; this corresponds to a **lower alpha (significance level)**, from α = 0.05 to α = 0.01.
- A higher confidence level therefore imposes a **more stringent threshold** for rejecting the null hypothesis: the p-value must now be smaller than the reduced alpha to achieve statistical significance.
- This makes it **harder to reject the null hypothesis** and reduces the probability of a Type I error (false positive).

*Previously significant value remains significant*
- This statement is incorrect: a **p-value** that was barely significant at the lower confidence level (e.g., p = 0.04 at 95% confidence, α = 0.05) becomes **non-significant** at the higher confidence level (e.g., 99% confidence, α = 0.01).
- The threshold for **statistical significance** becomes stricter, so fewer results meet the criterion.

*Hypothesis testing outcome may change*
- Although technically true, this is less precise than the correct answer; the outcome may change specifically because previously significant results may become non-significant.
- This option describes a **consequence** rather than the direct effect of changing the confidence level.

*Previously insignificant value may become significant*
- This statement is incorrect. A result that was **non-significant** at the lower confidence level (e.g., p = 0.06 at 95% confidence, α = 0.05) will certainly remain non-significant at the higher confidence level (e.g., 99% confidence, α = 0.01).
- Increasing the confidence level makes it **harder, not easier**, to achieve statistical significance, since a smaller p-value is required to reject the null hypothesis.
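A tiny sketch with hypothetical p-values (the values are made up for illustration) shows how raising the confidence level from 95% to 99% tightens the significance threshold.

```python
# Hypothetical p-values from earlier analyses
p_values = [0.04, 0.008, 0.06]

for p in p_values:
    sig_95 = p < 0.05   # alpha = 0.05 at a 95% confidence level
    sig_99 = p < 0.01   # alpha = 0.01 at a 99% confidence level
    print(f"p = {p}: significant at 95%? {sig_95}; at 99%? {sig_99}")

# p = 0.04 is significant at 95% but not at 99%, while p = 0.06,
# already non-significant at 95%, stays non-significant at 99%.
```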
Explanation: ***Relationship between two given variables***
- A **scatter diagram**, also known as a scatter plot, is specifically designed to visualize the **relationship** or **correlation** between two quantitative variables.
- Each point on the plot represents a pair of values (x, y) for the two variables, allowing patterns, clusters, or trends to be observed.

*Frequency of occurrence of events in categorical data.*
- **Bar charts** or **pie charts** are typically used to illustrate the frequency of occurrence of events in categorical or qualitative data.
- Scatter diagrams are not suited for displaying **categorical data frequencies**.

*Mean and median values of the given data.*
- **Box plots** or **histograms** are better suited for illustrating the center and distribution of a single variable.
- A scatter diagram shows individual data points and their relationship, not summary statistics such as the **mean** or **median**.

*Trend of a variable over time in a time series analysis.*
- A **line graph** or **time series plot** is used to show the trend of a variable over time.
- Although a scatter plot can reveal **patterns**, it does not inherently represent the sequential nature of time series data unless time is one of the plotted variables.
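A minimal Matplotlib sketch, using hypothetical height and weight data generated here purely for illustration, plots one point per (x, y) pair so the relationship between the two quantitative variables can be inspected.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Hypothetical paired observations: height (cm) and weight (kg) for 50 subjects
height = rng.normal(170, 10, 50)
weight = 0.6 * height - 40 + rng.normal(0, 5, 50)  # roughly linear relationship plus noise

plt.scatter(height, weight)          # one point per (x, y) pair
plt.xlabel("Height (cm)")
plt.ylabel("Weight (kg)")
plt.title("Scatter diagram of weight against height")
plt.show()
```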
Explanation: ***1***
- A correlation coefficient of **1** signifies a **perfect positive linear relationship** between two variables: as one variable increases, the other increases proportionally.
- This is the strongest possible positive correlation.

*0*
- A correlation coefficient of **0** indicates **no linear relationship** between the two variables.
- Changes in one variable are not associated with predictable changes in the other.

*0.7 to 0.9*
- A correlation coefficient in this range indicates a **strong positive correlation**, but not the *strongest* possible.
- Although strong, it means the relationship is not perfectly linear.

*Greater than 1*
- A correlation coefficient **cannot be greater than 1** or less than -1.
- The Pearson correlation coefficient ranges from **-1 to +1**, inclusive.
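A short sketch with hypothetical data computes Pearson's r using NumPy; a perfectly proportional relationship gives exactly +1, while a strong but imperfect one falls just below it, and the coefficient can never leave [-1, +1].

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

perfect = 2 * x + 3                                     # exact linear function of x -> r = +1
noisy = 2 * x + np.array([0.5, -0.4, 0.3, -0.2, 0.1])   # strong but imperfect relationship

print(np.corrcoef(x, perfect)[0, 1])   # 1.0
print(np.corrcoef(x, noisy)[0, 1])     # close to, but less than, 1
```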
Explanation: ***Used for morbidity statistics***
- ICD-10 codes primarily serve to classify diseases and health problems for **mortality and morbidity statistics**.
- They provide a standardized system for tracking and reporting causes of illness and death, which is crucial for public health surveillance and research.

*Published by WHO*
- Although the **ICD-10 (International Classification of Diseases, 10th Revision)** is developed and published by the **World Health Organization (WHO)**, this describes its origin, not its primary purpose.
- Publication is a characteristic of the system, not the fundamental reason for its existence or use.

*Contains alphanumeric codes*
- ICD-10 codes are indeed **alphanumeric**, with a leading letter followed by numbers.
- This describes the **structure** of the codes, not their purpose in a healthcare or statistical context.

*Consists of 21 chapters*
- The **ICD-10 classification** is organized into **21 chapters**, each covering a specific category of diseases or health conditions.
- This detail describes the **organization** or **scope** of the classification system rather than its overarching purpose.
Explanation: ***Every person has an equal chance of selection***
- In **simple random sampling**, each member of the population has an **identical probability** of being chosen for the sample.
- This ensures **unbiased selection** from the population, since every element is given an equal opportunity.

*Fewer samples are collected*
- The number of samples collected is not inherently smaller; simple random sampling can involve any sample size, small or large.
- This statement does not describe a characteristic that is unique to, or consistently true of, simple random sampling.

*Also known as Systematic randomization*
- Simple random sampling is distinct from **systematic randomization**, which selects every nth element from a list after a random start.
- **Systematic randomization** follows a fixed interval, whereas **simple random sampling** involves independent random selections.

*Groups may not be equally represented in small samples*
- Although possible, this is a limitation of all small samples rather than a defining characteristic of simple random sampling.
- In small samples, **random chance** can lead to disproportionate representation of subgroups, but this is not a fundamental property of the method itself.
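A minimal sketch, using a hypothetical sampling frame of 1000 IDs, contrasts simple random sampling, where every member has an equal chance of selection, with systematic sampling, which takes every k-th member after a random start.

```python
import random

random.seed(0)
population = list(range(1, 1001))   # hypothetical sampling frame of 1000 IDs

# Simple random sampling: every member has an equal chance of selection
srs = random.sample(population, 50)

# Systematic sampling: every k-th member after a random start (k = N / n = 20)
k = len(population) // 50
start = random.randrange(k)
systematic = population[start::k]

print(len(srs), len(systematic))    # both draw 50 members, by different mechanisms
```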
Collection and Presentation of Data
Practice Questions
Measures of Central Tendency
Practice Questions
Measures of Dispersion
Practice Questions
Normal Distribution
Practice Questions
Sampling Methods
Practice Questions
Sample Size Calculation
Practice Questions
Hypothesis Testing
Practice Questions
Tests of Significance
Practice Questions
Correlation and Regression
Practice Questions
Survival Analysis
Practice Questions
Multivariate Analysis
Practice Questions
Statistical Software in Research
Practice Questions