Reg Id: 19PLR00751
Course Code: 8417
Programme: BBA
Semester: Autumn 2023
Submit To: Muhammad Mushtaq
Phone: 03335565865
Address: House No. 107-A, Street No. 13, Tahli Mohri, Rawalpindi Cantt
Question No. 1
Discuss the following:
1. Power of Test
a) Statistical power is the probability that a statistical test will correctly reject a false
null hypothesis (i.e., make a correct decision) when the alternative hypothesis is
true. In other words, it is the ability of a test to detect an effect, if the effect truly
exists. Formally, power = 1 − β, where β is the probability of a Type-II error.
b) A test with high power is more likely to correctly identify a real effect, while a test
with low power is more likely to fail to detect a real effect even if it exists. Power is
influenced by several factors, including sample size, effect size, the variability of the
data, and the significance level chosen for the test.
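The influence of sample size and effect size on power can be illustrated with a small Monte Carlo simulation. The sketch below assumes a one-sided, one-sample z-test with known standard deviation 1; the function name and the specific numbers are illustrative, not taken from the assignment.

```python
import random
import statistics
from math import sqrt

def estimate_power(effect_size, n, trials=2000, seed=42):
    """Monte Carlo estimate of power for a one-sided one-sample z-test.

    H0: mean = 0. Data are drawn from Normal(effect_size, 1), so H0 is
    false whenever effect_size > 0. Power is the fraction of simulated
    experiments in which the test (correctly) rejects H0.
    """
    rng = random.Random(seed)
    z_crit = 1.6449  # one-sided critical value for alpha = 0.05
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        z = statistics.fmean(sample) * sqrt(n)  # sigma = 1 assumed known
        if z > z_crit:
            rejections += 1
    return rejections / trials

# Larger samples (and larger effects) give higher power:
p_small = estimate_power(0.5, 10)
p_large = estimate_power(0.5, 50)
```

Running this shows `p_large` well above `p_small`, and with `effect_size = 0` the rejection rate falls back to roughly the 5% significance level, since only Type-I errors remain.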
2. Type-I Error
Type I error, also known as a false positive error, occurs in hypothesis testing when a
null hypothesis that is actually true is rejected. In other words, it's the error of
concluding that there is a significant effect or difference when there is none.
Here's a breakdown of the related concepts:
1. Null Hypothesis (H0): This is a statement that there is no effect or no
difference. It is a hypothesis set up for the purpose of testing. For example, in a
medical study, the null hypothesis might be that a new drug has no effect on a
certain condition.
2. Alternative Hypothesis (H1 or Ha): This is the opposite of the null
hypothesis. It suggests that there is an effect or a difference. Using the medical study
example, the alternative hypothesis might be that the new drug has a significant
effect on the condition.
3. Significance Level (α): This is the probability of rejecting the null hypothesis
when it is true. It is often set at 0.05, meaning there is a 5% chance of incorrectly
rejecting the null hypothesis.
If the p-value (a measure that helps determine the significance of the results in hypothesis
testing) is less than the significance level, the null hypothesis is rejected. A Type I error
occurs if the null hypothesis is rejected when it is actually true.
In practical terms, a Type I error might mean concluding that a new treatment is effective
when, in reality, it has no effect. It's a type of error that researchers and analysts try to
minimize, especially when dealing with important decisions or interventions. The balance
between Type I and Type II errors is often considered when designing experiments and
interpreting their results.
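The meaning of the significance level can also be checked by simulation: when the null hypothesis is true, a test run at α = 0.05 should commit a Type-I error in roughly 5% of repeated experiments. The sketch below assumes a two-sided one-sample z-test with known standard deviation 1; the function name and defaults are illustrative.

```python
import random
import statistics
from math import sqrt

def type_i_error_rate(n=30, trials=4000, seed=1):
    """Simulate experiments where H0 (mean = 0) is TRUE and count how
    often a two-sided z-test at alpha = 0.05 wrongly rejects it.
    The long-run false-positive rate should be close to alpha."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided critical value for alpha = 0.05
    false_positives = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]  # H0 holds
        z = statistics.fmean(sample) * sqrt(n)  # sigma = 1 assumed known
        if abs(z) > z_crit:
            false_positives += 1  # Type-I error: rejected a true H0
    return false_positives / trials
```

With enough trials the returned rate settles near 0.05, which is exactly what choosing α = 0.05 promises: a 5% chance of incorrectly rejecting a true null hypothesis.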
3. Type-II Error
Type-II error, also known as a false negative or beta (β) error, occurs in statistical
hypothesis testing when a null hypothesis that is actually false is not rejected. In other
words, it's the failure to reject a false null hypothesis. This error is associated with failing
to reject a null hypothesis that should have been rejected.
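The Type-II error rate β can be estimated the same way, by simulating experiments in which the null hypothesis is false and counting how often the test fails to reject it. The sketch below assumes a one-sided one-sample z-test with known standard deviation 1; the function name and the chosen true mean are illustrative.

```python
import random
import statistics
from math import sqrt

def type_ii_error_rate(true_mean=0.3, n=20, trials=3000, seed=7):
    """Simulate experiments where H0 (mean = 0) is FALSE (the true mean
    is true_mean > 0) and count how often a one-sided z-test at
    alpha = 0.05 fails to reject it. That fraction estimates beta,
    and power = 1 - beta."""
    rng = random.Random(seed)
    z_crit = 1.6449  # one-sided critical value for alpha = 0.05
    misses = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        z = statistics.fmean(sample) * sqrt(n)  # sigma = 1 assumed known
        if z <= z_crit:
            misses += 1  # Type-II error: failed to reject a false H0
    return misses / trials
```

For a small effect like 0.3 with only 20 observations, β comes out large (the test misses the effect more often than not); increasing the true effect size or the sample size drives β down, mirroring the trade-off discussed above.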
In hypothesis testing, there are two primary types of errors:
1. Type-I Error (False Positive): This occurs when a true null hypothesis is
incorrectly rejected. In other words, it's the error of concluding that there is an
effect or difference when, in reality, there isn't.