8.8 Testing of Hypothesis
For testing a hypothesis: Many a time we strongly believe a result to be true, but after taking a sample we notice that the sample data do not wholly support it. The difference may be due either to (i) the original belief being wrong, or (ii) the sample being slightly one-sided.
Tests are therefore needed to distinguish between these two possibilities. Such tests indicate the likely possibility and reveal whether or not the difference can be attributed to chance. If the difference is not due to chance, it is significant, and these tests are therefore called tests of significance. The whole procedure is known as testing of hypothesis.
A hypothesis is a statement supposed to be true until it is proved false. It may be based on previous experience or derived theoretically. First, the investigator forms a research hypothesis, an expectation that is to be tested. He then derives a statement that is the opposite of the research hypothesis. The approach is to set up the assumption that there is no contradiction between the believed result and the sample result, so that the difference can be ascribed solely to chance. Such a hypothesis is called a null hypothesis (denoted H0). It is the null hypothesis that is actually tested, not the research hypothesis. The object of the test is to see whether the null hypothesis should be rejected or accepted.
If the null hypothesis is rejected, that is taken
as evidence in favor of the research hypothesis, which is called
the alternative hypothesis (denoted Ha).
In usual practice we do not say that the research hypothesis has
been "proved", only that it has been supported.
For example, if the mean weight of the students of a college is believed to be 110 lb, then the null hypothesis is that the population mean is 110 lb, i.e. H0 : μ = 110 lb (null hypothesis). The alternative hypothesis may take one of the forms (i) Ha : μ ≠ 110 lb, (ii) Ha : μ > 110 lb, (iii) Ha : μ < 110 lb.
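The test of H0 : μ = 110 lb can be sketched numerically. Assuming a hypothetical sample of 36 students with a sample mean of 113.5 lb and a known population standard deviation of 9 lb (illustrative values, not given in the text), a z statistic and two-sided p-value could be computed as:

```python
from statistics import NormalDist

# Hypothetical sample summary (illustrative values, not from the text)
n = 36            # sample size
x_bar = 113.5     # sample mean weight (lb)
sigma = 9.0       # assumed known population standard deviation (lb)
mu_0 = 110.0      # hypothesized population mean under H0

# Test statistic: z = (x_bar - mu_0) / (sigma / sqrt(n))
z = (x_bar - mu_0) / (sigma / n ** 0.5)

# Two-sided p-value for Ha : mu != 110 lb
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

A small p-value means the observed difference from 110 lb is unlikely to be due to chance alone.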
Setting up the level of significance: Once
the null hypothesis is set up, the next job is to set the limits
within which we expect μ to lie if the null hypothesis is true. The idea behind
this is to ensure that the difference between the sample value and
the hypothesized value can arise from sampling fluctuations alone.
If the difference does not exceed these limits, the sample supports
the null hypothesis and the hypothesis is accepted. If it exceeds these
limits, the sample does not support the hypothesis and it is rejected.
Fixing the limits depends entirely upon the accuracy desired. Generally the limits are fixed so that the probability of the difference exceeding them is 0.05 or 0.01. These probabilities are known as 'levels of significance' and are expressed as the 5% or 1% level of significance. Rejection of the null hypothesis does not mean that the hypothesis is disproved;
it simply means that the sample values do not support it. Likewise, acceptance does not mean that the hypothesis is proved; it means simply that it is supported.
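The dependence of the decision on the chosen level can be sketched as follows, using a hypothetical two-sided test statistic of z = 2.33 (an illustrative value):

```python
from statistics import NormalDist

z = 2.33   # test statistic from a hypothetical two-sided test (illustrative)

decisions = {}
for alpha in (0.05, 0.01):
    # Critical value for a two-sided test: reject H0 when |z| exceeds it.
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    decisions[alpha] = "reject H0" if abs(z) > z_crit else "accept H0"
    print(f"level {alpha:.0%}: |z| = {abs(z):.2f}, "
          f"critical value = {z_crit:.3f} -> {decisions[alpha]}")
```

Here the same sample leads to rejection at the 5% level but acceptance at the 1% level, which is why the level of significance must be fixed before the test is carried out.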
Confidence limits: The limits (or range)
within which the hypothesis should lie with a specified probability
are called the confidence limits or fiducial limits. It is customary
to take these limits at the 5% or 1% level of significance. If the sample
value lies between the confidence limits, the hypothesis is accepted;
if it does not, the hypothesis is rejected at the specified level of significance.
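As a sketch under the same illustrative assumptions as before (a hypothetical sample of 36 students, sample mean 113.5 lb, known σ = 9 lb, none of which appear in the text), 95% confidence limits for the population mean could be computed as:

```python
from statistics import NormalDist

# Hypothetical sample summary (illustrative values, not from the text)
n, x_bar, sigma = 36, 113.5, 9.0
conf = 0.95                      # 95% confidence, i.e. 5% level of significance

z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # two-sided critical value
half_width = z * sigma / n ** 0.5
lower, upper = x_bar - half_width, x_bar + half_width
print(f"{conf:.0%} confidence limits: ({lower:.2f}, {upper:.2f}) lb")
```

Since the hypothesized mean of 110 lb falls outside these limits, the null hypothesis would be rejected at the 5% level of significance.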