F-test and variance analysis
The F-test is named in honor of the great statistician R.A. Fisher. The object of the F-test is to find out whether two independent estimates of population variance differ significantly, or whether the two samples may be regarded as drawn from normal populations having the same variance. To carry out the test of significance, we calculate the ratio F, defined as:

F = S1^2 / S2^2, where S1^2 is the larger of the two sample variance estimates,

with v1 = n1 - 1 and v2 = n2 - 1 degrees of freedom for the numerator and denominator respectively.
The calculated value of F is compared with the table value for v1 and v2 degrees of freedom at the 5% or 1% level of significance. If the calculated value of F is larger than the table value, the F ratio is considered significant and the null hypothesis is rejected. On the other hand, if the calculated value of F is less than the table value, the null hypothesis is accepted and it is inferred that both samples come from populations having the same variance.
As the F-test is based on the ratio of two variances, it is also termed the variance ratio test. This ratio follows a distribution known as the F distribution, likewise named after R.A. Fisher.
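The procedure above — divide the larger sample variance by the smaller, then compare the ratio with the tabulated F value — can be sketched in Python. This is a minimal illustration, not a library routine: the function name `f_test` is our own, and `scipy.stats.f.ppf` is used in place of a printed F table.

```python
import numpy as np
from scipy import stats

def f_test(sample1, sample2, alpha=0.05):
    """Two-sample F-test for equality of variances.

    Puts the larger sample variance in the numerator, so the test
    is against the upper-tail critical value of the F distribution.
    """
    s1 = np.var(sample1, ddof=1)  # unbiased sample variance estimates
    s2 = np.var(sample2, ddof=1)
    if s1 >= s2:
        f, df1, df2 = s1 / s2, len(sample1) - 1, len(sample2) - 1
    else:
        f, df1, df2 = s2 / s1, len(sample2) - 1, len(sample1) - 1
    # Upper-tail critical value plays the role of the "table value".
    critical = stats.f.ppf(1 - alpha, df1, df2)
    return f, critical, f > critical  # reject H0 when F exceeds it

# Illustrative samples drawn from populations with equal variance.
rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=25)
b = rng.normal(loc=0.0, scale=1.0, size=30)
f, crit, reject = f_test(a, b)
print(f"F = {f:.3f}, critical (5%) = {crit:.3f}, reject H0: {reject}")
```

Note that placing the larger variance on top makes the ratio at least 1, which is why only the upper tail of the F distribution is consulted.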
Assumptions of the F-test: The F-test rests on the following assumptions:
Normality: the values in each group are normally distributed.
Homogeneity: the variance within each group must be equal for all groups (σ1² = σ2² = ... = σc²). This assumption is required in order to combine, or pool, the variances within the groups into a single within-groups source of variation.
Independence of errors: the error (the variation of each value around its own group mean) must be independent for each value.
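The normality and homogeneity assumptions can be checked informally before applying the test. A minimal sketch using scipy's Shapiro-Wilk test (for normality within each group) and Levene's test (for equality of group variances); the simulated data here are purely illustrative:

```python
import numpy as np
from scipy import stats

# Three illustrative groups drawn from the same normal population.
rng = np.random.default_rng(42)
groups = [rng.normal(loc=5.0, scale=2.0, size=30) for _ in range(3)]

# Normality: Shapiro-Wilk on each group (H0: the group is normal).
for i, g in enumerate(groups, start=1):
    stat, p_norm = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p = {p_norm:.3f}")

# Homogeneity: Levene's test across groups (H0: all variances equal).
stat, p = stats.levene(*groups)
print(f"Levene's test p = {p:.3f}")
```

Large p-values here give no reason to doubt the assumptions; small p-values suggest the F-test's conclusions should be treated with caution.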
Related topics include:
- Variance analysis
- Variance analysis-techniques
- Irregular variations
- F-test applications
- Inverse interpolation
- Extrapolation
- Data coding
- Semi averages method
- Ratio-to-trend method
- Trend origin shifting
- Seasonal index-uses