Heteroscedasticity:
In the classical least squares approach that we discussed in Block 1, we assumed that the error terms are independently and identically distributed (IID) with mean 0 and variance σ². In particular, the variance was assumed to remain constant across all observations. This assumption of constant variance, however, does not always hold. It may happen that the errors are mutually uncorrelated (that is, there is no serial correlation) but have different variances. Thus, when the variance of the error term increases or decreases with the dependent variable, we have a case of heteroscedasticity. In other words, the problem of heteroscedasticity arises when the assumption of homoscedasticity - that the variances of the stochastic disturbance term are finite and constant over the sample - is not met.
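In symbols, writing u_i for the stochastic disturbance of the i-th observation (the notation is assumed here; the contrast itself is exactly the one just described):

Homoscedasticity: E(u_i) = 0 and E(u_i²) = σ² for all i
Heteroscedasticity: E(u_i) = 0 and E(u_i²) = σ_i², with σ_i² varying over i
No serial correlation (in either case): E(u_i u_j) = 0 for all i ≠ j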
This problem occurs frequently in cross-section data. When it arises, the least-squares estimators are no longer efficient - they no longer have minimum variance among the class of linear unbiased estimators - and the estimated variances of the coefficients are biased. In this case the usual tests of statistical significance are no longer valid.
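To see this concretely, the following minimal Python sketch simulates a simple regression whose error variance grows with the regressor; the data-generating process, coefficient values, and sample sizes are illustrative assumptions, not taken from the text. Over many replications, the conventional OLS standard error of the slope (which presumes a single constant error variance) differs systematically from the slope's true sampling variability.

```python
import numpy as np

# Monte Carlo illustration of heteroscedasticity. All numbers below
# (coefficients, variance pattern, sample sizes) are hypothetical.
rng = np.random.default_rng(0)
n, reps = 200, 5000
x = np.linspace(1.0, 10.0, n)
beta0, beta1 = 2.0, 0.5

slopes, reported_se = [], []
for _ in range(reps):
    # Heteroscedastic errors: Var(u_i) = (0.2 * x_i**2)**2, not constant
    u = rng.normal(0.0, 0.2 * x**2)
    y = beta0 + beta1 * x + u

    # OLS slope and its conventional standard error, whose formula
    # assumes one constant error variance (homoscedasticity)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar) ** 2)
    b1 = np.sum((x - xbar) * (y - ybar)) / sxx
    b0 = ybar - b1 * xbar
    resid = y - (b0 + b1 * x)
    s2 = np.sum(resid**2) / (n - 2)  # pooled residual variance estimate
    slopes.append(b1)
    reported_se.append(np.sqrt(s2 / sxx))

print("true sampling SD of the slope:", round(float(np.std(slopes)), 4))
print("average conventional OLS SE  :", round(float(np.mean(reported_se)), 4))
```

With a variance that rises with x, as here, the conventional formula typically understates the slope's true variability, so t-ratios look larger than they should; under other variance patterns it can overstate it. Either way, the reported standard errors, and the significance tests built on them, cannot be trusted.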