Detection of multicollinearity:
Note that multicollinearity is almost always present in applied work, so it is a matter of degree rather than of presence or absence. What we usually want to know in empirical applications is whether it is likely to create problems for hypothesis testing (through inflated estimates of the standard errors), since the objective of most empirical studies is to estimate and test the exact relationship between variables. There are no formal tests for multicollinearity; there are, however, some informal rules of thumb that most researchers use for this purpose. These are as follows:
High R2 but few significant t-statistics. This is the "classic" symptom of multicollinearity. If R2 is high, say in excess of 0.8, the F test will in most cases reject the hypothesis that the estimated coefficients are all simultaneously zero. The individual t tests, however, will show that none, or very few, of the estimated slope coefficients are statistically different from zero. This was illustrated in the example shown earlier.
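A minimal sketch of this symptom, using made-up numbers and a pure-Python OLS fit (all variable names and data below are illustrative, not taken from the example above): when the second regressor is a near-duplicate of the first, the regression fits the data very well as a whole, yet the individual slope standard errors are hugely inflated, so the individual t statistics tend to be insignificant.

```python
import math

# Illustrative synthetic data: x2 is almost an exact copy of x1,
# so the two regressors are nearly perfectly collinear.
n = 30
x1 = [float(i) for i in range(n)]
x2 = [i + 0.01 * math.sin(7 * i) for i in range(n)]
y = [1.0 + x1[i] + x2[i] + 2.0 * math.sin(3 * i) for i in range(n)]

# Design matrix with an intercept column.
X = [[1.0, x1[i], x2[i]] for i in range(n)]
k = 3

# Normal equations: (X'X) b = X'y.
XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
       for i in range(k)]
Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]

def solve(a, rhs):
    """Solve a small linear system by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    rhs = rhs[:]
    m = len(rhs)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            rhs[r] -= f * rhs[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        x[r] = (rhs[r] - sum(a[r][c] * x[c] for c in range(r + 1, m))) / a[r][r]
    return x

b = solve(XtX, Xty)

# R-squared and residual variance.
yhat = [sum(X[r][j] * b[j] for j in range(k)) for r in range(n)]
ybar = sum(y) / n
rss = sum((y[r] - yhat[r]) ** 2 for r in range(n))
tss = sum((v - ybar) ** 2 for v in y)
r2 = 1.0 - rss / tss
s2 = rss / (n - k)

# Var(b) = s^2 * diag of (X'X)^{-1}; recover each diagonal entry by
# solving (X'X) c = e_j, whose j-th component is the diagonal element.
se = []
for j in range(k):
    e = [1.0 if i == j else 0.0 for i in range(k)]
    col = solve(XtX, e)
    se.append(math.sqrt(s2 * col[j]))

t = [b[j] / se[j] for j in range(k)]
print(f"R^2 = {r2:.4f}")
print(f"slope std. errors: {se[1]:.2f}, {se[2]:.2f}")
print(f"slope t-stats:     {t[1]:.2f}, {t[2]:.2f}")
```

The overall fit (R2) is excellent because the two collinear regressors jointly track y, but the data cannot tell their separate contributions apart, which is exactly what the inflated standard errors reflect.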
High pair-wise correlations among regressors. Another suggested rule of thumb is that if the pair-wise correlation coefficient between two regressors is high, say in excess of 0.8, then multicollinearity is a serious problem. The drawback of this criterion is that a high pair-wise correlation is a sufficient condition for multicollinearity but not a necessary one. In other words, multicollinearity can be present even when this condition is not fulfilled, for example when one regressor is close to a linear combination of several others without being strongly correlated with any single one of them.
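A small sketch of why this criterion is sufficient but not necessary, again with made-up series (the variable names and data are illustrative): here x3 is an exact linear combination of x1 and x2, so the design suffers perfect multicollinearity by construction, yet no pair-wise correlation reaches the 0.8 threshold.

```python
import math

def pearson(a, b):
    """Sample Pearson correlation coefficient, computed from scratch."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

# Illustrative regressors: x1 and x2 are nearly uncorrelated series,
# and x3 = x1 + x2 exactly (perfect multicollinearity).
n = 40
x1 = [math.sin(i) for i in range(n)]
x2 = [math.sin(2 * i) for i in range(n)]
x3 = [a + b for a, b in zip(x1, x2)]

pairs = {"(x1,x2)": pearson(x1, x2),
         "(x1,x3)": pearson(x1, x3),
         "(x2,x3)": pearson(x2, x3)}
for name, r in pairs.items():
    print(f"corr{name} = {r:+.3f}")
```

Every pair-wise correlation stays well below 0.8, so the rule of thumb raises no alarm, even though a regression of y on x1, x2, and x3 together would be impossible to estimate: the pair-wise criterion can miss multicollinearity that involves more than two regressors at once.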