In Example 1 of Multiple Regression Analysis we used 3 independent variables: Infant Mortality, White and Crime, and found that the regression model was a significant fit for the data. We also commented that the White and Crime variables could be eliminated from the model without significantly impacting the accuracy of the model. The following property can be used to test whether all of these variables add significantly to the model.
Property 1:

F = [(SS′E − SSE) / m] / MSE

has distribution F(m, dfE), where m = number of independent variables being tested for elimination and SS′E is the value of SSE for the model without these variables (SSE, MSE and dfE refer to the complete model).
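This test is easy to apply once SSE is known for both models. The following is a minimal sketch in Python; the numeric values are illustrative, not taken from the referenced worksheets:

```python
def partial_f(sse_reduced, sse_full, m, df_e):
    """F statistic for testing whether the m eliminated variables
    add significantly to the regression model.

    sse_reduced : SS'E, the SSE of the model without the m variables
    sse_full    : SSE of the complete model
    m           : number of independent variables tested for elimination
    df_e        : error degrees of freedom of the complete model
    """
    mse = sse_full / df_e                      # MSE of the complete model
    return ((sse_reduced - sse_full) / m) / mse

# Illustrative values: SS'E = 120, SSE = 100, m = 2, dfE = 46
f_stat = partial_f(120.0, 100.0, 2, 46)        # compare with F(2, 46)
print(round(f_stat, 3))                        # → 4.6
```

The p-value is then obtained from the F(m, dfE) distribution (e.g. via Excel's F.DIST.RT or scipy.stats.f.sf).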
E.g. suppose we consider the multiple regression model

y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5

and want to determine whether b3, b4 and b5 add significant benefit to the model (i.e. whether the reduced model y = b0 + b1x1 + b2x2 is not significantly worse than the complete model). The null hypothesis H0: b3 = b4 = b5 = 0 is tested using the statistic F as described in Property 1 where m = 3 and SS′E refers to the reduced model, while SSE, MSE and dfE refer to the complete model.
Example 1: Determine whether the White and Crime variables can be eliminated from the regression model for Example 1 of Multiple Regression Analysis.
Figure 1 implements the test described in Property 1 (using the output in Figures 3 and 4 of Multiple Regression Analysis to determine the values of cells AD4, AD5, AD6, AE4 and AE5).
Since p-value = .536 > .05 = α, we cannot reject the null hypothesis, and so conclude that White and Crime do not add significantly to the model and so can be eliminated.
Observation: An alternative way of determining whether certain independent variables are making a significant contribution to the regression model is to use the following property.
Property 2:

F = [(R2 − R′2) / m] / [(1 − R2) / dfE]

has distribution F(m, dfE), where R2 and dfE are the values for the full model, m = number of independent variables being tested for elimination and R′2 is the value of R2 for the model without these variables (i.e. the reduced model).
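This version of the test needs only the two R2 values. A minimal sketch, with illustrative numbers assumed:

```python
def r2_partial_f(r2_full, r2_reduced, m, df_e):
    """F statistic testing the loss in R^2 when m variables are dropped,
    relative to the full model's unexplained variance per error df."""
    return ((r2_full - r2_reduced) / m) / ((1 - r2_full) / df_e)

# Illustrative values: R^2 = .75 (full), R'^2 = .70 (reduced), m = 2, dfE = 46
f_stat = r2_partial_f(0.75, 0.70, 2, 46)       # compare with F(2, 46)
print(round(f_stat, 3))                        # → 4.6
```

Note that this gives the same F statistic as the SSE-based form, since R2 = 1 − SSE/SST and SST is the same for both models.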
Observation: If we redo Example 1 using Property 2, once again we see that the White and Crime variables do not make a significant contribution (see Figure 2, which uses the output in Figures 3 and 4 of Multiple Regression Analysis to determine the values of cells AD14, AD15, AE14 and AE15).
Observation: When there are a large number of potential independent variables that can be used to model the dependent variable, the general approach is to use the fewest number of independent variables that accounts for a sufficiently large portion of the variance (as measured by R2). Of course you may prefer to include certain variables based on theoretical criteria rather than purely on statistical considerations.
If your only objective is to explain the greatest amount of variance with the fewest independent variables, generally the independent variable x with the largest correlation coefficient with the dependent variable y should be chosen first. Additional independent variables can then be added until the desired level of accuracy is achieved.
In particular, the stepwise estimation method is as follows:
- Select the independent variable x1 which most highly correlates with the dependent variable y. This provides the simple regression model y = b0 + b1 x1
- Examine the partial correlation coefficients to find the independent variable x2 that explains the largest significant portion of the unexplained (error) variance from among the remaining independent variables. This yields the regression equation y = b0 + b1 x1 + b2 x2.
- Examine the partial F value for x1 in the model to determine whether it still makes a significant contribution. If it does not then eliminate this variable.
- Continue the procedure by examining all independent variables not in the model to determine whether one would make a significant addition to the current equation. If so, select the one that makes the highest contribution, generate a new regression model and then examine all the other independent variables in the model to determine whether they should be kept.
- Stop the procedure when no additional independent variable makes a significant contribution to the predictive accuracy. This occurs when all the remaining partial regression coefficients are non-significant.
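The stepwise procedure above can be sketched in code. The following is a simplified illustration, not the Real Statistics implementation; the entry/removal threshold of 4.0 (roughly the α = .05 critical F value for moderate samples) and all function names are assumptions:

```python
import numpy as np

def sse(X, y):
    """SSE of an OLS fit of y on the columns of X (intercept added)."""
    A = np.column_stack([np.ones(len(y)), X])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(resid @ resid)

def stepwise(X, y, f_in=4.0, f_out=4.0):
    """Forward selection with backward elimination checks (a sketch).

    A variable enters when its partial F exceeds f_in; after each entry,
    any selected variable whose partial F has dropped below f_out is
    removed.  Returns the selected column indices in order of entry.
    """
    n, k = X.shape
    selected = []
    while True:
        changed = False
        # Add the remaining variable with the largest partial F, if any
        sse_cur = sse(X[:, selected], y)       # intercept-only if empty
        best_j, best_f = None, f_in
        for j in (j for j in range(k) if j not in selected):
            s = sse(X[:, selected + [j]], y)
            df_e = n - len(selected) - 2       # error df of enlarged model
            if s > 0:
                f = (sse_cur - s) / (s / df_e)
            else:                              # perfect fit edge case
                f = float("inf") if sse_cur > 0 else 0.0
            if f > best_f:
                best_j, best_f = j, f
        if best_j is not None:
            selected.append(best_j)
            changed = True
        # Drop any selected variable that no longer contributes
        for j in list(selected):
            rest = [c for c in selected if c != j]
            s_full = sse(X[:, selected], y)
            s_red = sse(X[:, rest], y)
            df_e = n - len(selected) - 1
            if s_full > 0:
                f = (s_red - s_full) / (s_full / df_e)
            else:
                f = float("inf") if s_red > 0 else 0.0
            if f < f_out:
                selected.remove(j)
                changed = True
        if not changed:
            return selected
```

A production implementation would convert the partial F values to p-values and deal explicitly with multicollinearity among the candidate variables.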
From Property 2 of Multiple Correlation, we know that

R2y,x1,…,xk = r2y,x1 + (1 − r2y,x1) r2y,x2·x1 + (1 − r2y,x1)(1 − r2y,x2·x1) r2y,x3·x1,x2 + ⋯

Thus we are seeking the order x1, x2, …, xk such that the leftmost terms on the right side of the equation above explain the most variance. In fact the goal is to choose an m < k such that the first m variables account for almost all of the explained variance, i.e. such that R2 for the model based on x1, …, xm is close to R2 for the model based on all k variables.
Observation: We can use the following alternatives to this approach:
- Start with all independent variables and remove variables one at a time until there is a significant loss in accuracy
- Look at all combinations of independent variables to see which ones generate the best model. For k independent variables there are 2^k such combinations.
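The second alternative can be sketched with a simple enumeration; each candidate subset would then be fit and the resulting models compared (e.g. by R2 or by AIC). The function name below is a placeholder:

```python
from itertools import combinations

def candidate_models(k):
    """All 2**k subsets of k candidate independent variables,
    including the intercept-only model with no variables."""
    return [subset for r in range(k + 1)
            for subset in combinations(range(k), r)]

models = candidate_models(3)
print(len(models))    # → 8, i.e. 2**3
```

Since the count doubles with each additional variable, exhaustive search quickly becomes impractical for large k, which is why stepwise methods are used instead.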
Since multiple significance tests are performed when using the stepwise procedure, it is better to have a larger sample and to employ more conservative thresholds when adding and deleting variables (e.g. α = .01). In fact, it is often better not to use a fully mechanized approach and instead to evaluate the significance of adding or deleting variables based on theoretical considerations as well.
Note that if two independent variables are highly correlated (multicollinearity) then if one of these is used in the model, it is highly unlikely that the other will enter the model. One should not conclude, however, that the second independent variable is inconsequential.
Observation: In the approaches considered thus far, we compare a complete model with a reduced model. We can also compare models using Akaike’s Information Criterion (AIC).
Definition 1: For regression models, Akaike’s Information Criterion (AIC) is defined by

AIC = n ln(SSE/n) + 2(k+2)

where n = the sample size and k = the number of independent variables. When n < 40(k+2) it is better to use the following modified version

AICc = AIC + 2(k+2)(k+3) / (n − k − 3)
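Both criteria are straightforward to compute from SSE. A minimal sketch, assuming the standard SSE-based form AIC = n ln(SSE/n) + 2(k+2), which counts k+2 parameters (the k coefficients, the intercept and the error variance) and is consistent with the n < 40(k+2) rule above; the function names are placeholders, not the Real Statistics functions:

```python
import math

def aic(sse, n, k):
    """AIC for a regression model with k independent variables fit to
    n observations (k + 2 parameters: coefficients, intercept, variance)."""
    return n * math.log(sse / n) + 2 * (k + 2)

def aicc(sse, n, k):
    """Small-sample corrected AICc, preferred when n < 40(k + 2)."""
    return aic(sse, n, k) + 2 * (k + 2) * (k + 3) / (n - k - 3)

# Illustrative values: n = 50 observations, k = 3 variables, SSE = 100
print(round(aic(100.0, 50, 3), 2))    # n*ln(2) + 10 ≈ 44.66
```

To compare a complete and a reduced model, compute the criterion for each and prefer the model with the lower value.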
Observation: All things being equal it is better to choose a model with lower AIC, although given two models with similar AICs there is no test to determine whether the difference in AIC values is significant.
Example 2: Determine whether the regression model for Example 1 with the White and Crime variables is better than the model without these variables.
Figure 3 – Comparing the two models using AIC
Since the AIC for the reduced model is lower than the AIC for the complete model, once again we see that the reduced model is a better choice.
Observation: AIC can be useful when deciding whether or not to use a transformation for one or more independent variables since we can’t use Property 1 or 2. AIC is calculated for each model, and all other things being equal the model with the lower AIC should be chosen.
Real Statistics Excel Functions: The Real Statistics Resource Pack contains the following two functions where R1 is an n × k array containing the X sample data and R2 is an n × 1 array containing the Y sample data.
RegAIC(R1, R2) = AIC for regression model for the data in R1 and R2
RegAICc(R1, R2) = AICc for regression model for the data in R1 and R2
We also have the following supplemental function where R1 is an n × k array containing the X sample data for the full model, R3 contains the X sample data for the reduced model and R2 is an n × 1 array containing the Y sample data.
RSquareTest(R1, R3, R2) = the p-value of the test defined by Property 2
Thus for the data in Example 1 (referring to Figure 2 of Multiple Regression Analysis), we have RegAIC(C4:E53,B4:B53) = 94.26, RegAICc(C4:E53,B4:B53) = 95.63 and RSquareTest(C4:E53,C4:C53,B4:B53) = .536.
Observation: This webpage focuses on whether some of the independent variables make a significant contribution to the accuracy of a regression model. The same approach can be used to determine whether interactions between variables, or the squares or higher powers of some variables, make a significant contribution.