Testing the significance of extra variables on the model

In Example 1 of Multiple Regression Analysis we used three independent variables, Infant Mortality, White and Crime, and found that the regression model was a significant fit for the data. We also commented that the White and Crime variables could be eliminated from the model without significantly impacting its accuracy. The following property can be used to test whether such extra variables add significantly to the model.

Property 1:
$$F = \frac{(SS'_E - SS_E)/m}{MS_E} \sim F(m,\, df_E)$$

where m = the number of independent variables being tested for elimination, SS'E is the value of SSE for the model without these variables (the reduced model), and SSE, MSE and dfE refer to the model that includes these variables (the complete model).

For example, suppose we consider the multiple regression model

$$y = b_0 + b_1 x_1 + b_2 x_2 + b_3 x_3 + b_4 x_4 + b_5 x_5$$

and want to determine whether the variables x3, x4 and x5 add significant benefit to the model (i.e. whether the reduced model y = b0 + b1x1 + b2x2 is not significantly worse than the complete model). The null hypothesis H0: b3 = b4 = b5 = 0 is tested using the F statistic described in Property 1 with m = 3, where SS'E refers to the reduced model, while SSE, MSE and dfE refer to the complete model.
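As a concrete illustration, the following is a minimal sketch of the Property 1 test in Python, using scipy's F distribution for the p-value; the numeric inputs at the bottom are hypothetical placeholders, not values from the example.

```python
from scipy import stats

def partial_f_test(sse_reduced, sse_full, df_e_full, m):
    """Partial F test of Property 1.

    sse_reduced -- SS'E, the SSE of the model without the m candidate variables
    sse_full    -- SSE of the complete model
    df_e_full   -- residual degrees of freedom (dfE) of the complete model
    m           -- number of variables being tested for elimination
    """
    mse_full = sse_full / df_e_full
    f = (sse_reduced - sse_full) / m / mse_full
    p = stats.f.sf(f, m, df_e_full)  # right-tail p-value
    return f, p

# Hypothetical values for illustration only
f, p = partial_f_test(sse_reduced=120.0, sse_full=115.0, df_e_full=46, m=2)
print(f"F = {f:.3f}, p-value = {p:.3f}")
```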

Example 1: Determine whether the White and Crime variables can be eliminated from the regression model for Example 1 of Multiple Regression Analysis.

Figure 1 implements the test described in Property 1 (using the output in Figures 3 and 4 of Multiple Regression Analysis to determine the values of cells AD4, AD5, AD6, AE4 and AE5).


Figure 1 – Determine if White and Crime can be eliminated

Since p-value = .536 > .05 = α, we cannot reject the null hypothesis, and so we conclude that White and Crime do not add significantly to the model and can be eliminated.

Observation: An alternative way of determining whether certain independent variables are making a significant contribution to the regression model is to use the following property.

Property 2:
$$F = \frac{(R^2 - R_r^2)/m}{(1 - R^2)/df_E} \sim F(m,\, df_E)$$

where R² and dfE are the values for the full model, m = the number of independent variables being tested for elimination, and R²_r is the value of R² for the model without these variables (i.e. the reduced model).
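Property 2 is algebraically equivalent to Property 1: since SSE = (1 − R²)·SST for the full model and SS'E = (1 − R²_r)·SST for the reduced model, dividing the numerator and denominator of Property 1's statistic by SST yields the statistic above. A minimal Python sketch, with illustrative parameter names:

```python
from scipy import stats

def r_square_test(r2_full, r2_reduced, df_e_full, m):
    """F test of Property 2; returns (F, right-tail p-value)."""
    f = (r2_full - r2_reduced) / m / ((1 - r2_full) / df_e_full)
    return f, stats.f.sf(f, m, df_e_full)
```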

Observation: If we redo Example 1 using Property 2, once again we see that the White and Crime variables do not make a significant contribution (see Figure 2, which uses the output in Figures 3 and 4 of Multiple Regression Analysis to determine the values of cells AD14, AD15, AE14 and AE15).


Figure 2 – Using R-square to decide whether to drop variables

Observation: When there are a large number of potential independent variables that can be used to model the dependent variable, the general approach is to use the smallest number of independent variables that accounts for a sufficiently large portion of the variance (as measured by R²). Of course, you may prefer to include certain variables based on theoretical criteria rather than on purely statistical considerations.

If your only objective is to explain the greatest amount of variance with the fewest independent variables, generally the independent variable x with the largest correlation coefficient with the dependent variable y should be chosen first. Additional independent variables can then be added until the desired level of accuracy is achieved.

In particular, the stepwise estimation method is as follows (a code sketch of the procedure appears after the list):

  1. Select the independent variable x1 that is most highly correlated with the dependent variable y. This provides the simple regression model y = b0 + b1x1.
  2. Examine the partial correlation coefficients to find the independent variable x2 that explains the largest significant portion of the unexplained (error) variance from among the remaining independent variables. This yields the regression equation y = b0 + b1x1 + b2x2.
  3. Examine the partial F value for x1 in this model to determine whether it still makes a significant contribution. If it does not, then eliminate this variable.
  4. Continue the procedure by examining all independent variables not in the model to determine whether one of them would make a significant addition to the current equation. If so, select the one that makes the largest contribution, generate the new regression model and then re-examine all the other independent variables in the model to determine whether they should be kept.
  5. Stop the procedure when no additional independent variable makes a significant contribution to the predictive accuracy. This occurs when all the remaining partial regression coefficients are non-significant.
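The following is a minimal sketch of this forward stepwise procedure in Python, assuming the candidate predictors are the columns of a NumPy array X and the response is y, and using statsmodels for the regressions; the alpha_in/alpha_out thresholds and the function name are illustrative, not part of the Real Statistics implementation. Note that the t test for a single newly added variable is equivalent to the Property 1 partial F test with m = 1.

```python
import statsmodels.api as sm

def forward_stepwise(X, y, alpha_in=0.05, alpha_out=0.05):
    """Forward stepwise selection with backward checks (steps 1-5 above).

    X -- (n, k) NumPy array of candidate predictors; y -- (n,) response.
    Returns the list of selected column indices, in order of entry.
    """
    selected = []                     # indices of variables in the model
    remaining = list(range(X.shape[1]))
    while remaining:
        # Find the candidate whose addition is most significant
        pvals = {}
        for j in remaining:
            fit = sm.OLS(y, sm.add_constant(X[:, selected + [j]])).fit()
            pvals[j] = fit.pvalues[-1]          # p-value of the new variable
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break                               # no significant addition: stop
        selected.append(best)
        remaining.remove(best)
        # Backward check: drop previously entered variables that are no
        # longer significant in the enlarged model
        # (a production version would also guard against cycling)
        fit = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
        for pos in range(len(selected) - 1, -1, -1):
            if fit.pvalues[pos + 1] >= alpha_out:    # +1 skips the intercept
                remaining.append(selected.pop(pos))
                fit = sm.OLS(y, sm.add_constant(X[:, selected])).fit()
    return selected
```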

From Property 2 of Multiple Correlation, we know that

$$R^2_{y \cdot x_1 \cdots x_k} = r^2_{y x_1} + r^2_{y(x_2 \cdot x_1)} + r^2_{y(x_3 \cdot x_1 x_2)} + \cdots + r^2_{y(x_k \cdot x_1 \cdots x_{k-1})}$$

Thus we are seeking the order x1, x2, …, xk such that the leftmost terms on the right side of the equation above explain the most variance. In fact, the goal is to choose m < k such that

$$r^2_{y x_1} + r^2_{y(x_2 \cdot x_1)} + \cdots + r^2_{y(x_m \cdot x_1 \cdots x_{m-1})}$$

explains most of

$$R^2_{y \cdot x_1 \cdots x_k}$$

Observation: We can use the following alternatives to this approach:

  • Start with all independent variables and remove variables one at a time until there is a significant loss in accuracy
  • Look at all combinations of independent variables to see which one generates the best model. For k independent variables there are 2^k such combinations (a sketch of this exhaustive approach follows the list).
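For the second alternative, here is a minimal Python sketch of the exhaustive search; ranking the non-empty subsets by adjusted R-square is one reasonable criterion, though the text above does not prescribe one.

```python
from itertools import combinations
import statsmodels.api as sm

def best_subsets(X, y):
    """Fit all 2^k - 1 non-empty subsets of the k predictors and
    return them sorted by adjusted R-square (best first)."""
    k = X.shape[1]
    results = []
    for size in range(1, k + 1):
        for subset in combinations(range(k), size):
            fit = sm.OLS(y, sm.add_constant(X[:, list(subset)])).fit()
            results.append((fit.rsquared_adj, subset))
    return sorted(results, reverse=True)
```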

Since multiple significance tests are performed when using the stepwise procedure, it is better to have a larger sample and to employ more conservative thresholds when adding and deleting variables (e.g. α = .01). In fact, it is better not to use a purely mechanized approach and instead to evaluate the significance of adding or deleting variables based on theoretical considerations as well.

Note that if two independent variables are highly correlated (multicollinearity), then if one of them is used in the model, it is highly unlikely that the other will enter the model. One should not conclude, however, that the second independent variable is inconsequential.

Observation: In Stepwise Regression, we describe another stepwise regression approach, which is also included in the Linear Regression data analysis tool.

Observation: In the approaches considered thus far, we compare a complete model with a reduced model. We can also compare models using Akaike’s Information Criterion (AIC).

Definition 1: For multiple linear regression models, Akaike’s Information Criterion (AIC) is defined by

$$AIC = n \ln(SS_E/n) + 2(k+2)$$

When n < 40(k+2), it is better to use the following modified version, AICc:

$$AIC_c = AIC + \frac{2(k+2)(k+3)}{n-k-3}$$

Another such measure is the Schwarz Bayesian Criterion (SBC), which puts more weight on the sample size:

$$SBC = n \ln(SS_E/n) + (k+2) \ln n$$

Observation: All things being equal, it is better to choose a model with a lower AIC, although given two models with similar AIC values, there is no test to determine whether the difference in AIC values is significant.
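For readers working outside Excel, here is a minimal Python sketch of the three measures, following the formulas above (which count k + 2 parameters: the k coefficients, the intercept and the error variance). The function names mirror the Real Statistics names, but the implementations are illustrative.

```python
import math

def reg_aic(sse, n, k):
    """AIC for a linear regression model with k independent variables."""
    return n * math.log(sse / n) + 2 * (k + 2)

def reg_aicc(sse, n, k):
    """Small-sample corrected AIC; preferred when n < 40(k + 2)."""
    return reg_aic(sse, n, k) + 2 * (k + 2) * (k + 3) / (n - k - 3)

def reg_sbc(sse, n, k):
    """Schwarz Bayesian Criterion; penalizes each parameter by ln(n)."""
    return n * math.log(sse / n) + (k + 2) * math.log(n)
```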

Example 2: Determine whether the regression model for Example 1 with the White and Crime variables is better than the model without these variables.


Figure 3 – Comparing the two models using AIC

Since the AIC and SBC for the reduced model are lower than the AIC and SBC for the complete model, once again we see that the reduced model is the better choice.

Observation: AIC (or SBC) can be useful when deciding whether or not to use a transformation of one or more independent variables, since in that case we can't use Property 1 or 2. AIC is calculated for each model and, all other things being equal, the model with the lower AIC (or SBC) should be chosen.

Observation: Augmented versions of AIC and SBC, used in some texts, are as follows:

$$AIC = n \ln(SS_E/n) + n \ln(2\pi) + n + 2(k+2)$$

$$SBC = n \ln(SS_E/n) + n \ln(2\pi) + n + (k+2) \ln n$$

Real Statistics Excel Functions: The Real Statistics Resource Pack contains the following functions, where R1 is an n × k array containing the X sample data and R2 is an n × 1 array containing the Y sample data.

RegAIC(R1, R2,, aug) = AIC for regression model for the data in R1 and R2

RegAICc(R1, R2,, aug) = AICc for regression model for the data in R1 and R2

RegSBC(R1, R2,, aug) = SBC for regression model for the data in R1 and R2

If aug = FALSE (default), then the first versions of AIC, AICc and SBC are returned, while if aug = TRUE, the augmented versions are returned.

We also have the following Real Statistics function where R1 is an n × k array containing the X sample data for the full model, R3 contains the X sample data for the reduced model and R2 is an n × 1 array containing the Y sample data.

RSquareTest(R1, R3, R2) = the p-value of the test defined by Property 2

Thus for the data in Example 1 (referring to Figure 2 of Multiple Regression Analysis), we have RegAIC(C4:E53,B4:B53) = 94.26, RegAICc(C4:E53,B4:B53) = 95.63 and RSquareTest(C4:E53,C4:C53,B4:B53) = .536.

Observation: This webpage focuses on whether some of the independent variables make a significant contribution to the accuracy of a regression model. The same approach can be used to determine whether interactions between variables, or the squares or higher powers of some variables, make a significant contribution.

Observation: You can also ask which of the independent variables has the largest effect. There are two ways of addressing this issue (a code sketch follows the list):

  1. Standardize each of the independent variables (e.g. by using the STANDARDIZE function) before conducting the regression. In this case, the variable whose regression coefficient is largest in absolute value has the largest effect. If you don't standardize the variables first, then the variable with the largest regression coefficient is not necessarily the one with the largest effect (since the units are different).
  2. Rerun the regression, removing one independent variable at a time from the model, and record the value of R-square. If you have k independent variables, you will run k reduced regression models. The model with the smallest value of R-square corresponds to the variable with the largest effect, since removing that variable reduces the fit of the model the most.
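Here is a minimal sketch of both approaches in Python using statsmodels; the function name and return convention are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

def effect_ranking(X, y):
    """Rank the k predictors two ways: (1) absolute standardized
    coefficients and (2) drop in R-square when each variable is removed."""
    k = X.shape[1]
    # Approach 1: regression on standardized variables
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta = sm.OLS(yz, sm.add_constant(Xz)).fit().params[1:]   # skip intercept
    # Approach 2: R-square lost by removing each variable in turn
    full_r2 = sm.OLS(y, sm.add_constant(X)).fit().rsquared
    r2_drop = []
    for j in range(k):
        reduced = np.delete(X, j, axis=1)
        r2 = sm.OLS(y, sm.add_constant(reduced)).fit().rsquared
        r2_drop.append(full_r2 - r2)        # larger drop = larger effect
    return np.abs(beta), np.array(r2_drop)
```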

36 Responses to Testing the significance of extra variables on the model

  1. rushi says:

    I need to predict sales on the basis of promotion. I have the past 3 months' sales data and the promotion cycle of each month. I also have next month's promotion file, for which I need to predict sales. How shall I start, and how will an interaction variable work here? Please help!

    • Charles says:

      Rushi,
      You have a number of choices here. You could use regression (linear, polynomial, exponential, etc.) or some form of time series analysis (Holt-Winter, ARIMA, etc.). These approaches are described in various parts of the website.
      Before you start, I suggest that you create a scatter chart (as described on the website) to see whether your data has a linear or some other pattern. You can also see whether there is any seasonality element to the data. This will help you decide which technique to use.
      Charles

  2. Arshad K Butt says:

    Hi
    I have the following problem
    1. My class has 30 students
    2. Outcome variable is grade on test of French language coded as pass or fail (binary outcome)
    3. There are 3 different models that can predict success or failure of each student
    4. Which test can be employed to determine which is the best model for prediction?

    • Charles says:

      Arshad,
      It depends on how you define “best”. If you mean the least squared error, then for each model you calculate, for each of the 30 students, the square of y-observed minus y-predicted by the model, and then sum these values. The model that has the smallest value for this sum of squares can be considered the best. There are other criteria that can be used, though.
      Charles

  3. Claire says:

    I performed a stepwise logistic regression analysis with ~100 total data points and ~30 outcomes of interest. With respect to the “one in ten” rule of thumb, is there a maximum number of independent variables I can test in a stepwise fashion?

    There are at most three variables in my final model. Did I err if, say, I tested a dozen variables to choose those 3 in the final model? I never really thought about overfitting in this scenario, to be honest.

    I can tell you more specifics of the data if necessary.

    • Charles says:

      Claire,
      I don’t recall describing a “one in ten rule of thumb”. What exactly is this rule? I don’t know of any maximum number of independent variables that can be tested in a stepwise fashion.
      Have you looked at the following webpage?
      Stepwise Regression
      Charles

  4. nhell james says:

    these helped me a lot in my assignment. thanks!

  5. Peter says:

    Hi Charles
    Kindly advise me on the following:
    1. I have three independent variables (AT, SY & PC) which are all significantly related to the dependent variable C. The only distinction is that the variable AT’s coefficient is negative whereas the others are positive. How should I explain this?
    2. The KMO for the factorization is above 50%, i.e. .593, which also means that it is acceptable. How should I explain this in regard to the study?

    I am not a statistics person so I am very ‘hot’.
    Thank you for your consideration

  6. Liz K says:

    Charles, your website is fantastic, I appreciate this resource being freely available online in such a clear and coherent format.

  7. Albert Chituka says:

    Hi Charles,

    Would you please help with the interpretation of this multiple regression output. I was appointed Headteacher of Chipembi Girls’ Secondary School, in 2010, at the time when the overall school performance and that of Maths, Science and Biology were plummeting.

    I invested hugely to improve the quality of teaching and learning in the aforementioned subjects and the overall school performance dramatically improved (as shown below). I wanted to determine the impact of the three subjects (Maths, Science and Biology) in improving the school’s overall Grade 12 pass rate.
    YEAR   SCH % PASS   % FAIL MATHS   % FAIL BIOLOGY   % FAIL SCIENCE
    2009   91.8         29             14               17
    2010   98.44        23             0.78             7
    2011   99.22        10.9           4.7              1.6
    2012   99.3         8.1            2.9              7.4
    2013   100          9.4            0                4
    2014   100          8.4            0                2.52
    2015   100          4.7            0                0.94

    SUMMARY OUTPUT

    Regression Statistics
    Multiple R 0.985741155
    R Square 0.971685624
    Adjusted R Square 0.943371247
    Standard Error 0.705417301
    Observations 7

    ANOVA
                      df   SS            MS            F             Significance F
    Regression        3    51.23093072   17.07697691   34.31774775   0.008019283
    Residual          3    1.49284070    0.497613568
    Total             6    52.72377143

                      Coefficients   Standard Error   t Stat   P-value
    Intercept         101.2674647    0.537506587      188.4    3.29737E-07
    % FAIL MATHS      -0.076790072   0.060340374      -1.272   0.292830347
    % FAIL BIOLOGY    -0.323322932   0.105568695      -3.062   0.054877288
    % FAIL SCIENCE    -0.140791256   0.125626235      -1.121   0.344033995

    Questions:
    1. What is the meaning of R squared in this context, i.e. as regards the effect size of Maths, Science and Biology?
    2. Since both the Maths & Science P-values were insignificant, would it be right to say that both subjects did not contribute to the general improvement in the overall school performance?
    3. Would I be right to attribute the 97% variation in the dependent variable as being solely influenced by Biology?

    Thanking you in advance for assistance.

    Regards,

    Albert

    • Charles says:

      Albert,
      1. I don’t know how to relate R squared to the effect size of Math, Science and Biology.
      2. Yes, Math and Science did not make a significant improvement based on this model. However, I would next test the models (1) Math and Biology and (2) Science and Biology to see whether the p-value is still insignificant.
      3. No, if you remove Math and Science the R-square value will go down. These variables make a difference, but the difference is not significant.
      Charles

  8. Julio Morales says:

    Charles:

    I am doing a 90-day electricity consumption study on two buildings. I took daily meter readings (morning and night). I have run simple linear regressions in Excel to determine whether the low, high or average temperature has the better predictive value. In this simple two-variable regression, the average temp had the highest r2 and therefore is the better predictive independent variable, correct?

    Now the important question: I have found that consumption on Friday drops consistently from previous days (people leave early and actually turn off air, lights, and computers). How do I set up a regression analysis taking into account the temperature and the day of the week?

    • Charles says:

      Julio,

      I am not sure what you mean by saying that average temperature is the better predictive variable. Temperature is the variable.

      What you are describing in paragraph two is a version of seasonal regression. Please see the following webpage for how you add day into the regression model: Seasonal Regression

      Charles

  9. Atif Ismail says:

    Hi Sir,
    I am working on estimation of permeability using different well responses using multivariate regression analysis.
    What is the relation of the null hypothesis with the p-value and alpha?
    Is it possible that the variable having the highest regression coefficient may have the lowest significance in multivariate regression analysis?
    Thanks a lot for your time.

  10. Jonathan Bechtel says:

    Hi Charles,

    Great information as always.

    Quick question: what are the benefits of using this method to determine the significance of a variable vs. using the t-stat for the slope of the line?

    The latter seems more straightforward and easier to implement so it’d be useful to know when I ought to switch to this instead.

    Thanks,

    Jonathan

    • Charles says:

      Jonathan,
      The t stat is sufficient if you want to determine whether one variable is making a significant contribution to the model. If you want to see whether two or more variables together are making a significant contribution, then you should use the other approach. Also the other approach can be used to determine relative contributions of each variable.
      Charles

  11. Hadjimoosa says:

    Hi,
    I am stuck with the following:
    4 independent variables,
    1 dependent variable (i.e. Y = a + x1 + x2 + x3 + x4 + E)
    After running the regression using SPSS, the following were the results:
    R² = 0.899
    F change = 55.448
    Sig. (significant) = 0.000
    But, the Beta coefficients are:
    a= 1606.175
    X1 = 2.477 sig. @ 0.001
    X2 = 0.085 sig. @0.001
    X3 = 0.00001664 sig. @0.079
    X4 = – 0.023 sig. @ 0.000.
    Considering the beta coefficients, can I say the independent variables have contributed significantly to the dependent variable at the 5% significance level?
    Sorry for the error in the earlier message.
    Looking forward to a prompt response. Good job out here.

  12. Ella says:

    Hi there!

    I am stuck on a regression where both my independent variables are significant. What does this mean now?

    • Charles says:

      Ella,
      It means that both independent variables are contributing to the linear regression (i.e. the corresponding coefficients are significantly different from zero).
      Charles

  13. Alberto says:

    Hi, if I were to use a regression (OLS for the univariate and multivariate cases) to determine the “weight” or the significant variables affecting this, what statistics should be examined?
    Thanks
    Alberto

  14. martha kumvenji says:

    I have three independent variables, x1, x2, x5. I want the formula for R. Please help!

  15. ASMARA ASURA says:

    Nice course, I like it. I have one research project whose topic is the determinants of tax revenue performance. Which model is preferable for this study?

  16. Tim Henkel says:

    Can you comment on the AIC correction that you use. In reviewing Burnham and Anderson 2002, they provide the correction as AICc= AIC + [(2K(K+1))/(n-K-1)]. I was looking for a reference to the correction you use here. Thanks.

  17. zakiya says:

    I need to solve this question, please help me.
    y - production of beef in kg/ha
    x1 - average quantity of potatoes (kg) intended for animal feed within 1 day
    x2 - the farm size in hectares
    x3 - the average purchase price of beef in a given region (in zł/kg)
    x4 - number of employed persons on the farm

    y      x1   x2   x3    x4
    1950   20   10   5.0   4
    2200   24   13   5.4   4
    2600   25   15   5.6   5
    2900   33   20   5.2   6
    3000   32   20   5.3   7
    3750   38   25   5.8   7
    4900   49   30   6.0   9
    5100   50   35   5.2   9
    5800   60   37   5.9   10

    Based on data from nine farms, build a linear econometric model. Perform the appropriate statistical tests (F test and Student's t tests), then remove an independent variable and recalculate the model if necessary.

    • Charles says:

      To build the multiple linear regression model see the Multiple Regression webpage. You can perform this analysis manually, using Excel or using the Real Statistics tools. To determine what happens when you remove one independent variable, see the referenced webpage.
      Charles

  18. alberto rivas says:

    excellent course. thanks
