Fit of the Regression Line

In this section we test the following null hypothesis:

H0: the regression line doesn’t capture the relationship between the variables

If we reject the null hypothesis, it means that the line is a good fit for the data. We now express the null hypothesis in a way that is more easily testable:

H0: σ²Reg ≤ σ²Res

As described in Two Sample Hypothesis Testing to Compare Variances, we can use the F test to compare the variances in two samples. To test the above null hypothesis we set F = MSReg/MSRes and use dfReg and dfRes degrees of freedom (for simple linear regression, dfReg = 1 and dfRes = n – 2).
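The same calculation can be carried out outside of Excel. Below is a minimal sketch in Python (using numpy and scipy; the function and variable names are my own, not from the Real Statistics worksheets) of the F test just described: it computes SSReg via r², forms the two mean squares, and returns the F statistic with its right-tail p-value.

    import numpy as np
    from scipy import stats

    def regression_f_test(x, y):
        """F test of H0: sigma^2_Reg <= sigma^2_Res for simple linear regression."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        n = len(y)
        r = np.corrcoef(x, y)[0, 1]              # sample correlation (Excel CORREL)
        sst = np.sum((y - y.mean()) ** 2)        # total sum of squares (Excel DEVSQ)
        ss_reg = r ** 2 * sst                    # regression (explained) sum of squares
        ss_res = sst - ss_reg                    # residual sum of squares
        df_reg, df_res = 1, n - 2                # degrees of freedom
        ms_reg, ms_res = ss_reg / df_reg, ss_res / df_res
        f_stat = ms_reg / ms_res
        p_value = stats.f.sf(f_stat, df_reg, df_res)   # right-tail probability
        return f_stat, p_value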

Observation: The use of the linear regression model is based on the following assumptions:

  • Linearity of the phenomenon measured
  • Constant variance of the error term
  • Independence of the error terms
  • Normality of the error term distribution

In fact the normality assumption is equivalent to the condition that the sample comes from a population with a bivariate normal distribution. See Multivariate Normal Distribution for more information about this distribution. The homogeneity of variance assumption is equivalent to the condition that for any two values x1 and x2 of x, the variances of y at those values of x are equal, i.e.

var(y | x = x1) = var(y | x = x2)

Observation: Linear regression can be effective with a sample size as small as 20.

Example 1: Test whether the regression line in Example 1 of Method of Least Squares is a good fit for the data.

Figure 1 – Goodness of fit of regression line for data in Example 1

We note that SST = DEVSQ(B4:B18) = 1683.7 and r = CORREL(A4:A18, B4:B18) = -0.713, and so by Property 3 of Regression Analysis, SSReg = r²·SST = (1683.7)(0.713)² = 857.0. By Property 1 of Regression Analysis, SSRes = SST – SSReg = 1683.7 – 857.0 = 826.7. From these values, it is easy to calculate MSReg and MSRes.

We now calculate the test statistic F = MSReg/MSRes = 857.0/63.6 = 13.5. Since Fcrit = FINV(α, dfReg, dfRes) = FINV(.05, 1, 13) = 4.7 < 13.5 = F, we reject the null hypothesis, and so accept that the regression line is a good fit for the data (with 95% confidence). Alternatively, we note that p-value = FDIST(F, dfReg, dfRes) = FDIST(13.5, 1, 13) = 0.0028 < .05 = α, and so once again we reject the null hypothesis.
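As a cross-check outside of Excel, the following short Python sketch (my own, using scipy's F distribution) reproduces these figures from the sums of squares computed above; stats.f.ppf and stats.f.sf play the roles of Excel's FINV and FDIST respectively.

    from scipy import stats

    ss_reg, ss_res = 857.0, 826.7        # sums of squares from the text above
    df_reg, df_res = 1, 13
    ms_reg = ss_reg / df_reg             # 857.0
    ms_res = ss_res / df_res             # ≈ 63.6
    f_stat = ms_reg / ms_res             # ≈ 13.5
    f_crit = stats.f.ppf(1 - 0.05, df_reg, df_res)   # ≈ 4.67, i.e. FINV(.05, 1, 13)
    p_value = stats.f.sf(f_stat, df_reg, df_res)     # ≈ 0.0028, i.e. FDIST(F, 1, 13)
    print(f_stat, f_crit, p_value)       # reject H0 since f_stat > f_crit (p < .05)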

Observation: There are many ways of calculating SSReg, SSRes and SST. E.g., using the worksheet in Figure 1 of Regression Analysis, we note that SSReg = DEVSQ(K5:K19) and SSRes = DEVSQ(L5:L19). These formulas are valid since the means of the y values and the ŷ values are equal by Property 5(b) of Regression Analysis.

Also by Definition 2 of Regression Analysis, SSRes = Σ(yᵢ – ŷᵢ)² = SUMXMY2(J5:J19, K5:K19). Finally, SST = DEVSQ(J5:J19), but alternatively SST = var(y) ∙ dfT = VAR(J5:J19) * (COUNT(J5:J19)-1).
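For readers working outside of Excel, here is a small Python sketch (with made-up x and y values, not the Example 1 data) that verifies the same identities: the fitted values play the role of column K, and the asserts check that SST = SSReg + SSRes and that SST equals the sample variance of y times dfT.

    import numpy as np

    # Hypothetical data, for illustration only
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9])

    slope, intercept = np.polyfit(x, y, 1)         # least-squares coefficients
    y_hat = intercept + slope * x                  # fitted values (column K in Figure 1)

    sst    = np.sum((y - y.mean()) ** 2)           # DEVSQ of the y values
    ss_reg = np.sum((y_hat - y_hat.mean()) ** 2)   # DEVSQ of the fitted values
    ss_res = np.sum((y - y_hat) ** 2)              # SUMXMY2 of y and y_hat

    assert np.isclose(sst, ss_reg + ss_res)                      # Property 1
    assert np.isclose(sst, np.var(y, ddof=1) * (len(y) - 1))     # VAR(y) * dfT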

14 Responses to Fit of the Regression Line

  1. Pedro says:

    Dear Charles,

    Is this equivalent to asking Excel to run a regression analysis using the “Data analysis” package?

    Is there any statistical test we should do to test linearity?

    • Charles says:

      Pedro,
      1. This data is displayed when you run Excel’s Regression data analysis tool. This webpage describes how to interpret this part of the results.
      2. Create a scatter plot and see whether the data roughly aligns with a straight line.
      Charles

  2. Hi Charles, are these methods valid for multiple regression as well?

  3. Syifa says:

    Hi Charles,
    How does one test the assumption of linearity of the phenomenon measured? Sorry for the basic question.

    • Charles says:

      Syifa,
      The easiest way to determine whether there is a linear relationship between y and x is to create a scatter plot and see whether the points are reasonably close to a straight line.
      Charles

      • Syifa says:

        No need to calculate anything? I was told that one can correlate the data with the straight line and produce a number. Forgot the source 🙁
        I was wondering if the software can be used for things like this. I can see a straight line in the scatter plot, but my teacher doesn’t agree that the points are close to it…

        • Charles says:

          Syifa,
          You can correlate between the data and the corresponding points on the straight line. This is the square root of the R-square statistic from the regression analysis.
          Charles

  4. bhuvan says:

    How do I calculate the degrees of freedom for SSReg?

  5. John says:

    The cells that you are using for calculations in the final Observation section must have been changed. There are no columns J, K, or L in Figure 1.

  6. Colin says:

    Sir
    In Example 1 you wrote: “We note that SST = DEVSQ(A4:A18) = 1683.7”. It seems to be a typo. It should be DEVSQ(B4:B18).
    In the second last paragraph you said “SSRes = Σ(yᵢ² – ŷᵢ²) = SUMX2MY2(J5:J19, K5:K19).” It looks like a mistake. I’ve checked it in the spreadsheet.

    • Charles says:

      Hi Colin,
      You are correct on all counts. I have made the changes that you have identified on the webpage. Thanks again for your diligence. You have certainly helped make things clearer for everyone who looks at the site.
      Charles
