Testing the significance of the slope of the regression line

In this section we test the value of the slope of the regression line.

Observation: By Theorem 1 of One Sample Hypothesis Testing for Correlation, under certain conditions, the test statistic t has the property

$$t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}} \sim T(n-2)$$
But by Property 1 of Method of Least Squares

$$b = r\,\frac{s_y}{s_x} \quad \text{and so} \quad r = b\,\frac{s_x}{s_y}$$
and by Definition 3 of Regression Analysis and Property 4 of Regression Analysis

$$s_{y \cdot x}^2 = \frac{n-1}{n-2}\left(s_y^2 - b^2 s_x^2\right)$$
Putting these elements together we get that

$$t = \frac{b}{s_b} \sim T(n-2) \quad \text{where} \quad s_b = \frac{s_{y \cdot x}}{s_x \sqrt{n-1}}$$
Since by the population version of Property 4 of Regression Analysis

$$\beta = \rho\,\frac{\sigma_y}{\sigma_x}$$

it follows that ρ = 0 if and only if β = 0. Thus Theorem 1 of One Sample Hypothesis Testing for Correlation can be transformed into the following test of the hypothesis H0: β = 0 (i.e. the slope of the population regression line is zero):

$$t = \frac{b}{s_b} \sim T(n-2) \quad \text{where} \quad s_b = \frac{s_{y \cdot x}}{s_x \sqrt{n-1}}$$
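To make the test concrete, here is a minimal Python sketch of the computation (my own illustration, not part of the Excel workflow described below; the function name slope_t_test and the use of numpy/scipy are my choices):

```python
import numpy as np
from scipy import stats

def slope_t_test(x, y):
    """Test H0: beta = 0 using t = b/s_b, where s_b = s_yx / (s_x * sqrt(n-1))."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # slope of the regression line
    a = y.mean() - b * x.mean()                         # y-intercept
    resid = y - (a + b * x)
    s_yx = np.sqrt(np.sum(resid**2) / (n - 2))          # standard error of the estimate
    s_b = s_yx / (np.std(x, ddof=1) * np.sqrt(n - 1))   # standard error of the slope
    t = b / s_b
    p = 2 * stats.t.sf(abs(t), n - 2)                   # two-tailed p-value, T(n-2)
    return b, s_b, t, p
```

Run on the data of Example 1 of Method of Least Squares, this should reproduce the t statistic and p-value shown in Figure 1 below.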
Example 1: Test whether the slope of the regression line in Example 1 of Method of Least Squares is zero.

Figure 1 shows the worksheet for testing the null hypothesis that the slope of the regression line is 0.

Slope regression t test

Figure 1 – t-test of the slope of the regression line for data in Example 1

Since p-value = .0028 < .05 = α (or |t| = 3.67 > 2.16 = tcrit) we reject the null hypothesis, and so we conclude that the slope of the population regression line is not zero.

Note that the 95% confidence interval for the population slope is

b ± tcrit · sb = -.628 ± 2.16(.171) = (-.998, -.259)
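As a quick cross-check of this arithmetic, the same numbers can be reproduced in a few lines of Python (using the rounded values reported in Figure 1, so the results agree to about three decimal places):

```python
from scipy import stats

b, s_b, df, alpha = -0.628, 0.171, 13, 0.05  # values reported in Figure 1
t = b / s_b                                  # ≈ -3.67
p = 2 * stats.t.sf(abs(t), df)               # ≈ .0028
t_crit = stats.t.ppf(1 - alpha / 2, df)      # ≈ 2.16
ci = (b - t_crit * s_b, b + t_crit * s_b)    # ≈ (-.998, -.259)
```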

Observation: We can also test whether the slopes of the regression lines arising from two independent populations are significantly different. This would be useful for example when testing whether the slope of the regression line for the population of men in Example 1 is significantly different from that of women.

Click here for additional information and an example about Hypothesis Testing for Comparing the Slopes of Two Independent Samples.
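For orientation only, here is a rough sketch of how such a comparison can be carried out: fit each sample separately and compare the slopes via t = (b1 − b2)/√(sb1² + sb2²) with df = n1 + n2 − 4. This is one common approach, consistent with the standard errors described on this page; see the linked page for the full method and its assumptions.

```python
import numpy as np
from scipy import stats

def compare_slopes(x1, y1, x2, y2):
    """Rough test of H0: beta_1 = beta_2 for two independent samples."""
    def fit(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        n = len(x)
        b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # slope
        resid = y - (y.mean() + b * (x - x.mean()))          # y - y-hat
        s_yx = np.sqrt(np.sum(resid**2) / (n - 2))           # std error of estimate
        return b, s_yx / (np.std(x, ddof=1) * np.sqrt(n - 1)), n
    b1, s_b1, n1 = fit(x1, y1)
    b2, s_b2, n2 = fit(x2, y2)
    t = (b1 - b2) / np.sqrt(s_b1**2 + s_b2**2)
    df = n1 + n2 - 4                        # df pooled from both regressions
    return t, 2 * stats.t.sf(abs(t), df)    # t statistic and two-tailed p-value
```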

Excel Functions: where R1 = the array of observed y values and R2 = the array of observed x values.

STEYX(R1, R2) = standard error of the estimate sy∙x = SQRT(MSRes)

LINEST(R1, R2, TRUE, TRUE) – an array function which generates a number of useful statistics.

To use LINEST, begin by highlighting a blank 5 × 2 region, enter =LINEST( and then highlight the R1 array, enter a comma, highlight the R2 array and finally enter ,TRUE,TRUE) and press Ctrl-Shift-Enter.

The LINEST function returns a number of values, but unfortunately no labels for these values. To make all of this clearer, Figure 2 displays the output from LINEST(B4:B18, A4:A18, TRUE, TRUE) using the data in Figure 1. I have added the appropriate labels manually for clarity.


Figure 2 – LINEST(B4:B18,A4:A18,TRUE,TRUE) for data in Figure 1

R Square is the coefficient of determination r2 (see Definition 2 of Basic Concepts of Correlation), while all the other values are as described above, with the exception of the standard error of the y-intercept, which will be explained shortly.
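For readers working outside Excel, the full LINEST grid can be reproduced with a short Python sketch (the function name linest is mine; the row layout matches Figure 2, and the standard error of the y-intercept uses the standard formula sy∙x · √(1/n + x̄²/SSx)):

```python
import numpy as np

def linest(x, y):
    """Compute the same 5 x 2 grid as Excel's LINEST(R1, R2, TRUE, TRUE)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # slope
    a = y.mean() - b * x.mean()                          # y-intercept
    yhat = a + b * x
    ss_res = np.sum((y - yhat) ** 2)                     # SSRes
    ss_reg = np.sum((yhat - y.mean()) ** 2)              # SSReg
    df = n - 2                                           # dfRes
    s_yx = np.sqrt(ss_res / df)                          # STEYX
    s_b = s_yx / (np.std(x, ddof=1) * np.sqrt(n - 1))    # std error of slope
    s_a = s_yx * np.sqrt(1 / n + x.mean() ** 2 / ((n - 1) * np.var(x, ddof=1)))
    r2 = ss_reg / (ss_reg + ss_res)                      # R Square
    F = (ss_reg / 1) / (ss_res / df)                     # F = MSReg / MSRes
    return [[b, a], [s_b, s_a], [r2, s_yx], [F, df], [ss_reg, ss_res]]
```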

Excel also provides a Regression data analysis tool. The creation of a regression line and hypothesis testing of the type described in this section can be carried out using this tool. Figure 3 displays the principal output of this tool for the data in Example 1.


Figure 3 – Output from Regression data analysis tool

The following is a description of the fields in this report:

Summary Output:

  • Multiple R – correlation coefficient (see Definition 1 of Multiple Correlation, although since there is only one independent variable this is equivalent to Definition 2 of Basic Concepts of Correlation)
  • R Square – coefficient of determination (see Definition 1 of Multiple Correlation), i.e. the square of Multiple R
  • Adjusted R Square – see Definition 2 of Multiple Correlation
  • Standard Error = SQRT(MSRes); this can also be calculated using Excel’s STEYX function
  • Observations – sample size

ANOVA (second table):

  • The first row lists the values for dfReg, SSReg, MSReg, F = MSReg/MSRes and p-value
  • The second row lists the values for dfRes, SSRes and MSRes
  • The third row lists the values for dfT and SST (a computational sketch follows this list)
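Here is a minimal sketch of how the ANOVA entries relate to one another (the SS values below are made up for illustration; in Figure 3 they come from the regression itself):

```python
from scipy import stats

# Hypothetical sums of squares for a simple regression with n = 15 observations
ss_reg, ss_res, n = 800.0, 770.0, 15
df_reg, df_res, df_t = 1, n - 2, n - 1     # dfT = dfReg + dfRes
ms_reg, ms_res = ss_reg / df_reg, ss_res / df_res
F = ms_reg / ms_res                        # F = MSReg / MSRes
p_value = stats.f.sf(F, df_reg, df_res)    # right tail of the F distribution
ss_t = ss_reg + ss_res                     # SST = SSReg + SSRes
```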

Coefficients (third table):

The third table gives key statistics for testing the y-intercept (Intercept in the table) and slope (Cig in the table). We will explain the intercept statistics in Confidence and Prediction Intervals for Forecasted Values. The slope statistics are as follows (a sketch reproducing this table in code appears after the list):

  • Coefficients – value for the slope of the regression line
  • Standard Error – standard error of the slope, sb = sy∙x / (sx · SQRT(n-1))
  • t-Stat = b/sb
  • P-value = TDIST(ABS(t), dfRes, 2); i.e. the 2-tailed value
  • 95% confidence interval = b ± tcrit · sb
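One way to reproduce the whole coefficient table outside of Excel is with the statsmodels library; a minimal sketch follows (the data values below are placeholders with the same shape as Example 1, not the actual worksheet values):

```python
import numpy as np
import statsmodels.api as sm

# Placeholder data (15 observations); substitute the actual Cig and
# Life Exp columns from the worksheet in Figure 1.
cig = np.array([5, 23, 25, 48, 17, 8, 4, 26, 11, 19, 14, 35, 29, 4, 23])
life_exp = np.array([80, 78, 60, 53, 85, 84, 73, 79, 81, 75, 68, 72, 58, 92, 65])

X = sm.add_constant(cig)             # adds the Intercept column
model = sm.OLS(life_exp, X).fit()
print(model.summary())               # Coefficients, Standard Error, t-Stat, P-value
print(model.conf_int(alpha=0.05))    # the 95% confidence intervals
```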

In addition to the principal results described in Figure 3, one can optionally generate a table of residuals and table of percentiles as described in Figure 4.


Figure 4 – Additional output from Regression data analysis tool for the data in Example 1

Residual Output:

  • Predicted Life Exp = Cig * b + a; i.e. ŷ
  • Residuals = Observed Life Exp – Predicted Life Exp; i.e. y – ŷ
  • Standard Residuals = Residuals / Std Dev of the Residuals (since the mean of the residuals is expected to be 0); i.e. e/se

For example, for Observation 1 we have

  • Predicted Life Exp = -.63 * 5 + 85.72 = 82.58
  • Residuals = 80 – 82.58 = -2.58
  • Standard Residuals = -2.58 / 7.69 = -.336

Note that the mean of the residuals is approximately 0 (which is consistent with a key assumption of the regression model) and that their standard deviation is 7.69.
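The Observation 1 arithmetic above is easy to verify in a few lines of Python (using the unrounded slope -.628 reported earlier, which is where the predicted value 82.58 comes from):

```python
# Verify the Observation 1 calculations (slope and intercept as reported above)
b, a = -0.628, 85.72
cig_1, life_exp_1 = 5, 80     # Observation 1
s_e = 7.69                    # std dev of the residuals

predicted = b * cig_1 + a             # 82.58
residual = life_exp_1 - predicted     # -2.58
std_residual = residual / s_e         # ≈ -.336
```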

There is also the option to produce certain charts, which we will review when discussing Example 2 of Multiple Regression Analysis.

14 Responses to Testing the significance of the slope of the regression line

  1. Birend Dhungana says:

    Thank you for the post.
    I am trying to generate a calibration curve consisting 5 data points. I have six sets of replicate measurement data. Now I am wondering, what is the best approach of finding/reporting standard deviations of slope and intercept: a) get single data set by averaging values from replicate measurements and use LINEST function or b) obtain 6 slope and intercept values by plotting each data set of replicate measurement separately and calculate standard deviations of slopes and intercept.
    Also, how can I calculate confidence interval if the approach (a) is used?

    I have similar set of data obtained for slightly different condition and I need to compare if resulting slopes for these two different conditions significantly different?

  2. Jérémie says:

    Hi! Thank you for your site, I find it very useful. I have a question about linear regression. I have performed a test to check the correlation between two variables. For this test, 11 points were each measured 4 times (e.g. at the temperature of 4.5, the replicates were 4, 7, 6.5, 8.1). If I test the significance of the correlation using the average of each point (11 points on the curve), I can’t reject the null (p = 0.073). However, if I use all 44 points, I can reject the null (p = 0.0005). I am wondering which treatment is correct and why? Thank you in advance.

    • Charles says:

      One of the assumptions for linear regression is that the observations are independent. In the 44 point case, you clearly don’t have independent observations (since there are 4 measurements for each of the 11 points).

  3. Mehdi says:

    Hi, thanks for your useful and clear explanations.
    I’ve used EViews software to estimate a dependent variable as a function of 8 independent variables. The R^2 for my model is high (0.72), but the t values for the parameters of the independent variables are very low (in some cases less than 0.001).
    I want to know: should I remove the variables with low t values? And is R square more important than the t value?

    • Charles says:

      The R^2 value is a measure of the overall fit of the model. The p-value (not the t statistic) of each coefficient is a measure of whether the corresponding variable is contributing much to the model. You can remove a variable whose p-value is not significant (this indicates that the corresponding coefficient is statistically equivalent to zero) and see what impact this has on the R^2 value. This is explained on the webpage

      Significance regression model variables


  4. Vicki says:

    Hi, thanks for the post, it is very helpful. I need to calculate LOQ and LOD for my work, and I used the ICH guideline (standard deviation of the y-intercept divided by the slope) to calculate them. I ran the Regression data analysis tool and got the standard error of the y-intercept (3rd table). I also calculated STEYX. The two values do not match. Are they supposed to match? I am very confused. Also, ICH calls for a standard deviation, but both the 3rd table and STEYX are labeled standard error rather than standard deviation; I searched online and everybody said they are the same thing. Can I send you the Excel file?

  5. djU says:

    Hi, I was just wondering: how do I know which independent variables are significant and which are not?

  6. Ryan says:

    Under Figure 3, below Summary Output, I believe “R Square – correlation of determination” should be “coefficient of determination”. Also, does the Real Statistics Data Analysis Tools offer a variance inflation factor for linear regression? Or a scatterplot matrix feature to check for multicollinearity?

  7. krithi Subramanian says:

    I am doing multiple regression in Excel 2007. I have one dependent variable and 18 independent variables, but I am not able to run the regression on all my data at once.
    It gives a warning message like this: only 16 columns are available. Can you please give me a solution?

    • Charles says:

      Excel’s Regression data analysis tool is limited to 16 independent variables. You can use the Real Statistics Linear Regression data analysis tool instead. This tool supports up to 64 independent variables and is part of the Real Statistics Resource Pack, which you can download for free from this website.
