We will now extend the method of least squares to equations with multiple independent variables of the form

*y* = *β*_{0} + *β*_{1} *x*_{1} + ⋯ + *β*_{k} *x*_{k}

As in Method of Least Squares, we express this line in the form

*ŷ* = *b*_{0} + *b*_{1} *x*_{1} + ⋯ + *b*_{k} *x*_{k}
Given a set of *n* points (*x*_{11}, …, *x*_{1k}, *y*_{1}), …, (*x*_{n1}, …, *x*_{nk}, *y*_{n}), our objective is to find a line of the above form which best fits the points. As in the simple regression case, this means finding the values of the *b*_{j} coefficients for which the sum of the squares

SSE = Σ (*y*_{i} − *ŷ*_{i})²   (summed over *i* = 1, …, *n*)

is minimum, where *ŷ*_{i} is the *y*-value on the best-fit line corresponding to *x*_{i1}, …, *x*_{ik}.

**Definition 1**: The best-fit line is called the (**multiple**) **regression line**.

**Theorem 1**: The regression line has form

*ŷ* − *ȳ* = *b*_{1}(*x*_{1} − *x̄*_{1}) + ⋯ + *b*_{k}(*x*_{k} − *x̄*_{k})

where the coefficients *b*_{m} are the solutions to the following *k* equations in *k* unknowns:

cov(*y*, *x*_{m}) = *b*_{1} cov(*x*_{1}, *x*_{m}) + ⋯ + *b*_{k} cov(*x*_{k}, *x*_{m})   for *m* = 1, …, *k*

Click here for a proof of Theorem 1 (using calculus).

**Observation**: We can use either the population or sample formulas for covariance (as long as we stick to one or the other). Thus, we can use the Excel function COVAR for the population covariance (or COVARIANCE.P in Excel 2010/2013) or COVARIANCE.S for the sample covariance in Excel 2010/2013 (or the supplemental function COVARS), although as we will see below there are ways of calculating all the covariances together. Note too that where *j* = *m*, cov(*x*_{j}, *x*_{m}) is simply var(*x*_{j}).

**Example 1**: A jeweler prices diamonds on the basis of quality (with values from 0 to 8, with 8 being flawless and 0 containing numerous imperfections) and color (with values from 1 to 10, with 10 being pure white and 1 being yellow). Based on the price per carat (in hundreds of dollars) of the following 11 diamonds weighing between 1.0 and 1.5 carats, determine the relationship between quality, color and price.

**Figure 1 – Data for Example 1**

As described above, we need to solve the following equations:

cov(*y*, *x*_{1}) = *b*_{1} cov(*x*_{1}, *x*_{1}) + *b*_{2} cov(*x*_{2}, *x*_{1})

cov(*y*, *x*_{2}) = *b*_{1} cov(*x*_{1}, *x*_{2}) + *b*_{2} cov(*x*_{2}, *x*_{2})

where *x*_{1} = color and *x*_{2} = quality, which for our problem yields the following equations (using the sample covariances, which can be calculated as described in Basic Concepts of Correlation):

20.5 = 5.80*b*_{1} − 2.10*b*_{2}

15.34 = −2.10*b*_{1} + 6.82*b*_{2}

For this example, finding the solution is quite straightforward: *b*_{1} = 4.90 and *b*_{2} = 3.76. Thus the regression line takes the form

*ŷ* − *ȳ* = *b*_{1}(*x*_{1} − *x̄*_{1}) + *b*_{2}(*x*_{2} − *x̄*_{2})
Using the means found in Figure 1, the regression line for Example 1 is

(Price – 47.18) = 4.90 (Color – 6.00) + 3.76 (Quality – 4.27)

or equivalently

Price = 4.90 ∙ Color + 3.76 ∙ Quality + 1.75

Thus, the coefficients are *b*_{0} = 1.75, *b*_{1} = 4.90 and *b*_{2} = 3.76.
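As a numerical check, the 2 × 2 system above can be solved in a few lines of code. This sketch uses the covariance values and means quoted above (5.80, 6.82, −2.10, 20.5, 15.34, and the Figure 1 means); since these inputs are rounded, the results match the published coefficients only approximately.

```python
import numpy as np

# Covariance system from Example 1 (x1 = color, x2 = quality):
#   cov(y, x1) = b1*cov(x1, x1) + b2*cov(x2, x1)
#   cov(y, x2) = b1*cov(x1, x2) + b2*cov(x2, x2)
A = np.array([[5.80, -2.10],
              [-2.10, 6.82]])
C = np.array([20.5, 15.34])

b1, b2 = np.linalg.solve(A, C)      # approximately 4.89 and 3.76

# Intercept from the means in Figure 1: b0 = mean(y) - b1*mean(x1) - b2*mean(x2)
b0 = 47.18 - b1 * 6.00 - b2 * 4.27  # approximately 1.77 with these rounded means
```

The small differences from 4.90, 3.76 and 1.75 come from rounding of the covariances and means, not from the method.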

**Observation**: The fact that coefficient *b*_{1} is larger than *b*_{2} doesn’t mean that it plays a stronger role in the prediction described by the regression line. If, however, we standardize all the variables that are used to create the regression line, then indeed the coefficients that have a larger absolute value do have a greater influence in the prediction defined by the regression line. Note that if we do this the intercept will be zero.
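To illustrate the point about standardization, here is a small sketch with made-up data (not the diamond data) showing that after standardizing *y* and the *x*'s, the fitted intercept is zero and the coefficients become directly comparable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 3.0 + 4.9 * x1 + 3.76 * x2 + rng.normal(scale=0.5, size=n)

def standardize(v):
    # z-score, analogous to Excel's STANDARDIZE with the sample std dev
    return (v - v.mean()) / v.std(ddof=1)

# Regress standardized y on standardized x1, x2 (with an intercept column)
X = np.column_stack([np.ones(n), standardize(x1), standardize(x2)])
b0, beta1, beta2 = np.linalg.lstsq(X, standardize(y), rcond=None)[0]
# b0 is zero (up to floating-point error); beta1 and beta2 are the
# standardized coefficients whose absolute sizes can be compared
```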

**Observation**: With only two independent variables, it is relatively easy to calculate the coefficients for the regression line as described above. With more variables, this approach becomes tedious, and so we now define a more refined method.

**Definition 2**: Given *m* random variables *x*_{1}, *x*_{2}, …, *x*_{m} and a sample *x*_{1j}, *x*_{2j}, …, *x*_{nj} of size *n* for each random variable *x*_{j}, the **covariance matrix** is an *m* × *m* array of form [*c*_{ij}] where *c*_{ij} = cov(*x*_{i}, *x*_{j}). The **correlation matrix** is an *m* × *m* array of form [*c*_{ij}] where *c*_{ij} is the correlation coefficient between *x*_{i} and *x*_{j}.

The **sample covariance matrix** is the covariance matrix where the *c*_{ij} refer to the sample covariances, and the **population covariance matrix** is the covariance matrix where the *c*_{ij} refer to the population covariances. Since the corresponding sample and population correlation matrices are the same, we refer to them simply as the correlation matrix.

**Property 0**: If *X* is the *n* × *m* array [*x*_{ij}] and *x̄* is the 1 × *m* array [*x̄*_{j}], then the sample covariance matrix *S* and the population covariance matrix *Σ* have the following property:

*S* = (*X* − *x̄*)^{T}(*X* − *x̄*) / (*n* − 1)   and   *Σ* = (*X* − *x̄*)^{T}(*X* − *x̄*) / *n*
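In code, Property 0 amounts to subtracting each column mean and multiplying the transpose of the centered array by itself. A sketch with arbitrary made-up data, checked against numpy's built-in covariance:

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 0.0, 1.0],
              [2.0, 5.0, 2.0],
              [3.0, 1.0, 4.0]])
n = X.shape[0]
D = X - X.mean(axis=0)       # subtract the 1 x m row of column means

S = D.T @ D / (n - 1)        # sample covariance matrix
Sigma = D.T @ D / n          # population covariance matrix

# np.cov expects variables in rows, so pass the transpose
print(np.allclose(S, np.cov(X.T, ddof=1)))      # True
print(np.allclose(Sigma, np.cov(X.T, ddof=0)))  # True
```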

**Example 2**: Find the regression line for the data in Example 1 using the covariance matrix.

The approach is described in Figure 2.

**Figure 2 – Creating the regression line using the covariance matrix**

The sample covariance matrix for this example is found in the range G6:I8. Since we have 3 variables, it is a 3 × 3 matrix. In general, the covariance matrix is a (*k*+1) × (*k*+1) matrix where *k* = the number of independent variables. The sample covariance matrix can be created in Excel, cell by cell, using the COVARIANCE.S or COVARS function. Alternatively, using Property 0, it can be created by highlighting the range G6:I8 and using the following array formula:

=MMULT(TRANSPOSE(A4:C14-A15:C15),A4:C14-A15:C15)/(B17-1)

(see Matrix Operations for more information about these matrix operations). The sample covariance matrix can also be created using the following supplemental array function (as described below):

=COV(A4:C14)

Note that the linear equations that need to be solved arise from the first 2 rows (in general, the first *k* rows) of the covariance matrix, which we have repeated in the range G12:I13 of Figure 2.

Solving this system of linear equations is equivalent to solving the matrix equation *AX* = *C*, where *X* is the *k* × 1 column vector consisting of the *b*_{j}, *C* is the *k* × 1 column vector consisting of the constant terms and *A* is the *k* × *k* matrix consisting of the coefficients of the *b*_{j} terms in the above equations. Using the techniques of Matrix Operations and Simultaneous Linear Equations, the solution is given by *X* = *A*^{-1}*C*. For this example the solution *A*^{-1}*C* is located in the range K16:K17, and can be calculated by the array formula:

=MMULT(MINVERSE(F16:G17),I16:I17)

Thus *b*_{1} is the value in cell K16 (or G20) and *b*_{2} is the value in cell K17 (or G21). The value of the coefficient *b*_{0} (in cell G19) is found using the following Excel formula:

=C15-MMULT(A15:B15,G20:G21)
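The whole Example 2 procedure can also be sketched outside Excel. The data below are made up (the Figure 2 worksheet values are not reproduced here); the steps mirror the worksheet: build the (*k*+1) × (*k*+1) sample covariance matrix, take *A* and *C* from its first *k* rows, solve *AX* = *C*, recover *b*_{0} from the means, and cross-check against a direct least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 2
X = rng.normal(size=(n, k))                     # the independent variables
y = 1.75 + 4.9 * X[:, 0] + 3.76 * X[:, 1] + rng.normal(size=n)

data = np.column_stack([X, y])                  # columns: x1, ..., xk, y
S = np.cov(data.T, ddof=1)                      # (k+1) x (k+1) sample covariance matrix

A = S[:k, :k]                                   # coefficients of the b_j terms
C = S[:k, k]                                    # constant terms cov(y, x_j)
b = np.linalg.solve(A, C)                       # X = A^{-1} C
b0 = y.mean() - X.mean(axis=0) @ b              # intercept from the means

# Cross-check against an ordinary least-squares fit on the raw data
ref = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)[0]
print(np.allclose(np.concatenate([[b0], b]), ref))  # True
```

The agreement is exact (up to floating point) because the covariance-matrix equations are an algebraic rearrangement of the ordinary least-squares normal equations.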

**Real Statistics Excel Support**: The Real Statistics Resources Pack provides the following supplemental array functions:

**COV**(R1, *b*) = the covariance matrix for the sample data contained in range R1, organized by columns. If R1 is an *n* × *k* array (i.e. *k* variables, each with a sample of size *n*), then COV(R1) is a *k* × *k* array.

**COVP**(R1, *b*) = the population covariance matrix for the data contained in range R1. The result is the same as COV(R1) except that entries use the population version of covariance (i.e. division by *n* instead of *n* – 1).

**CORR**(R1, *b*) = the correlation matrix for the data contained in range R1.

If *b* = TRUE (default) then any row in R1 which contains a blank or non-numeric cell is not used, while if *b* = FALSE then correlation/covariance coefficients are calculated pairwise (by columns) and so any row which contains non-numeric data for either column in the pair is not used to calculate that coefficient value.

The Real Statistics Resource Pack also contains a **Matrix Operations** data analysis tool which includes similar functionality.

**Observation**: Let R1 be an *n* × *k* range which contains only numeric values, let R2 be a 1 × *k* range containing the means of the columns in R1 and let R3 be a 1 × *k* range containing the standard deviations of the columns in R1. Then

COV(R1) = MMULT(TRANSPOSE(R1-R2),R1-R2)/(ROWS(R1)–1)

CORR(R1) = MMULT(TRANSPOSE((R1-R2)/R3),(R1-R2)/R3)/(ROWS(R1)–1)
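Both array formulas translate directly into matrix code: the covariance matrix is the cross-product of the mean-centered columns, and the correlation matrix is the same computation on standardized columns. A sketch with arbitrary made-up data:

```python
import numpy as np

R1 = np.array([[1.0, 2.0, 4.0],
               [2.0, 1.0, 3.0],
               [4.0, 3.0, 1.0],
               [3.0, 5.0, 2.0],
               [5.0, 4.0, 5.0]])
n = R1.shape[0]
R2 = R1.mean(axis=0)              # column means
R3 = R1.std(axis=0, ddof=1)       # column (sample) standard deviations

cov = (R1 - R2).T @ (R1 - R2) / (n - 1)
Z = (R1 - R2) / R3                # standardized columns
corr = Z.T @ Z / (n - 1)

print(np.allclose(cov, np.cov(R1.T, ddof=1)))   # True
print(np.allclose(corr, np.corrcoef(R1.T)))     # True
```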

Charles,

Can’t thank you enough for the information. Question for you: I’d like to perform a weighted MLE in Excel (minimizing the weighted squared error with weights I define) without using an add-in (I have to share the sheet with various users who will not all be able to install outside software). I was cheating and using solver but I’m finding it is giving me unusual (and often incorrect) answers. I figured out how to do it mathematically for an OLE but I’m stumped on how to do it for an MLE. Can you help?

Bonus question: is there also a way to do it with constraints on the variables? For example, say in B1 x X1 +B2 x X2 + B3 I want B1 >=1, B2 >=0, and B3 >=0.

Thanks!

Ethan

Ethan,

You should be able to use Solver for this. If you send me an Excel file with your data and analysis I will try to understand why Solver is giving unusual results.

See also the following webpage regarding how to perform weighted linear regression.

Weighted Linear Regression

Charles

How would you standardize the variables to see which ones have a greater influence on the prediction?

Jack,

You can standardize the data for each variable by using Excel’s STANDARDIZE function (based on the mean and std dev for each variable), but this won’t tell you which variables have the greater influence on the prediction. You can use the Shapley-Owen decomposition for this. See the following webpage for details:

http://www.real-statistics.com/multiple-regression/shapley-owen-decomposition/

Charles

Hi, please why are linear and multiple regression called “least square” regression? And why do we calculate confidence interval for slope and intercept?

Aloysius,

This is because the regression algorithm is based on finding coefficient values that minimize the sum of the squares of the residuals (i.e. the difference between the observed values of y and the values predicted by the regression model) – this is where the “least squares” notion comes from.

You don’t need to calculate confidence intervals for the slope(s) and intercept, but the idea here is to find the range of probable values for the real slopes and intercept (by real I mean the population values of these parameters). If the confidence interval for the slope (or intercept) contains zero, then statistically speaking you can assume that that slope (or intercept) value is zero, i.e. can be removed from the model.

Charles

Sir, how to analyze the use of categorical predictor variables?

e.g.: gender (male/female), area (urban/village), etc.

Using these predictor variables, how to develop the multiple linear regression model? The response variable is student mathematics marks. Sir, can you give me the idea? Thank you, sir.

This is done using dummy variables. Please look at the following webpage for details:

http://www.real-statistics.com/multiple-regression/multiple-regression-analysis/categorical-coding-regression/

You can also use the Search box to find other descriptions of dummy variables in the website.

Charles

Hi,

Thanks a lot for the nice detailed explanation.

One question, what does the final regression line look like in the Example 2?

Thanks!

See Example of the webpage Multiple Regression Analysis in Excel.

Charles

Just wanted to thank you for these web sites.

In general I would say these are probably the best web sites I have ever come across!

Best Regards,

Oskari

Oskari,

Thank you very much. I am very pleased that you like the website.

Charles

Can anyone please help me out in solving the following problem:

35.36αul + 1.16Xul + 34.2ωul = 19.41

Need to find out values of αul, Xul, and ωul = 0

that will satisfy the equation closely.

An early response would be much appreciated.

There are an infinite number of exact solutions to the equation that you have given. There are no solutions where αul = 0, Xul = 0 and ωul = 0. But I don’t think this is the intended question. Please explain better what you are looking for.

Charles

Can you please explain the following:

For this example, finding the solution is quite straightforward: b1 = 4.90 and b2 = 3.76.

How did you end up with the factors for b1 and b2, 4.9 and 3.76, respectively.

Stan,

This is explained on the referenced webpage. Essentially you generate the following two linear equations

20.5 = 5.80b1 – 2.10b2

15.35 = -2.10b1 + 6.82b2

and then you find the solution using high school algebra. E.g. if you multiply the first equation by 2.10 and multiply the second equation by 5.80 and then add the two equations together, the b1 term will drop out and you can solve the resulting equation for b2. Once you have the value for b2, you can substitute it in the first equation and solve for b1.

Charles

Hi Charles,

I made the calculations of the equations below Figure 1 and I found a problem… the result of my calculation is

20.5 = 5.8b1 - 2.1b2

15.34 = -2.1b1 - 6.82b2

So, cov(y,x1) = 15.34, cov(x1,x2) = -2.10, cov(x1,x1) = 6.82, cov(x2,x2) = 5.8

and cov(y,x2) = 20.5

with color = x2 and quality = x1 (as you say in the start of the text).

Are there any mistakes in the equations?

Everything looks good except that you made a typo in the second equation. It should be 15.34 = -2.1b1 + 6.82b2.

Charles

Good, clearly written approach – thanks!

Thank you for the formulas. My textbook and Prof ( for online course) are both useless. This helped a lot and I was actually able to do my assignment. Appreciate it!!

Thanks a lot for this website! Really helped a lot..

Hi Charles, do you have a matlab code for the least square with an example?

Hi Emrah,

Sorry, but I don’t have MATLAB code for least squares. Examples using Excel can be found on my website.

Charles

I do, if it is still not too late. There are also a lot of regression add-ons for MATLAB on the File Exchange. Let me know, and good luck.

Hi Charles,

Just wanted to say a massive thank you for creating this whole site. It has fantastically written pieces with the relevant mathematically formulations for those who wish to fully understand the processes and brilliant examples for those who just wish to use them. I wished to do both so to have both in one place in depth is invaluable.

Many, many thanks,

Josh

Josh,

I really appreciate your comment and am very pleased that you have benefitted from the website.

Charles

Hi,

sir how you found 20.5= 5.80b1-2.10b2 from figure 1 by using the equation

cov(y,x1)=b1 cov(x1,x1)+b2 cov(x2,x1)

You have written only one equation, but there are two, not just one. In fact there are two linear equations in two unknowns: the coefficients b1 and b2 are the unknowns, while the values for cov(y,x1), cov(x1,x2), etc. are known (they can be calculated from the sample data values). Normal algebra can be used to solve two equations in two unknowns. With more equations and more unknowns you can still use algebra, but you can also use the techniques shown elsewhere on the site.

Charles

Hi,

I think the second formula below figure 1 contains an error:

http://www.real-statistics.com/wp-content/uploads/2013/02/image1824.png

It should probably read:

cov(y,x2) =

Regards,

Rainer

Rainer,

Another good catch. I have made the change. Thanks for your careful examination. It is much appreciated.

Charles