**Definition 1**. Matrices of the same shape can be added and subtracted.

Let *A* and *B* be *r* × *c* matrices with *A* = [*a*_{ij}] and *B* = [*b*_{ij}]. Then *A* + *B* is an *r* × *c* matrix with *A* + *B* = [*a*_{ij} + *b*_{ij}], and *A* – *B* is an *r* × *c* matrix with *A* – *B* = [*a*_{ij} – *b*_{ij}].

**Definition 2**: A matrix can be multiplied (or divided) by a scalar. A scalar can also be added to (or subtracted from) a matrix.

Let *A* be an *r* × *c* matrix with *A* = [*a*_{ij}] and let *b* be a scalar. Then *bA* and *A* + *b* are *r* × *c* matrices where *bA* = [*b* · *a*_{ij}] and *A* + *b* = [*a*_{ij} + *b*]. We can define *Ab* and *b* + *A* in a similar fashion. Clearly, *b* + *A* = *A* + *b* and *Ab* = *bA*. Division and subtraction of matrices by scalars can be defined similarly.

**Definition 3**: Two matrices can also be multiplied, but only if they have compatible shape.

Let *A* be a *p* × *m* matrix with *A* = [*a*_{ij}], and let *B* be an *m* × *n* matrix with *B* = [*b*_{jk}]. Then *AB* is a *p* × *n* matrix with *AB* = [*c*_{ik}] where

*c*_{ik} = *a*_{i1}*b*_{1k} + *a*_{i2}*b*_{2k} + ⋯ + *a*_{im}*b*_{mk}

**Observation**: For the multiplication *AB* to be valid, the number of columns in *A* must equal the number of rows in *B*. The resulting matrix will have the same number of rows as *A* and the same number of columns as *B*.

The associative law holds, namely (*AB*)*C* = *A*(*BC*), i.e. it doesn’t matter whether you multiply *A* by *B* and then multiply the result by *C*, or first multiply *B* by *C* and then multiply *A* by the result. It is essential that the matrices have compatible shapes. Thus if *A* is *p × m*, *B* is *m × n* and *C* is *n × s*, then *ABC* will have shape *p × s*. The distributive laws, namely *A*(*B + C*) = *AB* + *AC* and (*A + B*)*C* = *AC* + *BC*, also hold.

The commutative law of addition holds, namely *A + B = B + A*, but the commutative law of multiplication does not hold even when the matrices have suitable shapes; thus, even for two *n* × *n* matrices *A* and *B*, *AB* is not necessarily equal to *BA*. For square matrices, though, the trace of *AB* is equal to the trace of *BA*.
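A short Python sketch (an illustration added here, not from the original article) of the multiplication rule *c*_{ik} = Σ_{j} *a*_{ij}*b*_{jk}, checking the associative law and the failure of commutativity on concrete 2 × 2 matrices:

```python
# Matrix multiplication per Definition 3: c_ik = sum over j of a_ij * b_jk.

def mat_mul(A, B):
    """AB for a p x m matrix A and an m x n matrix B."""
    p, m, n = len(A), len(B), len(B[0])
    assert len(A[0]) == m, "columns of A must equal rows of B"
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(n)]
            for i in range(p)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [0, 2]]

# Associative law: (AB)C == A(BC)
assert mat_mul(mat_mul(A, B), C) == mat_mul(A, mat_mul(B, C))
# Commutativity fails in general: AB != BA
assert mat_mul(A, B) != mat_mul(B, A)
```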

**Property 0**: For square matrices *A* and *B* of the same size and scalar *c*:

- Trace(*A* + *B*) = Trace(*B* + *A*)
- Trace(*cA*) = *c* Trace(*A*)
- Trace(*AB*) = Trace(*BA*)

Proof: The proofs are straightforward, based on the definition of trace and matrix addition and multiplication.
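These trace identities can be checked numerically with a small Python sketch (lists of lists, an added illustration rather than part of the original text):

```python
# Trace of a square matrix: the sum of the main-diagonal entries.

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def mat_mul(A, B):
    m, n = len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(n)]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert trace(A) == 5
# Trace(AB) = Trace(BA), even though AB != BA in general
assert trace(mat_mul(A, B)) == trace(mat_mul(B, A))
# Trace(cA) = c * Trace(A)
assert trace([[3 * a for a in row] for row in A]) == 3 * trace(A)
```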

**Definition 4**: The **transpose** of an *r × c* matrix *A* = [*a*_{ij}] is the *c × r* matrix *A*^{T} = [*a*_{ji}].

A (square) matrix *A* is **symmetric** if *A* = *A*^{T}.

**Property 1**:

- (*A*^{T})^{T} = *A*
- (*AB*)^{T} = *B*^{T}*A*^{T}
- If *A* and *B* are symmetric and *AB* = *BA*, then *AB* is symmetric
- Trace(*A*) = Trace(*A*^{T})

Proof: We prove (c). Assume that *A* and *B* are symmetric and *AB* = *BA*. By Definition 4 and Property 1b, *AB* = *A*^{T}*B*^{T} = (*BA*)^{T} = (*AB*)^{T}, and so *AB* is symmetric.
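Properties 1a and 1b can be verified on concrete matrices with a Python sketch (an added illustration using lists of lists):

```python
# Transpose of a matrix: A^T = [a_ji], built by reading off columns.

def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
# Property 1a: (A^T)^T = A
assert transpose(transpose(A)) == A
# Property 1b: (AB)^T = B^T A^T  (note the reversed order)
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```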

**Observation**: If *A* is a column vector then *A*^{T}*A* is a scalar. In fact, *A*^{T}*A* = ‖*A*‖^{2}. Thus, a column vector *A* is a unit vector if and only if *A*^{T}*A* = 1.

**Definition 5**: An *n* × *n* matrix *A* is **invertible** (also called **non-singular**) if there is a matrix *B* such that *AB* = *BA* = *I*_{n}.

*A*^{-1} is the **inverse** of *A* provided *AA*^{-1} = *A*^{-1}*A* = *I*_{n}. A matrix which is not invertible is called **singular**.

**Property 2**: If *A* is invertible, then the inverse is unique.

Proof: Suppose *B* and *C* are inverses of *A*. Then by the associative law, *C* = *IC* = (*BA*)*C = B*(*AC*) = *BI = B,* and so *C = B*.

**Observation**: In fact, if there is a matrix *B* such that *AB* = *I*_{n} or *BA* = *I*_{n}, then *A* is invertible and *A*^{-1} = *B*.

**Property 3**: If *A* and *B* are invertible, then (*A*^{-1})^{-1} = *A* and (*AB*)^{-1} = *B*^{-1}*A*^{-1}.

Proof: The first assertion follows from Property 2: since *A*^{-1}*A* = *AA*^{-1} = *I*, *A* is an inverse of *A*^{-1}, and by Property 2 it is the unique one. Since (*AB*)(*B*^{-1}*A*^{-1}) = *A*(*BB*^{-1})*A*^{-1} = *AIA*^{-1} = *AA*^{-1} = *I*, the second assertion follows from the Observation above.

**Property 4**: If *A* is invertible, then so is its transpose and (*A*^{T})^{-1} = (*A*^{-1})^{T}.

Proof: By Property 1b, *A*^{T}(*A*^{-1})^{T} = (*A*^{-1}*A*)^{T} = *I*^{T} = *I. *Similarly, (*A*^{-1})^{T}*A*^{T} = (*AA*^{-1})^{T} = *I*^{T} = *I*.

**Property 5**: If *A* is invertible, then *A* is symmetric if and only if *A*^{-1} is symmetric.

Proof: Assume *A* is symmetric, then by Property 4, (*A*^{-1})^{T} = (*A*^{T})^{-1} = *A*^{-1}, and so *A*^{-1} is also symmetric. For the converse, assume that *A*^{-1} is symmetric, then from the above, it follows that (*A*^{-1})^{-1} is symmetric, but by Property 3, this means that *A* is symmetric.

**Example 1**: Find the inverse of the 2 × 2 matrix *A* whose rows are (1, 1) and (–1, 2).

Since the inverse of *A* takes the form *A*^{-1} = [rows (*a*, *b*) and (*c*, *d*)] where *AA*^{-1} = *I*_{2}, it follows that *AA*^{-1} has rows (*a* + *c*, *b* + *d*) and (–*a* + 2*c*, –*b* + 2*d*), which must equal the rows (1, 0) and (0, 1) of *I*_{2}. Thus we need to solve the following four linear equations in four unknowns:

*a* + *c* = 1
*b* + *d* = 0
–*a* + 2*c* = 0
–*b* + 2*d* = 1

Solving these equations yields *a* = 2/3, *b* = –1/3, *c* = 1/3, *d* = 1/3, and so *A*^{-1} has rows (2/3, –1/3) and (1/3, 1/3).
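As a hedged Python sketch (not part of the original article), the same inverse can be computed by row operations (Gauss–Jordan elimination), reducing the augmented matrix [*A* | *I*] to [*I* | *A*^{-1}]. The matrix with rows (1, 1) and (–1, 2) is an assumption consistent with the solution values *a* = 2/3, *b* = –1/3, *c* = 1/3, *d* = 1/3 above; exact `Fraction` arithmetic is used to avoid rounding:

```python
# Gauss-Jordan inversion: reduce [A | I] to [I | A^-1] by row operations.
# Assumes A is square and invertible (a singular A raises StopIteration).

from fractions import Fraction

def inverse(A):
    n = len(A)
    # Augment A with the identity matrix: [A | I]
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row with a non-zero pivot and swap it into place
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry is 1
        M[col] = [x / M[col][col] for x in M[col]]
        # Eliminate the pivot column from every other row
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # right half is now A^-1

A = [[1, 1], [-1, 2]]
print(inverse(A))
# [[Fraction(2, 3), Fraction(-1, 3)], [Fraction(1, 3), Fraction(1, 3)]]
```

This is the "row operation method" for inverting a matrix of any size, not just 2 × 2.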

**Excel Functions**: Excel provides the following array functions to carry out the various matrix operations described above (where we conflate the arrays *A* and *B* with the ranges in an Excel worksheet that contain these arrays).

**MMULT**(*A*, *B*): If *A* is a *p* × *m* array and *B* is an *m* × *n* array, then MMULT(*A*, *B*) = the *p* × *n* matrix *AB*. Note that since this is an array function, you must first highlight a *p* × *n* range before entering =MMULT(*A*, *B*) and then you must press **Ctrl-Shft-Enter**.

**MINVERSE**(*A*): If *A* is an *n × n* square array, then MINVERSE(*A*) = *A*^{-1}. This is an array function and so you must highlight an *n × n* range before entering =MINVERSE(*A*) and then pressing **Ctrl-Shft-Enter**.

**TRANSPOSE**(*A*): If *A* is an *m × n* array, then TRANSPOSE(*A*) = *A*^{T}. This is an array function and so you must highlight an *n × m* range before entering =TRANSPOSE(*A*) and then pressing **Ctrl-Shft-Enter**.

Excel 2013 and Excel 2016 also provide the array function **MUNIT**(*n*) which returns the *n* × *n* identity matrix.

You can also transpose an array *A* in Excel by copying the array (i.e. by highlighting the array and pressing **Ctrl-C**), clicking where you want *A*^{T} located (i.e. the cell at the upper left corner of *A*^{T}) and then selecting **Home > Clipboard|Paste** and choosing the **Transpose** option.

In addition, if *A* and *B* are defined as arrays (e.g. they are named arrays or ranges such as B5:F8, or they are the results of matrix operations such as TRANSPOSE, MINVERSE or MMULT), then they can be manipulated using the +, -, *, / and ^ operators. These operations are done on a cell-by-cell basis.

For example, suppose range B2:C3 contains

If you highlight range D7:E8, enter =2*B2:C3+TRANSPOSE(B2:C3) and then press **Ctrl-Shft-Enter**, D7:E8 will contain

Note that D7:E8 must have the same shape as B2:C3 or an error will result.

Note too that if *A* is an *m × n* matrix and *B* is a 1 × *n* matrix (i.e. a row vector), then *A* + *B* is a valid operation in Excel and gives the same result as *A* + *C* where *C* is an *m × n* matrix all of whose rows contain the same data as *B*. Similarly you can calculate *A* – *B*, *A***B* and *A*/*B*. Also, you can calculate *A* + *B*, *A* – *B*, *A***B* and *A*/*B* where *B* is an *m* × 1 column vector.
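This Excel behavior is a form of broadcasting; a short Python sketch (an added illustration, not Excel itself) of the row-vector case:

```python
# Excel-style broadcasting: adding a 1 x n row vector B to an m x n
# matrix A behaves as if B were repeated for every row of A.

def broadcast_add(A, B):
    """A + B where B is a single-row matrix [[b_1, ..., b_n]]."""
    return [[a + b for a, b in zip(row, B[0])] for row in A]

A = [[1, 2], [3, 4]]
B = [[10, 20]]
print(broadcast_add(A, B))  # [[11, 22], [13, 24]]
```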

E.g. Suppose B2:C3 contains and E2:E3 contains . If you highlight F4:G5, enter =B2:C3–E2:E3 and then press **Ctrl-Shft-Enter**, F4:G5 will contain .

**Real Statistics Data Analysis Tool**: The Real Statistics Resource Pack provides the **Matrix Operations** data analysis tool, which supports a number of matrix operations. These work just like the corresponding worksheet array functions TRANSPOSE, MINVERSE, etc., except that you don’t need to specify the shape and size of the matrix since these are determined automatically from the shape and size of the input matrix.

**Example 2**: Perform all the operations supported by the Matrix data analysis tool for the data in Figure 1.

**Figure 1 – Data for Example 2**

After pressing **Ctrl-m** and selecting the Matrix option, a dialog box appears. Click on the options shown in Figure 2 when the dialog box appears.

**Figure 2 – Dialog box for Example 2**

Note that we didn’t select the **Eigenvalues/vectors** option since the matrix in Figure 1 is not symmetric and so that option is not useful. Instead we chose the **Eigenpairs (non-sym)** option which will display the eigenvalues and eigenvectors for the matrix in Figure 1 even though it is not symmetric.

The results are shown in Figures 3 and 4 (slightly reformatted).

**Figure 3 – Output from Matrix data analysis tool (part 1)**

**Figure 4 – Output from Matrix data analysis tool (part 2)**

**Definition 6**: Vectors *X*_{1}, …, *X*_{k} of the same size and shape are **independent** if for any scalar values *b*_{1}, …, *b*_{k}, if *b*_{1}*X*_{1} + ⋯ + *b*_{k}*X*_{k} = 0, then *b*_{1} = … = *b*_{k} = 0.

Vectors *X*_{1}, …, *X*_{k} are **dependent** if they are not independent, i.e. there are scalars *b*_{1}, …, *b*_{k}, at least one of which is non-zero, such that *b*_{1}*X*_{1} + ⋯ + *b*_{k}*X*_{k} = 0.

**Observation**: If *X*_{1}, …, *X*_{k} are independent, then *X*_{j} ≠ 0 for all *j*.
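Independence can be tested mechanically: the vectors are independent exactly when the matrix whose rows are the vectors has full row rank. A hedged Python sketch (an added illustration, using exact `Fraction` arithmetic and Gaussian elimination):

```python
# Rank via Gaussian elimination; vectors (as rows) are independent
# iff the rank equals the number of vectors.

from fractions import Fraction

def rank(rows):
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0  # index of the next pivot row
    for col in range(len(M[0])):
        # Find a row at or below r with a non-zero entry in this column
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # Eliminate this column from every other row
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    return rank(vectors) == len(vectors)

assert independent([[1, 0, 0], [0, 1, 0]])      # no linear relation
assert not independent([[1, 2, 3], [2, 4, 6]])  # second vector = 2 * first
```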

**Property 6**: *X*_{1}, …, *X _{k}* are dependent if and only if at least one of the vectors can be expressed as a linear combination of the others.

Proof: Suppose *X*_{1}, …, *X*_{k} are dependent. Then there are scalars *b*_{1}, …, *b*_{k}, at least one of which is non-zero, such that *b*_{1}*X*_{1} + ⋯ + *b*_{k}*X*_{k} = 0. Say *b*_{i} ≠ 0. Then

*X*_{i} = –(*b*_{1}/*b*_{i})*X*_{1} – ⋯ – (*b*_{k}/*b*_{i})*X*_{k}

where the term for *j* = *i* is omitted from the sum, and so *X*_{i} is a linear combination of the other vectors.

Now suppose that *X*_{i} = *b*_{1}*X*_{1} + ⋯ + *b*_{k}*X*_{k} for some scalars *b*_{j} (with the term for *j* = *i* omitted). Setting *b*_{i} = –1, we get *b*_{1}*X*_{1} + ⋯ + *b*_{k}*X*_{k} = 0 with *b*_{i} ≠ 0, and so *X*_{1}, …, *X*_{k} are dependent.

_{k}**Definition 7**: The **dot product** of two vectors *X* = [*x _{i}*] and

*Y*= [y

*] of the same shape is defined to be the scalar*

_{j}**Observation**: If *X* and *Y* are *n* × 1 column vectors, then *X∙Y *= *X*^{T}*Y *= *Y*^{T}*X*. Also ||*X*|| = √*X·X.*

**Excel Function**: If R1 is the range containing the data in *X* and R2 is the range containing the data in *Y* then *X · Y* = **SUMPRODUCT**(R1, R2).

**Definition 8**: Two non-null vectors of the same shape are **orthogonal** if their dot product is 0.
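A final Python sketch (an added illustration; in Excel the same computation is SUMPRODUCT) of the dot product, the norm identity ‖*X*‖ = √(*X·X*), and the orthogonality test:

```python
# Dot product per Definition 7, plus norm and orthogonality checks.

import math

def dot(X, Y):
    """x_1*y_1 + ... + x_n*y_n for two vectors of the same shape."""
    return sum(x * y for x, y in zip(X, Y))

def norm(X):
    """||X|| = sqrt(X . X)."""
    return math.sqrt(dot(X, X))

def orthogonal(X, Y):
    """Two non-null vectors are orthogonal if their dot product is 0."""
    return dot(X, Y) == 0

assert dot([1, 2, 3], [4, 5, 6]) == 32
assert norm([3, 4]) == 5.0
assert orthogonal([1, 0], [0, 7])
```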

can i get a idea how to calculate via row operation method?

Dave,

Sorry, but I don’t understand your question. What is the row operation method?

Charles

Hi Charles,

I’m not sure how you arrive at the a + c, b + d, -a + 2c, and -b + 2d figures for the inverse matrix. I’m looking at the algebraic expression you wrote to arrive at it but the light bulb isn’t turning on, sorry.

I know I can plug those in when determining one for a 2×2 matrix but it’d be nice to know in case I need to find the inverse of a larger one.

Jonathan,

I just used matrix multiplication and the fact that the product of the two 2×2 matrices must be the 2×2 identity matrix, which means that the values on the main diagonal are 1’s and the values off this diagonal are 0’s.

Charles

Hi Charles,

Question:

What is I2? You mention it in example 1 but I’m not clear what it means or how you arrived at the [1, 0, 0, 1] figures.

Thank you.

Jonathan,

I_2 is the 2 x 2 identity matrix.

Charles

Dear Charles:

Thanks for your program! it has been very useful. I have a question. When I use the matrix operations eigenvalues/eigenvectors on the same matrix of your example:

3 -4 9

5 2 3

7 1 6

I get the following result:

Eigenvalues/vectors (QR Factorization)

11,72209274 -1,651750232 0,929657496

0,511963902 -0,38469524 -0,768051128

0,482584475 -0,610875362 0,627649199

0,71063717 0,691983283 0,127098183

1,79176E-12 6,03212E-12 3,84593E-12

1,06581E-14 6,881844823 4,104277377

the first row 1×2 is the eigenvalues, next is the 3×3 matrix of eigenvectors, but next there are 2 new 1×3 rows, what are they?

Thanks a lot!

See the following webpages

http://www.real-statistics.com/linear-algebra-matrix-topics/eigenvalues-eigenvectors/

http://www.real-statistics.com/linear-algebra-matrix-topics/eigenvectors-for-non-symmetric-matrices/

Charles

Thanks Charles!

the matrix operation doesn’t work on mac!

Fran,

This is the first time I have heard of this. What exactly doesn’t work?

Charles

Good Article

When I try to create the output from Matrix Operations I get the error “Compile error in hidden module: frmMatrix”. The Addin is saved in my Downloads folder but it opens, it just doesn’t compute. I do not have Admin authority and frequently IM blocks access to certain functions / executions. Am I missing something or could this function be blocked?

You need to install the software so that Excel recognizes it as an addin. The instructions for doing this are described on the Download webpage.

Charles