**Definition 1**: The **rank** of a matrix *A*, denoted rank(*A*), is the maximum number of independent rows in *A*.

**Observation**: Here we view each row in matrix *A* as a row vector. Thus rank(*A*) = the dimension of the span of the set of rows in *A* (see Definition 2 of Linear Independent Vectors). For an *m* × *n* matrix *A*, clearly rank(*A*) ≤ *m*.

It turns out that the rank of a matrix *A* is also equal to the column rank, i.e. the maximum number of independent columns in *A* (per Property 1). Thus if *A* is an *m* × *n* matrix, then rank(*A*) ≤ min(*m*, *n*). This also means that rank(*A*) = the dimension of the span of the rows in *A* = the dimension of the span of the columns in *A* (see Definition 3 of Linear Independent Vectors).
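For illustration, these facts can be checked numerically. The following is a minimal Python/NumPy sketch (NumPy's `matrix_rank` is used here purely as an assumed helper for the illustration; the definitions above do not depend on any particular software):

```python
import numpy as np

# 3 x 4 matrix whose third row is the sum of the first two,
# so only two of its rows are linearly independent
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.],
              [1., 3., 1., 3.]])

print(np.linalg.matrix_rank(A))    # 2, consistent with rank(A) <= min(3, 4)
print(np.linalg.matrix_rank(A.T))  # 2, the row rank equals the column rank
```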

**Property 1**: For any matrix *A*, rank(*A*) = the maximum number of independent columns in *A*.

Proof: Let *A* be an *m* × *n* matrix and let *p* = column rank of *A*, i.e. the maximum number of independent column vectors in *A*. Now let *C*_{1}, …, *C*_{p} be any basis for the span of the column vectors in *A*. Thus any column *A*_{j} in *A* can be expressed uniquely as a linear combination of the basis elements, and so there are *b*_{ij} such that *A*_{j} = ∑_{i=1}^{p} *b*_{ij}*C*_{i}. Thus *A = CB* where *C* is the *m* × *p* matrix whose columns are *C*_{1}, …, *C*_{p} and *B* = [*b*_{ij}], a *p* × *n* matrix.

Since *A = CB*, every row in *A* can be expressed as a linear combination of the rows in *B*, namely the *i*th row of *A* equals ∑_{j=1}^{p} *c*_{ij}*B*_{j}, where *c*_{ij} is the *ij*th element of *C* and *B*_{j} is the *j*th row of *B*. This means that the span of the rows in *A* is a subset of the span of the rows in *B*, and so rank(*A*) ≤ rank(*B*) ≤ *p* = column rank of *A*.

Applying a similar argument to *A*^{T}, we conclude that rank(*A*^{T}) ≤ column rank of *A*^{T}. But clearly rank(*A*^{T}) = column rank of *A* and column rank of *A*^{T} = rank(*A*). Thus we conclude that column rank of *A* ≤ rank(*A*). Putting both conclusions together proves that rank(*A*) = column rank of *A*.
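The factorization *A = CB* used in the proof can also be seen concretely. In the sketch below (Python/NumPy, for illustration only), the first two columns of the example matrix happen to form a basis for its column space, and least squares is used merely to recover the coefficients *b*_{ij}:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [2., 1., 3.],
              [0., 1., 1.]])    # third column = first column + second column, so p = 2

C = A[:, :2]                                 # m x p matrix whose columns are a basis C_1, C_2
B, *_ = np.linalg.lstsq(C, A, rcond=None)    # p x n matrix of coefficients [b_ij]

print(np.allclose(A, C @ B))                 # True: A = CB
print(np.linalg.matrix_rank(A))              # 2, the common value of row rank and column rank
```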

**Observation**: Since each of the transformations described in Property 9 of Simultaneous Linear Equations (i.e. the transformation in Gaussian elimination) doesn’t change the number of independent rows in a matrix, the rank of matrix *A* is equal to the number of non-zero rows that remain after Gaussian Elimination has been applied.
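A minimal Python sketch of this observation follows; the function name `rank_by_elimination` and the tolerance are assumptions of the illustration, not part of the procedure described above:

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Reduce A to row echelon form and count the non-zero rows that remain."""
    M = np.array(A, dtype=float)
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # choose the largest entry in this column at or below pivot_row as the pivot
        pivot = pivot_row + np.argmax(np.abs(M[pivot_row:, col]))
        if abs(M[pivot, col]) < tol:
            continue                                     # no pivot in this column
        M[[pivot_row, pivot]] = M[[pivot, pivot_row]]    # swap rows
        # eliminate the entries below the pivot
        M[pivot_row + 1:] -= np.outer(M[pivot_row + 1:, col] / M[pivot_row, col],
                                      M[pivot_row])
        pivot_row += 1
    return pivot_row                                     # number of non-zero rows

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]                 # second row is twice the first
print(rank_by_elimination(A))   # 2
```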

**Property 2**: A square *n* × *n* matrix is invertible if and only if its rank is *n*.

Proof: As observed just prior to Example 9 in Simultaneous Linear Equations, we can use Gaussian Elimination on the augmented matrix *A*|*I* to invert a square *n* × *n* matrix *A*. The procedure produces an inverse if and only if the rank of *A* is *n*.

**Observation**: This means that a square matrix *A* is invertible if and only if its rows are linearly independent. Since a square matrix is invertible if and only if its transpose is invertible, this also means that a matrix is invertible if and only if its columns are linearly independent.
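The following short NumPy check illustrates both observations on small examples (a purely numerical confirmation, not part of the argument):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])    # second row is twice the first: dependent rows, rank 1
B = np.array([[1., 2.],
              [3., 5.]])    # independent rows, rank 2

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 1 2
print(np.linalg.det(A), np.linalg.det(B))                   # ~0 (singular) and -1 (invertible)
```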

**Observation**: Since Gaussian Elimination preserves the rank of a matrix, we can restate the conclusions made in Simultaneous Linear Equations about solutions to simultaneous linear equations in the following property.

**Property 3**: For an *m* × *n* matrix *A* and *m* × 1 column vector *C*, if rank(*A|C*) = rank(*A*) = *n*, then the equation *AX = C* has a unique solution. If rank(*A|C*) = rank(*A*) < *n*, then there are an infinite number of solutions, and if rank(*A|C*) > rank(*A*), then there are no solutions.

In the case where *C* = 0, *AX* = 0 has a unique solution, namely the trivial solution *X* = 0, when rank(*A*) = *n*, and an infinite number of solutions otherwise.
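A short sketch of how these cases can be distinguished numerically is shown below; the helper `classify_solutions` and the example vectors are assumptions of the illustration, and the augmented matrix *A|C* is formed with `np.column_stack`:

```python
import numpy as np

def classify_solutions(A, C):
    """Compare rank(A|C) with rank(A) to classify the solutions of AX = C."""
    n = A.shape[1]
    r_A = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, C]))
    if r_aug > r_A:
        return "no solutions"
    return "unique solution" if r_A == n else "infinitely many solutions"

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
print(classify_solutions(A, np.array([2., 3., 4.])))  # unique solution: rank(A|C) = rank(A) = 2
print(classify_solutions(A, np.array([1., 1., 2.])))  # no solutions: rank(A|C) = 3 > rank(A) = 2
```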

**Property 4**: The following are equivalent for an *n* × *n* matrix *A*:

- *A* is invertible
- det *A* ≠ 0
- *X* = 0 is the only solution of *AX* = 0
- *AX = C* has a unique solution for all *n* × 1 vectors *C*
- rank(*A*) = *n*

Proof: This is a combination of Properties 2 and 3 above and Property 4 of Determinants and Simultaneous Linear Equations.

**Observation**: Note that if *AX = C* has a unique solution for some *n* × 1 column vector *C*, then by Property 3, rank(*A*) = *n* and so by Property 2, *A* is invertible. Conversely, if *A* is invertible then for any *n* × 1 column vector *C*, *X* = *A*^{-1}*C* is the unique solution to *AX = C*.

**Observation**: Since an *n × n* matrix *A* is invertible if and only if rank(*A*) = *n*, it follows by Corollary 4 of Linear Independent Vectors that the rows in *A* form a basis for the set of all 1 × *n* row vectors. Similarly, the columns in *A* form a basis for the set of all *n* × 1 column vectors.

**Property 5**: For a square matrix *A*, if there is a matrix *B* such that *AB = I* or *BA = I* then *A* is invertible and *A*^{-1} = *B*.

Proof: Assume first that there is a square matrix *B* such that *BA* = *I*. Now if *AX* = 0, then *X* = *IX* = (*BA*)*X* = *B*(*AX*) = *B*0 = 0. This shows that *AX* = 0 has only the trivial solution *X* = 0, which by Property 4 means that *A* is invertible and *A*^{-1} exists. Thus, *B* = *BI* = *B*(*AA*^{-1}) = (*BA*)*A*^{-1} = *IA*^{-1} = *A*^{-1}, and so *B = A*^{-1}.

Now assume instead that *AB = I*. By the above result, we conclude that *B* is invertible and *A = B*^{-1}. But then *AB = I = BA*, and by Definition 5 of Matrix Operations, *A* is invertible and *A*^{-1} = *B*.

**Real Statistics Function**: The following function is provided by the Real Statistics Resource Pack:

**MRANK**(R1, *prec*) = the rank of the matrix specified in range R1. The function treats numbers smaller than *prec* as if they were zero. If *prec* is omitted it defaults to .00001.
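For readers working outside Excel, the following Python sketch captures the same idea of a rank computation with a precision threshold (it counts singular values larger than *prec*; this is only an analogue for illustration and is not the actual implementation of MRANK):

```python
import numpy as np

def mrank_like(A, prec=0.00001):
    """Rank of A, treating singular values smaller than prec as if they were zero."""
    singular_values = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return int(np.sum(singular_values > prec))

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]          # rows are dependent: row1 - 2*row2 + row3 = 0
print(mrank_like(A))     # 2
```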

**Definition 2**: The **range** of a matrix *A* is the set of column vectors *C* for which there is a solution to the equation *AX = C*. The **null space** of *A* is the set of solutions to the equation *AX* = 0.

**Observation**: The range of *A* is simply the span of the set of columns in *A*. Thus the rank of *A* is the dimension of the range of *A*. A square *n* × *n* matrix *A* is invertible if and only if the rank of *A* is *n*, which in turn holds if and only if the dimension of the null space is 0 (i.e. the 0 vector is the only solution to the equation *AX* = 0).
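As a final numerical illustration (SciPy's `null_space` is used here only to produce a basis of the null space; it is not referenced anywhere above):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2.],
              [2., 4.]])   # rank 1: not invertible
B = np.array([[1., 2.],
              [3., 5.]])   # rank 2: invertible

print(null_space(A).shape[1])  # 1: AX = 0 has non-trivial solutions, so the null space has dimension 1
print(null_space(B).shape[1])  # 0: X = 0 is the only solution, matching rank(B) = 2
```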