The Fundamental Theorem of Linear Algebra

When solving a linear equation system ${\bf Ax}={\bf b}$, with an $M\times N$ coefficient matrix ${\bf A}$, we need to answer questions such as the following: Does a solution exist? If it does, is it unique? If it is not unique, how can all solutions be characterized?

The fundamental theorem of linear algebra reveals the structure of the solutions of a given linear system ${\bf Ax}={\bf b}$, and thereby answers all such questions.

The $M\times N$ coefficient matrix ${\bf A}\in \mathbb{R}^{M\times N}$ can be expressed in terms of either its $N$ M-D column vectors ${\bf c}_j$ or its $M$ N-D row vectors ${\bf r}_i^T$:

  $\displaystyle {\bf A}=\left[\begin{array}{ccc}a_{11}&\cdots&a_{1N}\\
\vdots&\ddots&\vdots\\ a_{M1}&\cdots&a_{MN}\end{array}\right]
=[{\bf c}_1,\;\cdots,\;{\bf c}_N]
=\left[\begin{array}{c}{\bf r}^T_1\\ \vdots\\ {\bf r}^T_M\end{array}\right],
$ (303)
  $\displaystyle {\bf A}^T=\left[\begin{array}{ccc}a_{11}&\cdots&a_{M1}\\
\vdots&\ddots&\vdots\\ a_{1N}&\cdots&a_{MN}\end{array}\right]
=[{\bf r}_1,\;\cdots,\;{\bf r}_M]
=\left[\begin{array}{c}{\bf c}^T_1\\ \vdots\\ {\bf c}^T_N\end{array}\right]
$ (304)
where ${\bf r}_i\;(i=1,\cdots,M)$ and ${\bf c}_j\;(j=1,\cdots,N)$ are respectively the $i$th row vector and the $j$th column vector (all vectors are assumed to be column vectors):
  $\displaystyle {\bf c}_j=\left[\begin{array}{c}a_{1j}\\ \vdots\\ a_{Mj}\end{array}\right],
\;\;\;\;\;{\bf r}_i=\left[\begin{array}{c}a_{i1}\\ \vdots\\ a_{iN}\end{array}\right],
\;\;\;\;\;\mbox{i.e.}\;\;\; {\bf r}_i^T=[a_{i1},\cdots,a_{iN}]
$ (305)

In general, a function $y=f(x)$ can be represented by $f:\;X\rightarrow Y$, where $X$ is the domain, from which the argument $x$ takes its value, and $Y$ is the codomain, which contains all possible values of $f(x)$.


The matrix ${\bf A}$ can be considered as a function, a linear transformation ${\bf f}({\bf x})={\bf Ax}$, which maps an N-D vector ${\bf x}\in \mathbb{R}^N$ in the domain of the function to an M-D vector ${\bf Ax}\in \mathbb{R}^M$ in the codomain of the function. The fundamental theorem of linear algebra concerns the following four subspaces associated with any $M\times N$ matrix ${\bf A}$ with rank $R=rank({\bf A})\le \min(M,\,N)$ (i.e., ${\bf A}$ has $R$ independent columns and rows).

The four subspaces are summarized in the figure below, showing the domain $\mathbb{R}^N=R({\bf A})\oplus N({\bf A})$ (left) and the codomain $\mathbb{R}^M=C({\bf A})\oplus N({\bf A}^T)$ (right) of the linear mapping ${\bf Ax}={\bf b}$, where ${\bf x}_p\in R({\bf A})$, ${\bf x}_h\in N({\bf A})$, and ${\bf x}_c={\bf x}_p+{\bf x}_h$ are respectively the particular, homogeneous, and complete solutions of ${\bf Ax}={\bf b}$. On the other hand, ${\bf y}_p\in C({\bf A})=R({\bf A}^T)$, ${\bf y}_h\in N({\bf A}^T)$, and ${\bf y}_c={\bf y}_p+{\bf y}_h$ are respectively the particular, homogeneous, and complete solutions of ${\bf A}^T{\bf y}={\bf c}$. Here we have assumed ${\bf b}\in C({\bf A})$ and ${\bf c}\in R({\bf A})$, i.e., both ${\bf Ax}={\bf b}$ and ${\bf A}^T{\bf y}={\bf c}$ are solvable. We will also consider the case where ${\bf b}\notin C({\bf A})$ later.


We now consider specifically how to find the solutions of the system ${\bf Ax}={\bf b}$ in light of the four subspaces of ${\bf A}$ defined above, through the examples below.

Example 1:

Solve the homogeneous equation system:

  $\displaystyle {\bf A}{\bf x}=\left[\begin{array}{ccc}1&2&5\\ 2&3&8\\ 3&1&5\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right]
=\left[\begin{array}{c}0\\ 0\\ 0\end{array}\right]
$ (321)
We first convert ${\bf A}$ to its reduced row echelon form (rref):
  $\displaystyle \left[\begin{array}{ccc}1&2&5\\ 2&3&8\\ 3&1&5\end{array}\right]
\Longrightarrow
\left[\begin{array}{ccc}1&0&1\\ 0&1&2\\ 0&0&0\end{array}\right],
\;\;\;\;\;\mbox{i.e.}\;\;\;
\left[\begin{array}{ccc}1&0&1\\ 0&1&2\\ 0&0&0\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right]
=\left[\begin{array}{c}0\\ 0\\ 0\end{array}\right]
$ (322)
The $R=rank({\bf A})=2$ columns in the rref containing a single 1, called a pivot, are called the pivot columns, and the rows containing a pivot are called the pivot rows. Here, $R=2<M=N=3$, i.e., ${\bf A}$ is a singular matrix. The two pivot rows ${\bf r}_1^T=[1,\;0,\;1]$ and ${\bf r}_2^T=[0,\;1,\;2]$ can be used as the basis vectors that span the row space $R({\bf A})$:
  $\displaystyle R({\bf A})=c_1{\bf r}_1+c_2{\bf r}_2
=c_1\left[\begin{array}{r}1\\ 0\\ 1\end{array}\right]
+c_2\left[\begin{array}{r}0\\ 1\\ 2\end{array}\right]
$ (323)
Note that the pivot columns of the rref do not span the column space $C({\bf A})$, as the row reduction operations do not preserve the columns of ${\bf A}$. But they indicate that the corresponding columns ${\bf c}_1=[1\;2\;3]^T$ and ${\bf c}_2=[2\;3\;1]^T$ of the original matrix ${\bf A}$ can be used as a basis that spans $C({\bf A})$. In general, the bases of the row and column spaces so obtained are not orthogonal.
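This recipe, reading off the pivot positions from the rref and then taking the corresponding columns of the original ${\bf A}$, can be checked numerically. A minimal sketch, assuming the sympy library is available (the variable names are ours, not from the text):

```python
import sympy as sp

# Coefficient matrix of Example 1.
A = sp.Matrix([[1, 2, 5],
               [2, 3, 8],
               [3, 1, 5]])

# rref() returns the reduced row echelon form and the pivot column indices.
R, pivots = A.rref()
print(R)        # rows [1, 0, 1], [0, 1, 2], [0, 0, 0]
print(pivots)   # (0, 1): the first two columns are the pivot columns

# A basis of C(A): the columns of the ORIGINAL A at the pivot positions,
# not the pivot columns of the rref (row reduction changes the columns).
basis_CA = [A[:, j] for j in pivots]
```

Note that `basis_CA` recovers ${\bf c}_1=[1\;2\;3]^T$ and ${\bf c}_2=[2\;3\;1]^T$, while the pivot columns of the rref itself are just unit vectors.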

The $R$ pivot rows are the independent equations in the system of $M$ equations, and the variables corresponding to the pivot columns (here $x_1$ and $x_2$) are the pivot variables. The remaining $M-R$ non-pivot rows, containing all zeros, are not independent, and the variables corresponding to the non-pivot columns are the free variables (here $x_3$), which can take any value.

From the rref form of the equation, we get

  $\displaystyle x_1+x_3=0,\;\;\;x_2+2x_3=0
$ (324)
If we let the free variable $x_3$ take the value 1, then we can get the two pivot variables $x_1=-1$ and $x_2=-2$, and a special homogeneous solution ${\bf x}_h=[-1\;-2\;1]^T$ as a basis vector that spans the 1-D null space $N({\bf A})$. However, as the free variable $x_3$ can take any value $c$, the complete solution is the entire 1-D null space:
  $\displaystyle N({\bf A})=c {\bf x}_h=c\left[\begin{array}{r}-1\\ -2\\ 1\end{array}\right]
$ (325)
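The null space found above can be verified with a short script. A minimal sketch, again assuming sympy is available:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 5],
               [2, 3, 8],
               [3, 1, 5]])

# nullspace() solves Ax = 0 by setting each free variable to 1 in turn;
# here there is one free variable, so the null space is 1-D.
ns = A.nullspace()
x_h = ns[0]
print(x_h)              # the basis vector [-1, -2, 1]

# Every multiple c * x_h solves the homogeneous system.
print(A * (7 * x_h))    # zero vector
```

The single basis vector matches the special homogeneous solution ${\bf x}_h=[-1\;-2\;1]^T$ obtained by setting $x_3=1$.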

Example 2:

Solve the non-homogeneous equation with the same coefficient matrix ${\bf A}$ used in the previous example:

  $\displaystyle {\bf A}{\bf x}=
\left[\begin{array}{ccc}1&2&5\\ 2&3&8\\ 3&1&5\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right]
=\left[\begin{array}{c}19\\ 31\\ 22\end{array}\right]={\bf b}
$ (326)
We use Gauss-Jordan elimination to solve this system:
  $\displaystyle [\begin{array}{cc}{\bf A}&{\bf b}\end{array}]
=\left[\begin{array}{rrr\vert r}1&2&5&19\\ 2&3&8&31\\ 3&1&5&22\end{array}\right]
\Longrightarrow
\left[\begin{array}{rrr\vert r}1&0&1&5\\ 0&1&2&7\\ 0&0&0&0\end{array}\right]
$ (327)
The $R=2$ pivot rows correspond to the independent equations in the system, i.e., $rank({\bf A})=2$, while the remaining $M-R=1$ non-pivot row does not play any role, as it maps any ${\bf x}$ to ${\bf0}$. As ${\bf A}$ is singular, ${\bf A}^{-1}$ does not exist. However, we can find the solution based on the rref of the system, which can also be expressed in block matrix form:
  $\displaystyle \left[\begin{array}{rrr}1&0&1\\ 0&1&2\\ 0&0&0\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right]
=\left[\begin{array}{cc}{\bf I}&{\bf F}\\ {\bf0}&{\bf0}\end{array}\right]
\left[\begin{array}{c}{\bf x}_{pivot}\\ {\bf x}_{free}\end{array}\right]
=\left[\begin{array}{c}{\bf b}_1\\ {\bf0}\end{array}\right]
$ (328)
  $\displaystyle {\bf I}=\left[\begin{array}{cc}1&0\\ 0&1\end{array}\right],\;\;\;
{\bf F}=\left[\begin{array}{c}1\\ 2\end{array}\right],\;\;\;
{\bf x}_{pivot}=\left[\begin{array}{c}x_1\\ x_2\end{array}\right],\;\;\;
{\bf x}_{free}=x_3,\;\;\;
{\bf b}_1=\left[\begin{array}{c}5\\ 7\end{array}\right]
$ (329)
Solving the matrix equation above for ${\bf x}_{pivot}$, we get
  $\displaystyle {\bf I}{\bf x}_{pivot}+{\bf F}{\bf x}_{free}={\bf b}_1,\;\;\;\;\;
{\bf x}_{pivot}={\bf b}_1-{\bf F}{\bf x}_{free}
=\left[\begin{array}{c}5\\ 7\end{array}\right]
+x_3\left[\begin{array}{c}-1\\ -2\end{array}\right]
$ (330)
If we let ${\bf x}_{free}=x_3=0$, we get a particular solution ${\bf x}_p=[5\;7\;0]^T$, which can be expressed as a linear combination of ${\bf r}_1$ and ${\bf r}_2$, which span $R({\bf A})$, and ${\bf x}_h$, which spans $N({\bf A})$:
  $\displaystyle {\bf x}_p=\left[\begin{array}{c}5\\ 7\\ 0\end{array}\right]
=\frac{11}{6}\left[\begin{array}{r}1\\ 0\\ 1\end{array}\right]
+\frac{2}{3}\left[\begin{array}{r}0\\ 1\\ 2\end{array}\right]
-\frac{19}{6}\left[\begin{array}{r}-1\\ -2\\ 1\end{array}\right]
$ (331)
We see that this solution is not entirely in the row space $R({\bf A})$. In general, this is the case for all particular solutions so obtained.

Having found both the particular solution ${\bf x}_p$ and the homogeneous solution ${\bf x}_h$, we can further find the complete solution ${\bf x}_c$ as the sum of ${\bf x}_p$ and the entire null space spanned by ${\bf x}_h$:

  $\displaystyle {\bf x}_c=\left[\begin{array}{c}5\\ 7\\ 0\end{array}\right]
+c\,\left[\begin{array}{r}-1\\ -2\\ 1\end{array}\right]
={\bf x}_p+c\,{\bf x}_h={\bf x}_p+N({\bf A})
$ (332)
For different values of the constant $c$, we get a set of equally valid solutions. For example, for $c=0,1,2$ we get
  $\displaystyle {\bf x}_0=\left[\begin{array}{c}5\\ 7\\ 0\end{array}\right],\;\;\;
{\bf x}_1=\left[\begin{array}{c}4\\ 5\\ 1\end{array}\right],\;\;\;
{\bf x}_2=\left[\begin{array}{c}3\\ 3\\ 2\end{array}\right]
$ (333)
These solutions have the same projection onto the row space $R({\bf A})$, i.e., they have the same projections onto the two basis vectors ${\bf r}_1$ and ${\bf r}_2$ that span $R({\bf A})$:
  $\displaystyle {\bf x}_0^T{\bf r}_1={\bf x}_1^T{\bf r}_1={\bf x}_2^T{\bf r}_1=5,\;\;\;\;\;
{\bf x}_0^T{\bf r}_2={\bf x}_1^T{\bf r}_2={\bf x}_2^T{\bf r}_2=7
$ (334)

The figure below shows how the complete solution ${\bf x}_c$ can be obtained as the sum of a particular solution ${\bf x}_p$ in $R({\bf A})$ and the entire null space $N({\bf A})$. Here $N=3$ and the space $\mathbb{R}^3$ is composed of $R({\bf A})$ and $N({\bf A})$, respectively 2-D and 1-D on the left, but 1-D and 2-D on the right. In either case, the complete solution is any particular solution plus the entire null space: the vertical dashed line on the left, the top dashed plane on the right. All points on the vertical line or the top plane satisfy the equation system, as they have the same projection onto the row space $R({\bf A})$.
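The claim that all complete solutions agree on their row-space projections can be checked numerically. A minimal sketch using numpy (assumed available; the variable names are ours):

```python
import numpy as np

# Example 2: singular A, consistent right-hand side b.
A = np.array([[1, 2, 5],
              [2, 3, 8],
              [3, 1, 5]], dtype=float)
b = np.array([19, 31, 22], dtype=float)

x_p = np.array([5, 7, 0], dtype=float)     # particular solution (x_3 = 0)
x_h = np.array([-1, -2, 1], dtype=float)   # basis of the null space N(A)

# Any x_p + c * x_h solves the system ...
for c in (0, 1, 2):
    assert np.allclose(A @ (x_p + c * x_h), b)

# ... and all such solutions share the same projections onto the
# row-space basis r_1 = [1, 0, 1], r_2 = [0, 1, 2].
r1 = np.array([1, 0, 1], dtype=float)
r2 = np.array([0, 1, 2], dtype=float)
print([(x_p + c * x_h) @ r1 for c in (0, 1, 2)])   # [5.0, 5.0, 5.0]
print([(x_p + c * x_h) @ r2 for c in (0, 1, 2)])   # [7.0, 7.0, 7.0]
```

Adding any multiple of ${\bf x}_h$ changes nothing that ${\bf r}_1$ or ${\bf r}_2$ can see, because ${\bf x}_h\perp R({\bf A})$.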


If the right-hand side is ${\bf b}'=[19,\,31,\,20]^T$, then the rref of the equation becomes:

  $\displaystyle \left[\begin{array}{rrr}1&0&1\\ 0&1&2\\ 0&0&0\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right]=
\left[\begin{array}{c}5\\ 7\\ 2\end{array}\right]
$ (335)
The non-pivot row is an impossible equation $0=2$, indicating that no solution exists, as ${\bf b}'\notin C({\bf A})$ is not in the column space spanned by ${\bf c}_1=[1\;2\;3]^T$ and ${\bf c}_2=[2\;3\;1]^T$.

Example 3: Find the complete solution of the following linear equation system:

  $\displaystyle {\bf A}{\bf x}=\left[\begin{array}{rrrr}1&2&3&4\\ 4&3&2&1\\ -2&1&4&7\end{array}\right]
\left[\begin{array}{c}x_1\\ x_2\\ x_3\\ x_4\end{array}\right]
=\left[\begin{array}{r}3\\ 2\\ 4\end{array}\right] ={\bf b}
$ (336)
This equation can be solved following the same steps as in the previous example: row reduction of the augmented matrix $[{\bf A}\;\;{\bf b}]$ yields the pivot rows $[1,\;0,\;-1,\;-2\,\vert\,-1]$ and $[0,\;1,\;2,\;3\,\vert\,2]$, from which we get the particular solution ${\bf x}_p=[-1,\;2,\;0,\;0]^T$ (by setting the free variables $x_3=x_4=0$), and the complete solution ${\bf x}_c={\bf x}_p+c_1[1,\,-2,\,1,\,0]^T+c_2[2,\,-3,\,0,\,1]^T$.

If the right-hand side is ${\bf b}'=[1,\;3,\;5]^T$, then the row reduction of the augmented matrix yields:

  $\displaystyle \left[\begin{array}{rrrr\vert r}1&2&3&4&1\\ 4&3&2&1&3\\ -2&1&4&7&5\end{array}\right]
\Longrightarrow
\left[\begin{array}{rrrr\vert r}1&2&3&4&1\\ 0&1&2&3&0.2\\ 0&0&0&0&1.2\end{array}\right]
$ (345)
The equation corresponding to the last non-pivot row is $0=1.2$, indicating that the system is not solvable: as the coefficient matrix does not have full row rank, the system is solvable only if ${\bf b}'\in C({\bf A})$, which is not the case here.
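The solvability test here, whether ${\bf b}$ lies in $C({\bf A})$, is equivalent to asking whether appending ${\bf b}$ to ${\bf A}$ raises the rank. A minimal sketch using numpy (the helper function `solvable` is ours, not from the text):

```python
import numpy as np

# Coefficient matrix of Example 3 (rank 2).
A = np.array([[ 1, 2, 3, 4],
              [ 4, 3, 2, 1],
              [-2, 1, 4, 7]], dtype=float)

def solvable(A, b):
    """Ax = b is solvable iff appending b does not raise the rank,
    i.e. iff b lies in the column space C(A)."""
    Ab = np.column_stack([A, b])
    return np.linalg.matrix_rank(Ab) == np.linalg.matrix_rank(A)

print(solvable(A, np.array([3.0, 2.0, 4.0])))   # True:  b  is in C(A)
print(solvable(A, np.array([1.0, 3.0, 5.0])))   # False: b' is not in C(A)
```

The rank comparison plays the same role as the impossible row $0=1.2$ in the rref of the augmented matrix.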

Example 4: Consider the linear equation system with a coefficient matrix ${\bf A}^T$, the transpose of ${\bf A}$ used in the previous example:

  $\displaystyle {\bf A}^T{\bf y}=\left[\begin{array}{rrr}1&4&-2\\ 2&3&1\\ 3&2&4\\ 4&1&7\end{array}\right]
\left[\begin{array}{c}y_1\\ y_2\\ y_3\end{array}\right]
=\left[\begin{array}{c}3\\ 11\\ 19\\ 27\end{array}\right]={\bf c}
$ (346)

Following the same steps, the rref of the augmented matrix $[{\bf A}^T\;\;{\bf c}]$ has pivot rows $[1,\;0,\;2\,\vert\,7]$ and $[0,\;1,\;-1\,\vert\,-1]$, yielding the particular solution ${\bf y}_p=[7,\,-1,\,0]^T$ and the complete solution ${\bf y}_c={\bf y}_p+c\,[-2,\;1,\;1]^T$.

If the right-hand side is ${\bf c}'=[1,\;1,\;2,\;3]^T$, the row reduction of the augmented matrix yields:

  $\displaystyle \left[\begin{array}{rrr\vert r}1&4&-2&1\\ 2&3&1&1\\ 3&2&4&2\\ 4&1&7&3\end{array}\right]
\Longrightarrow
\left[\begin{array}{rrr\vert r}1&4&-2&1\\ 0&1&-1&1/5\\ 0&0&0&1\\ 0&0&0&2\end{array}\right]
$ (354)
indicating the system is not solvable, as this ${\bf c}'\notin C({\bf A}^T)=R({\bf A})$, i.e., it is not in the column space of ${\bf A}^T$ or row space of ${\bf A}$.

In the two examples above, we have obtained all four subspaces associated with this matrix ${\bf A}$ with $M=3$, $N=4$, and $R=2$, in terms of the bases that span the subspaces:

  1. The row space $R({\bf A})=C({\bf A}^T)$ is an R-D subspace of $\mathbb{R}^4$, spanned by the $R=2$ pivot rows of the rref of ${\bf A}$:
      $\displaystyle R({\bf A})=C({\bf A}^T)=
span\left(\left[\begin{array}{r}1\\ 0\\ -1\\ -2\end{array}\right],\;
\left[\begin{array}{r}0\\ 1\\ 2\\ 3\end{array}\right]\right)
$ (355)
  2. The null space $N({\bf A})$ is an (N-R)-D subspace of $\mathbb{R}^4$ spanned by the $N-R$ independent homogeneous solutions:
      $\displaystyle N({\bf A})=span\left(\left[\begin{array}{r}1\\ -2\\ 1\\ 0\end{array}\right],
\left[\begin{array}{r}2\\ -3\\ 0\\ 1\end{array}\right] \right)
$ (356)
    Note that the basis vectors of $N({\bf A})$ are indeed orthogonal to those of $R({\bf A})$.
  3. The column space of ${\bf A}$ is the same as the row space of ${\bf A}^T$, which is the R-D subspace of $\mathbb{R}^3$ spanned by the two pivot rows of the rref of ${\bf A}^T$:
      $\displaystyle C({\bf A})=R({\bf A}^T)=span\left(\left[\begin{array}{c}1\\ 0\\ 2\end{array}\right],
\left[\begin{array}{r}0\\ 1\\ -1\end{array}\right]\right)
$ (357)
  4. The left null space $N({\bf A}^T)$ is an (M-R)-D subspace of $\mathbb{R}^3$ spanned by the homogeneous solutions (here $M-R=3-2=1$ solution):
      $\displaystyle N({\bf A}^T)=span\left(\left[\begin{array}{r}-2\\ 1\\ 1\end{array}\right] \right)
=c\;\left[\begin{array}{r}-2\\ 1\\ 1\end{array}\right]
$ (358)
    Again note that the basis vectors of $N({\bf A}^T)$ are orthogonal to those of $C({\bf A})$.
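All four subspaces listed above, and the two orthogonality relations noted for them, can be computed in a few lines. A minimal sketch using sympy (assumed available):

```python
import sympy as sp

# The 3x4, rank-2 matrix of Examples 3 and 4.
A = sp.Matrix([[ 1, 2, 3, 4],
               [ 4, 3, 2, 1],
               [-2, 1, 4, 7]])

row_space  = A.rowspace()      # basis of R(A)  = C(A^T): 2 vectors in R^4
null_space = A.nullspace()     # basis of N(A):           2 vectors in R^4
col_space  = A.columnspace()   # basis of C(A)  = R(A^T): 2 vectors in R^3
left_null  = A.T.nullspace()   # basis of N(A^T):         1 vector  in R^3

def dot(u, v):
    # Elementwise dot product, ignoring row/column orientation.
    return sum(ui * vi for ui, vi in zip(u, v))

# R(A) is orthogonal to N(A), and C(A) is orthogonal to N(A^T):
assert all(dot(r, n) == 0 for r in row_space for n in null_space)
assert all(dot(c, y) == 0 for c in col_space for y in left_null)

# Dimensions R, N-R, R, M-R with M=3, N=4, R=2:
print(len(row_space), len(null_space), len(col_space), len(left_null))
```

Note that the basis vectors sympy returns may differ from those in (355)-(358) by row operations or scaling, but they span the same subspaces, so the orthogonality checks still hold exactly.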

In general, here are the ways to find the bases of the four subspaces:

Note that while the pivot rows of the rref of ${\bf A}$ form a basis of $R({\bf A})$, as the rows of the rref are equivalent to those of ${\bf A}$, the pivot columns of the rref are not a basis of $C({\bf A})$, as the columns of ${\bf A}$ have been changed by the row reduction operations and are therefore not equivalent to the columns of the resulting rref. Instead, the columns of ${\bf A}$ corresponding to the pivot columns of the rref can be used as a basis of $C({\bf A})$. Alternatively, a basis of $C({\bf A})$ can be obtained from the rref of ${\bf A}^T$, as its rows are equivalent to the rows of ${\bf A}^T$, i.e., the columns of ${\bf A}$.

We further make the following observations:

Here is a summary of the four subspaces associated with an $M$ by $N$ matrix ${\bf A}$ of rank $R$.

  $M$, $N$, $R$     dim $R({\bf A})$   dim $N({\bf A})$   dim $C({\bf A})$   dim $N({\bf A}^T)$   solvability of ${\bf A}{\bf x}={\bf b}$
  $M=N=R$           $R$                $0$                $R$                $0$                  solvable, ${\bf x}={\bf A}^{-1}{\bf b}$ is the unique solution
  $M>N=R$           $R$                $0$                $R$                $M-R$                over-constrained, solvable with a unique solution only if ${\bf b}\in C({\bf A})$
  $N>M=R$           $R$                $N-R$              $R$                $0$                  under-constrained, always solvable, infinitely many solutions ${\bf x}={\bf x}_p+N({\bf A})$
  $R<\min(M,N)$     $R$                $N-R$              $R$                $M-R$                solvable only if ${\bf b}\in C({\bf A})$, then infinitely many solutions

The figure below illustrates a specific case with $M=N=3$ and $R=2$. As ${\bf b}\notin C({\bf A})$, the system can only be approximately solved, by finding ${\bf Ax}={\bf p}\in C({\bf A})$, where ${\bf p}$ is the projection of ${\bf b}$ onto the column space $C({\bf A})$. The error $\vert\vert{\bf e}\vert\vert=\vert\vert{\bf p}-{\bf b}\vert\vert=\vert\vert{\bf Ax}_p-{\bf b}\vert\vert$ is thereby minimized, i.e., ${\bf x}_p$ is the optimal approximate solution. We will consider ways to obtain this optimal approximation in the following sections.
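Ahead of that discussion, the projection picture can be previewed numerically with a standard least-squares routine. A minimal sketch using numpy's `lstsq` (one common way to minimize $\vert\vert{\bf Ax}-{\bf b}\vert\vert$, not necessarily the method developed in the following sections), applied to the singular matrix of Example 2 with the unsolvable right-hand side ${\bf b}'=[19,\,31,\,20]^T$:

```python
import numpy as np

A = np.array([[1, 2, 5],
              [2, 3, 8],
              [3, 1, 5]], dtype=float)
b = np.array([19, 31, 20], dtype=float)   # b' is NOT in C(A)

# lstsq returns the minimum-norm x minimizing ||Ax - b||,
# even when A is rank-deficient.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

p = A @ x_hat   # projection of b onto the column space C(A)
e = p - b       # error vector, lying in the left null space N(A^T)

# The minimized error is orthogonal to C(A): A^T e = 0.
print(np.round(A.T @ e, 10))
```

The condition $A^T{\bf e}={\bf0}$ printed at the end is exactly what makes ${\bf p}$ the projection of ${\bf b}$ onto $C({\bf A})$.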