When solving a linear equation system $A\mathbf{x} = \mathbf{b}$, with an $M \times N$ coefficient matrix $A$, we need to answer some questions such as the following:

- Does the solution exist, i.e., can we find an $\mathbf{x}$ so that $A\mathbf{x} = \mathbf{b}$ holds?
- If a solution exists, is it unique? If it is not unique, how can we find all of the solutions?
- If no solution exists, can we still find the optimal approximate solution $\hat{\mathbf{x}}$ so that the error $\|A\hat{\mathbf{x}} - \mathbf{b}\|$ is minimized?
- If the system has fewer equations than unknowns ($M < N$), are there infinitely many solutions?
- If the system has more equations than unknowns ($M > N$), is there no solution?

The coefficient matrix $A$ can be expressed in terms of either its $N$ M-D column vectors $\mathbf{a}_j$ or its $M$ N-D row vectors $\mathbf{r}_i^T$:

$$A = [\mathbf{a}_1, \ldots, \mathbf{a}_N] \qquad(303)$$

$$A = [\mathbf{r}_1, \ldots, \mathbf{r}_M]^T \qquad(304)$$

where $\mathbf{r}_i$ and $\mathbf{a}_j$ are respectively the $i$th row vector and $j$th column vector (all vectors are assumed to be vertical):

$$\mathbf{r}_i = [a_{i1}, \ldots, a_{iN}]^T, \qquad \mathbf{a}_j = [a_{1j}, \ldots, a_{Mj}]^T \qquad(305)$$

In general a function can be represented by $f: X \rightarrow Y$, where

- $X$ is the *domain* of the function, the set of all input or argument values;
- $Y$ is the *codomain* of the function, the set into which all outputs of the function are constrained to fall;
- the function value $f(x)$ of any $x \in X$ is called the *image* of $x$;
- the set of $f(x)$ for all $x \in X$ is the image of the function, a subset of the codomain.

The matrix $A$ can be considered as a function, a linear transformation $T(\mathbf{x}) = A\mathbf{x}$, which maps an N-D vector $\mathbf{x} \in \mathbb{R}^N$ in the domain of the function into an M-D vector $\mathbf{b} = A\mathbf{x} \in \mathbb{R}^M$ in the codomain of the function.
The *fundamental theorem of linear algebra* concerns the following four subspaces associated with any $M \times N$ matrix $A$ with rank $R$ (i.e., $A$ has $R$ independent columns and rows).

- The *column space* of $A$, denoted by $C(A)$, is the space spanned by its $N$ M-D column vectors (of which $R$ are independent):

  $$C(A) = \operatorname{span}(\mathbf{a}_1, \ldots, \mathbf{a}_N) \qquad(306)$$

  which is an R-D subspace of $\mathbb{R}^M$ composed of all possible linear combinations of its column vectors:

  $$A\mathbf{x} = x_1 \mathbf{a}_1 + \cdots + x_N \mathbf{a}_N \qquad(307)$$

  The column space is the *image* of the linear transformation $A\mathbf{x}$, and the equation $A\mathbf{x} = \mathbf{b}$ is solvable if and only if $\mathbf{b} \in C(A)$. The dimension of the column space is the rank of $A$, $\dim C(A) = R$.
- The *row space* of $A$, denoted by $C(A^T)$, is the space spanned by its $M$ N-D row vectors (of which $R$ are independent):

  $$C(A^T) = \operatorname{span}(\mathbf{r}_1, \ldots, \mathbf{r}_M) \qquad(308)$$

  which is an R-D subspace of $\mathbb{R}^N$ composed of all possible linear combinations of its row vectors:

  $$A^T\mathbf{y} = y_1 \mathbf{r}_1 + \cdots + y_M \mathbf{r}_M \qquad(309)$$

  The row space is the *image* of the linear transformation $A^T\mathbf{y}$, and the equation $A^T\mathbf{y} = \mathbf{c}$ is solvable if and only if $\mathbf{c} \in C(A^T)$. As the rows and columns in $A$ are respectively the columns and rows in $A^T$, the row space of $A$ is the column space of $A^T$, and the column space of $A$ is the row space of $A^T$:

  $$\text{row space of } A = C(A^T), \qquad \text{column space of } A = \text{row space of } A^T = C(A) \qquad(310)$$

  The rank $R$ is the number of linearly independent rows and columns of $A$, i.e., the row space and the column space have the same dimension, both equal to the rank of $A$:

  $$\dim C(A^T) = \dim C(A) = R \qquad(311)$$
- The *null space (kernel)* of $A$, denoted by $N(A)$, is the set of all N-D vectors $\mathbf{x}$ that satisfy the homogeneous equation

  $$A\mathbf{x} = \mathbf{0} \qquad(312)$$

  i.e.,

  $$N(A) = \{\mathbf{x} \in \mathbb{R}^N \mid A\mathbf{x} = \mathbf{0}\} \qquad(313)$$

  In particular, when $\mathbf{x} = \mathbf{0}$, we get $A\mathbf{0} = \mathbf{0}$, i.e., the origin is in the null space. As $\mathbf{r}_i^T\mathbf{x} = 0$ for any row $\mathbf{r}_i^T$ of $A$ and any $\mathbf{x} \in N(A)$, we see that the null space and the row space are orthogonal to each other, $N(A) \perp C(A^T)$.

The dimension of the null space is called the *nullity* of $A$: $\dim N(A) = N - R$. The rank-nullity theorem states that the sum of the rank and the nullity of an $M \times N$ matrix $A$ is equal to $N$:

$$\dim C(A^T) + \dim N(A) = R + (N - R) = N \qquad(314)$$

We therefore see that $C(A^T)$ and $N(A)$ are two mutually exclusive and complementary subspaces of $\mathbb{R}^N$:

$$C(A^T) \oplus N(A) = \mathbb{R}^N \qquad(315)$$

i.e., they are *orthogonal complements* of each other, denoted by

$$N(A) = C(A^T)^\perp, \qquad C(A^T) = N(A)^\perp \qquad(316)$$

Any N-D vector $\mathbf{x} \in \mathbb{R}^N$ can be decomposed into two components, one in each of the two subspaces $C(A^T)$ and $N(A)$.

- The null space of $A^T$ (the *left null space* of $A$), denoted by $N(A^T)$, is the set of all M-D vectors $\mathbf{y}$ that satisfy the homogeneous equation

  $$A^T\mathbf{y} = \mathbf{0} \qquad(317)$$

  i.e.,

  $$N(A^T) = \{\mathbf{y} \in \mathbb{R}^M \mid A^T\mathbf{y} = \mathbf{0}\} \qquad(318)$$

  As all $\mathbf{y} \in N(A^T)$ are orthogonal to the columns of $A$, $N(A^T)$ is orthogonal to the column space $C(A)$:

  $$N(A^T) \perp C(A) \qquad(319)$$

  We see that $C(A)$ and $N(A^T)$ are two mutually exclusive and complementary subspaces of $\mathbb{R}^M$:

$$C(A) \oplus N(A^T) = \mathbb{R}^M \qquad(320)$$

Any M-D vector $\mathbf{y} \in \mathbb{R}^M$ can be decomposed into two components, one in each of the two subspaces $C(A)$ and $N(A^T)$.
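These dimension and orthogonality relations can be checked numerically. The sketch below uses a small hypothetical $3 \times 4$ matrix (not tied to any example in this text) and the SVD, whose trailing singular vectors span the two null spaces:

```python
import numpy as np

# Hypothetical 3x4 matrix of rank 2 (third row = row1 + row2), used only
# to illustrate the dimensions of the four subspaces.
A = np.array([[1., 0., 1., 2.],
              [0., 1., 1., 1.],
              [1., 1., 2., 3.]])
M, N = A.shape
R = np.linalg.matrix_rank(A)

# SVD: right singular vectors beyond the first R span N(A),
# left singular vectors beyond the first R span N(A^T).
U, s, Vt = np.linalg.svd(A)
null_A  = Vt[R:].T        # basis of N(A),   shape (N, N - R)
null_AT = U[:, R:]        # basis of N(A^T), shape (M, M - R)

print(R, null_A.shape[1], null_AT.shape[1])   # 2 2 1
assert np.allclose(A @ null_A, 0) and np.allclose(A.T @ null_AT, 0)
```

The assertions confirm rank-nullity ($R + \dim N(A) = N$, $R + \dim N(A^T) = M$) and that the computed null-space vectors are annihilated by $A$ and $A^T$ respectively.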

The four subspaces are summarized in the figure below, showing the domain (left) and the codomain (right) of the linear mapping $A\mathbf{x} = \mathbf{b}$, where

- $\mathbf{x}_p$ is the particular solution that is mapped to $\mathbf{b} = A\mathbf{x}_p$, the image of $\mathbf{x}_p$;
- $\mathbf{x}_h$ is a homogeneous solution that is mapped to $A\mathbf{x}_h = \mathbf{0}$;
- $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_h$ is the complete solution that is mapped to $A\mathbf{x} = \mathbf{b}$.

We now consider specifically how to find the solutions of the system $A\mathbf{x} = \mathbf{b}$ in light of the four subspaces of $A$ defined above, through the examples below.

**Example 1:**

Solve the homogeneous equation system $A\mathbf{x} = \mathbf{0}$:

(321)

We first convert $A$ into the rref:
(322)

The columns in the rref containing a single 1, called a *pivot*, are the pivot columns:

(323)

Note that the pivot columns of the rref do not span the column space $C(A)$, as the row reduction operations do not preserve the columns of $A$. But they indicate that the corresponding columns in the original matrix $A$ can be used as the basis that spans $C(A)$. In general the bases of the row and column spaces so obtained are not orthogonal.
The pivot rows are the independent equations in the system of equations, and the variables corresponding to the pivot columns are the *pivot variables*. The remaining non-pivot rows containing all zeros are not independent, and the variables corresponding to the non-pivot columns are *free variables*, which can take any values.

From the rref form of the equation, we get

(324)

If we let the free variable take the value 1, then we can solve for the two pivot variables and get a special homogeneous solution $\mathbf{x}_h$, a basis vector that spans the 1-D null space $N(A)$. However, as the free variable can take any value $c$, the complete solution is the entire 1-D null space:
(325)
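Since the example matrix is not reproduced here, the procedure can be sketched with a hypothetical $3 \times 3$ matrix of rank 2 (pivot variables $x_1, x_2$, free variable $x_3$), using sympy's exact rref and null-space routines:

```python
import sympy as sp

# Hypothetical 3x3 rank-2 matrix (not the one in the text); x1, x2 are
# pivot variables and x3 is free, so the null space N(A) is 1-D.
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3],
               [1, 1, 5]])

rref, pivot_cols = A.rref()
print(pivot_cols)          # (0, 1): the first two columns are pivot columns

# Setting the free variable x3 = 1 yields the single basis vector of N(A)
for v in A.nullspace():
    print(v.T)             # Matrix([[-2, -3, 1]])
```

Scaling this basis vector by an arbitrary constant $c$ gives the entire 1-D null space, matching the complete homogeneous solution above.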

**Example 2:**

Solve the non-homogeneous equation $A\mathbf{x} = \mathbf{b}$ with the same coefficient matrix $A$ used in the previous example:

(326)

We use Gauss-Jordan elimination to solve this system:
(327)

The pivot rows correspond to the independent equations in the system, while the remaining non-pivot row does not play any role, as it maps any $\mathbf{x}$ to $0$. As $A$ is singular, $A^{-1}$ does not exist. However, we can find the solution based on the rref of the system, which can also be expressed in block matrix form:
(328)

where
(329)

Solving the matrix equation above for the pivot variables, we get
(330)

If we set the free variable to zero, we get a particular solution $\mathbf{x}_p$, which can be expressed as a linear combination of the basis vectors that span $C(A^T)$ and those that span $N(A)$:
(331)

We see that this solution is not entirely in the row space $C(A^T)$.
In general, this is the case for all particular solutions so obtained.
Having found both the particular solution $\mathbf{x}_p$ and the homogeneous solution $\mathbf{x}_h$, we can further find the complete solution as the sum of $\mathbf{x}_p$ and the entire null space spanned by $\mathbf{x}_h$:

(332)

Based on different values of the constant $c$, we get a set of equally valid solutions. For example, for one specific value of $c$, we get
(333)

These solutions have the same projection onto the row space $C(A^T)$, i.e., they have the same projections onto the two basis vectors that span $C(A^T)$:
(334)

The figure below shows how the complete solution can be obtained as the sum of a particular solution in $C(A^T)$ and the entire null space $N(A)$. In each case the space is composed of $C(A^T)$ and $N(A)$, respectively 2-D and 1-D on the left, but 1-D and 2-D on the right. In either case, the complete solution is any particular solution plus the entire null space: the vertical dashed line on the left, the top dashed plane on the right. All points on the vertical line or top plane satisfy the equation system, as they have the same projection onto the row space $C(A^T)$.
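The structure "particular solution plus the entire null space" can be reproduced with sympy's `linsolve`, again on a hypothetical singular system rather than the text's example:

```python
import sympy as sp

# Hypothetical rank-2 system (not the text's example): the general
# solution is a particular solution plus multiples of the null-space
# basis vector, parameterized here by the free variable x3.
x1, x2, x3 = sp.symbols('x1 x2 x3')
A = sp.Matrix([[1, 0, 2],
               [0, 1, 3],
               [1, 1, 5]])
b = sp.Matrix([1, 1, 2])

sol = sp.linsolve((A, b), x1, x2, x3)
print(sol)   # the general solution: particular (1, 1, 0) plus x3 * (-2, -3, 1)
```

Substituting any value for the free variable $x_3$ gives an equally valid solution; all of them share the same projection onto the row space.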

If the right-hand side is changed, then the rref of the equation becomes:

(335)

The non-pivot row is an impossible equation (zero equal to a nonzero value), indicating that no solution exists, as this right-hand side is not in the column space $C(A)$ spanned by the columns of $A$.
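The solvability condition $\mathbf{b} \in C(A)$ can be tested directly by comparing ranks: the system is consistent if and only if appending $\mathbf{b}$ to $A$ does not increase the rank. A sketch with hypothetical data:

```python
import numpy as np

# A x = b is consistent iff rank(A) == rank([A | b]), i.e., b lies in
# the column space C(A). The matrix and vectors below are hypothetical.
A = np.array([[1., 0., 2.],
              [0., 1., 3.],
              [1., 1., 5.]])
b_good = np.array([1., 1., 2.])   # in C(A)
b_bad  = np.array([1., 1., 3.])   # not in C(A)

def solvable(A, b):
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

print(solvable(A, b_good), solvable(A, b_bad))  # True False
```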
**Example 3:**
Find the complete solution of the following linear equation system:

(336)

This equation can be solved in the following steps:
- Construct the augmented matrix $[A \mid \mathbf{b}]$ and then convert it to the rref form:

  The two pivot rows in the rref span $C(A^T)$, and the two columns in the *original* matrix $A$ corresponding to the pivot columns in the rref could be used as the basis vectors that span $C(A)$. The equation system can be represented in block matrix form:

  (337)

  where

  (338)

  Multiplying out we get

  (339)

- Find the homogeneous solution of the equation $A\mathbf{x}_h = \mathbf{0}$ by setting $\mathbf{b} = \mathbf{0}$:

  (340)

  We let the free variables be either of the two standard basis vectors of the null space $N(A)$ and get

  (341)

  and the two corresponding homogeneous solutions:

  (342)

- Find the particular solution of the non-homogeneous equation $A\mathbf{x}_p = \mathbf{b}$ by setting the free variables to zero:

  (343)

- Find the complete solution:

  (344)
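The steps above can be sketched in sympy with a hypothetical $3 \times 4$ system of rank 2 (two free variables, hence a 2-D null space; the actual matrix of this example is not reproduced here):

```python
import sympy as sp

# Hypothetical 3x4 system of rank 2 (not the matrix from the text),
# illustrating the steps: 2-D null space, particular solution with the
# free variables set to zero, and the complete solution.
A = sp.Matrix([[1, 0, 1, 2],
               [0, 1, 1, 1],
               [1, 1, 2, 3]])
b = sp.Matrix([1, 2, 3])

# Homogeneous solutions: one basis vector of N(A) per free variable
null_basis = A.nullspace()
print(len(null_basis))        # 2  (N - R = 4 - 2)

# Particular solution: set the free variables x3, x4 to zero
x = sp.symbols('x1:5')
general = list(sp.linsolve((A, b), *x))[0]
xp = sp.Matrix([e.subs({x[2]: 0, x[3]: 0}) for e in general])
print(xp.T)                   # Matrix([[1, 2, 0, 0]])
assert A * xp == b
```

The complete solution is then $\mathbf{x}_p$ plus any linear combination of the two null-space basis vectors.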

If the right-hand side is changed, then the row reduction of the augmented matrix yields:

(345)

The equation corresponding to the last non-pivot row is impossible, indicating the system is not solvable even though it has fewer equations than unknowns, because the coefficient matrix does not have full row rank and this right-hand side is not in the column space.
**Example 4:** Consider the linear equation system $A^T\mathbf{y} = \mathbf{c}$ with coefficient matrix $A^T$, the transpose of the matrix $A$ used in the previous example:

(346)

- Convert the augmented matrix $[A^T \mid \mathbf{c}]$ into the rref form:

  (347)

  The two pivot rows are the basis vectors that span $C(A)$, the row space of $A^T$. The two vectors that span $C(A)$ found in Example 3 can be expressed as linear combinations of these pivot rows, i.e., either of the two pairs can be used as the basis that spans $C(A)$.
  Based on the rref above, the equation system can now be written as

  (348)

  where

  (349)

  Multiplying out we get

  (350)

- Find the homogeneous solution for $A^T\mathbf{y}_h = \mathbf{0}$ by setting $\mathbf{c} = \mathbf{0}$. We let the free variable take the value 1 and get

  (351)

- Find the particular solution of the non-homogeneous equation by setting the free variable to zero:

  (352)

- Find the complete solution:

  (353)

If the right-hand side is changed,

(354)

indicating the system is not solvable, as this right-hand side is not in the column space of $A^T$, i.e., the row space of $A$.
In the two examples above, we have obtained all four subspaces associated with this matrix with $R = 2$, $M = 3$, and $N = 4$, in terms of the bases that span the subspaces:

- The row space $C(A^T)$ is an R-D subspace of $\mathbb{R}^N$, spanned by the pivot rows of the rref of $A$:

  (355)

- The null space $N(A)$ is an (N-R)-D subspace of $\mathbb{R}^N$ spanned by the independent homogeneous solutions:

  (356)

  Note that the basis vectors of $N(A)$ are indeed orthogonal to those of $C(A^T)$.
- The column space $C(A)$ of $A$ is the same as the row space of $A^T$, which is the R-D subspace of $\mathbb{R}^M$ spanned by the two pivot rows of the rref of $A^T$:

  (357)

- The left null space $N(A^T)$ is an (M-R)-D subspace of $\mathbb{R}^M$ spanned by the homogeneous solutions of $A^T\mathbf{y} = \mathbf{0}$ (here one solution):

  (358)

  Again note that the basis vectors of $N(A^T)$ are orthogonal to those of $C(A)$.

In general, here are the ways to find the bases of the four subspaces:

- The basis vectors of $C(A^T)$ are the pivot rows of the rref of $A$.
- The basis vectors of $C(A)$ are the pivot rows of the rref of $A^T$.
- The basis vectors of $N(A)$ are the independent homogeneous solutions of $A\mathbf{x} = \mathbf{0}$. To find them, reduce $A$ to the rref, identify all free variables corresponding to non-pivot columns, set one of them to 1 and the rest to 0, and solve the homogeneous system for the pivot variables to get one basis vector. Repeat the process for each of the free variables to get all basis vectors.
- The basis vectors of $N(A^T)$ can be obtained by doing the same as above for $A^T\mathbf{y} = \mathbf{0}$.

Note that while the basis of $C(A^T)$ is formed by the pivot rows of the rref of $A$, as its rows are equivalent to those of $A$, the pivot columns of the rref are not a basis of $C(A)$, as the columns of $A$ have been changed by the row reduction operations and are therefore not equivalent to the columns of the resulting rref. The columns in $A$ corresponding to the pivot columns in the rref could be used as the basis of $C(A)$. Alternatively, the basis of $C(A)$ can be obtained from the rref of $A^T$, as its rows are equivalent to those of $A^T$, which are the columns of $A$.
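The recipe above can be collected into a short sympy sketch that returns bases for all four subspaces and verifies the rank-nullity and orthogonality relations (the test matrix is hypothetical):

```python
import sympy as sp

def four_subspaces(A):
    """Bases of the four fundamental subspaces, following the recipe:
    pivot rows of rref(A) and rref(A^T), plus the two null spaces."""
    R = A.rank()
    rref_A  = A.rref()[0]
    rref_AT = A.T.rref()[0]
    row_space  = [rref_A.row(i).T for i in range(R)]    # basis of C(A^T)
    col_space  = [rref_AT.row(i).T for i in range(R)]   # basis of C(A)
    null_space = A.nullspace()        # homogeneous solutions of A x = 0
    left_null  = A.T.nullspace()      # homogeneous solutions of A^T y = 0
    return row_space, null_space, col_space, left_null

# Hypothetical 3x4 matrix of rank 2 (not the text's example)
A = sp.Matrix([[1, 0, 1, 2],
               [0, 1, 1, 1],
               [1, 1, 2, 3]])
rs, ns, cs, lns = four_subspaces(A)

assert len(rs) + len(ns) == A.cols            # R + (N - R) = N
assert len(cs) + len(lns) == A.rows           # R + (M - R) = M
assert all(u.dot(v) == 0 for u in rs for v in ns)    # C(A^T) ⟂ N(A)
assert all(u.dot(v) == 0 for u in cs for v in lns)   # C(A)  ⟂ N(A^T)
```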

We further make the following observations:

- The basis vectors of each of the four subspaces are independent; the basis vectors of $C(A^T)$ and $N(A)$ are orthogonal, and $\dim C(A^T) + \dim N(A) = N$. Similarly, the basis vectors of $C(A)$ and $N(A^T)$ are orthogonal, and $\dim C(A) + \dim N(A^T) = M$.
  In other words, the four subspaces indeed satisfy the following orthogonal and complementary properties:

  $$C(A^T) \oplus N(A) = \mathbb{R}^N, \qquad C(A) \oplus N(A^T) = \mathbb{R}^M \qquad(359)$$

  i.e., they are orthogonal complements: $N(A) = C(A^T)^\perp$ and $N(A^T) = C(A)^\perp$.
- For $A\mathbf{x} = \mathbf{b}$ to be solvable, the constant vector $\mathbf{b}$ on the right-hand side must be in the column space, $\mathbf{b} \in C(A)$. Otherwise the equation is not solvable, even if $M < N$. Similarly, for $A^T\mathbf{y} = \mathbf{c}$ to be solvable, $\mathbf{c}$ must be in the row space $C(A^T)$.
  In the examples above, both $\mathbf{b}$ and $\mathbf{c}$ are indeed in their corresponding column spaces:

  (360)

  But as the modified right-hand sides are in neither $C(A)$ nor $C(A^T)$, the corresponding systems have no solutions.
- All homogeneous solutions $\mathbf{x}_h$ of $A\mathbf{x}_h = \mathbf{0}$ are in the null space $N(A)$, but in general the particular solutions $\mathbf{x}_p$ are not necessarily in the row space $C(A^T)$. In the example above, $\mathbf{x}_p$ is a linear combination of the basis vectors of $C(A^T)$ and the basis vectors of $N(A)$:

  (361)

  where

  (362)

  are the projections of $\mathbf{x}_p$ onto $C(A^T)$ and $N(A)$, respectively; the row-space component is another particular solution without any homogeneous component that satisfies $A\mathbf{x} = \mathbf{b}$, while the null-space component is a homogeneous solution satisfying $A\mathbf{x} = \mathbf{0}$.
- All homogeneous solutions $\mathbf{y}_h$ of $A^T\mathbf{y}_h = \mathbf{0}$ are in the left null space $N(A^T)$, but in general the particular solutions $\mathbf{y}_p$ are not necessarily in the column space $C(A)$. In the example above, $\mathbf{y}_p$ is a linear combination of the basis vectors of $C(A)$ and the basis vector of $N(A^T)$:

  (363)

  where the column-space component is a particular solution (without any homogeneous component) that satisfies $A^T\mathbf{y} = \mathbf{c}$.

Here is a summary of the four subspaces associated with an $M$ by $N$ matrix $A$ of rank $R$.

| rank of $A$ | $\dim C(A^T)$ | $\dim N(A)$ | $\dim C(A)$ | $\dim N(A^T)$ | solvability of $A\mathbf{x} = \mathbf{b}$ |
|---|---|---|---|---|---|
| $R = M = N$ | $N$ | $0$ | $M$ | $0$ | solvable, $\mathbf{x} = A^{-1}\mathbf{b}$ is the unique solution |
| $R = N < M$ | $N$ | $0$ | $R$ | $M - R$ | over-constrained, solvable only if $\mathbf{b} \in C(A)$ |
| $R = M < N$ | $R$ | $N - R$ | $M$ | $0$ | under-constrained, solvable, infinite solutions |
| $R < M,\ R < N$ | $R$ | $N - R$ | $R$ | $M - R$ | solvable only if $\mathbf{b} \in C(A)$, infinite solutions |

The figure below illustrates a specific case with $M > N$ and $\mathbf{b} \notin C(A)$. As $\mathbf{b}$ is not in the column space, the system can only be approximately solved to find $\hat{\mathbf{x}}$ such that $A\hat{\mathbf{x}}$ is the projection of $\mathbf{b}$ onto the column space $C(A)$. As the error $\|A\hat{\mathbf{x}} - \mathbf{b}\|$ is minimized, $\hat{\mathbf{x}}$ is the optimal approximation. We will consider ways to obtain this optimal approximation in the following sections.
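A minimal numerical sketch of this over-constrained case, using numpy's least-squares solver on hypothetical data: the residual $\mathbf{b} - A\hat{\mathbf{x}}$ is orthogonal to the column space, i.e., it lies in the left null space $N(A^T)$:

```python
import numpy as np

# Hypothetical over-constrained system (M = 3 > N = 2) with b not in
# C(A): lstsq finds x_hat minimizing ||A x - b||, so A @ x_hat is the
# projection of b onto the column space.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
b = np.array([1., 1., 3.])        # not in C(A): third entry != 1 + 1

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
b_proj = A @ x_hat                 # projection of b onto C(A)
residual = b - b_proj              # lies in the left null space N(A^T)
print(np.allclose(A.T @ residual, 0))   # True: residual is orthogonal to C(A)
```

This is exactly the least-squares construction developed in the following sections; here `lstsq` stands in for the normal-equation or pseudoinverse derivations to come.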