
## Fundamental theorem of linear algebra

An $M \times N$ matrix $A$ can be expressed in terms of either its $N$ $M$-D column vectors $a_j$ or its $M$ $N$-D row vectors $r_i^T$:

$$A = [a_1, a_2, \dots, a_N] = \begin{bmatrix} r_1^T \\ r_2^T \\ \vdots \\ r_M^T \end{bmatrix}$$

where $r_i^T$ and $a_j$ are respectively the $i$th row vector and the $j$th column vector:

$$r_i^T = [a_{i1}, \dots, a_{iN}], \qquad a_j = [a_{1j}, \dots, a_{Mj}]^T$$

The $M \times N$ matrix $A$ in the linear function $y = Ax$ (a transformation $A: \mathbb{R}^N \to \mathbb{R}^M$) converts an N-D vector $x$ in the domain of the function to an M-D vector $y = Ax$ in the codomain of the function, and $A^T$ converts an M-D vector $y$ to an N-D vector $A^T y$.

The fundamental theorem of linear algebra concerns the following four fundamental subspaces associated with $A$ (and $A^T$):

• The column space $C(A)$ of $A$ is the space spanned by all of its M-D column vectors (which may not be independent):

$$C(A) = \operatorname{span}(a_1, \dots, a_N)$$

which is a subspace of $\mathbb{R}^M$ composed of all possible linear combinations of its column vectors:

$$C(A) = \left\{ Ax = \sum_{j=1}^N x_j a_j \;\middle|\; x \in \mathbb{R}^N \right\} \subseteq \mathbb{R}^M$$

The column space is the image (or range) of the linear transformation $A: \mathbb{R}^N \to \mathbb{R}^M$. If $b \in C(A)$, then the equation system $Ax = b$ is solvable. The column vectors of $A$ may not be independent. The number of linearly independent column vectors is the dimension $\dim C(A)$ of the column space, which is the rank of $A$.
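This solvability condition can be tested mechanically: $Ax = b$ is solvable exactly when appending $b$ to $A$ does not increase the rank, i.e., when $b$ lies in $C(A)$. A minimal sketch (the rank-1 matrix here is our own illustration, not one from the text):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue                                  # free column, no pivot
        A[r], A[piv] = A[piv], A[r]                   # row exchange
        A[r] = [x / A[r][c] for x in A[r]]            # scale pivot entry to 1
        for i in range(nrows):                        # clear column c elsewhere
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

def rank(M):
    return len(rref(M)[1])

def solvable(A, b):
    """b is in C(A) iff rank([A | b]) == rank(A)."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(aug) == rank(A)

# Every column of A is a multiple of (1, 2, 3), so C(A) is that line:
A = [[1, 2], [2, 4], [3, 6]]
print(solvable(A, [1, 2, 3]))   # True:  (1, 2, 3) lies in C(A)
print(solvable(A, [1, 1, 1]))   # False: (1, 1, 1) does not
```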

• The row space $C(A^T)$ of $A$ is the space spanned by all of its N-D row vectors (which may not be independent):

$$C(A^T) = \operatorname{span}(r_1, \dots, r_M)$$

which is a subspace of $\mathbb{R}^N$ composed of all possible linear combinations of its row vectors:

$$C(A^T) = \left\{ A^T y = \sum_{i=1}^M y_i r_i \;\middle|\; y \in \mathbb{R}^M \right\} \subseteq \mathbb{R}^N$$

Note that $C(A^T)$ is the column space of $A^T$, i.e., $C(A^T) \subseteq \mathbb{R}^N$ and $C(A) \subseteq \mathbb{R}^M$. The row vectors of $A$ may not be independent. The number of linearly independent row vectors is the dimension of the row space $\dim C(A^T)$, which is also the rank $R$ of $A$, i.e.,

$$\dim C(A) = \dim C(A^T) = \operatorname{rank}(A) = R$$

• The nullspace (kernel) $N(A)$ of $A$ is the set of all N-D vectors $x$ that satisfy the homogeneous equation

$$Ax = 0, \qquad \text{i.e.,} \qquad N(A) = \{ x \in \mathbb{R}^N \mid Ax = 0 \}$$

The above indicates that every row of $A$ has zero dot product with any such $x$, i.e., the nullspace is orthogonal to the row space $C(A^T)$:

$$r_i^T x = 0 \quad (i = 1, \dots, M), \qquad \text{i.e.,} \qquad N(A) \perp C(A^T)$$

Also, when $x = 0$, we get $Ax = 0$, i.e., $0 \in N(A)$. In other words, $N(A)$ must contain the origin $0$. The dimension of the nullspace is called the nullity of $A$.
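This orthogonality can be checked numerically with a tiny singular matrix of our own choosing (not one from the text): any nullspace vector has zero dot product with every row.

```python
# Rows of A are (1, 2) and (2, 4); the second column is twice the first,
# so x = (-2, 1) satisfies Ax = 0 and lies in the nullspace N(A).
A = [[1, 2], [2, 4]]
x = [-2, 1]

# Multiply A by x: each entry of Ax is a row of A dotted with x.
Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
print(Ax)  # [0, 0]

# Ax = 0 says precisely that x is orthogonal to every row vector of A:
dots = [sum(r * xi for r, xi in zip(row, x)) for row in A]
print(dots)  # [0, 0]
```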

• The nullspace $N(A^T)$ of $A^T$ (also called the left nullspace of $A$) is the set of all M-D vectors $y$ that satisfy

$$A^T y = 0, \qquad \text{i.e.,} \qquad N(A^T) = \{ y \in \mathbb{R}^M \mid A^T y = 0 \}$$

Again we have

$$N(A^T) \perp C(A)$$

The rank-nullity theorem states:

$$\operatorname{rank}(A) + \operatorname{nullity}(A) = R + \dim N(A) = N$$
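The theorem can be verified mechanically: the rank is the number of pivot columns of the rref and the nullity is the number of free (non-pivot) columns, so they must sum to $N$. A small sketch with a 3-by-4 matrix of our own choosing:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue                                  # free column, no pivot
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

A = [[1, 2, 0, 1],
     [0, 0, 1, 1],
     [1, 2, 1, 2]]           # third row = first row + second row

_, pivots = rref(A)
N = len(A[0])
rank = len(pivots)           # number of pivot columns
nullity = N - rank           # number of free (non-pivot) columns
print(rank, nullity)         # 2 2
```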

Also, all vectors in the nullspace $N(A)$ are orthogonal to each of the row vectors of $A$, i.e., the two subspaces intersect only at the origin and together span the whole space:

$$N(A) \perp C(A^T), \qquad \dim N(A) + \dim C(A^T) = N$$

In other words, the nullspace and the row space are orthogonal complements. For $A^T$ and $\mathbb{R}^M$ we also have

$$N(A^T) \perp C(A), \qquad \dim N(A^T) + \dim C(A) = M$$

The bases of these four subspaces can be found by converting $A$ into the reduced row echelon form (rref) by elementary row operations (row reduction).
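The row reduction itself can be sketched as a short Gauss-Jordan routine; this is our own generic implementation in exact rational arithmetic, not code from the text:

```python
from fractions import Fraction

def rref(M):
    """Convert M to reduced row echelon form by elementary row operations.

    Returns the rref and the list of pivot-column indices.
    """
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        # find a row at or below r with a nonzero entry in column c
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue                              # no pivot: a free column
        A[r], A[piv] = A[piv], A[r]               # row exchange
        A[r] = [x / A[r][c] for x in A[r]]        # scale pivot entry to 1
        for i in range(nrows):                    # clear column c elsewhere
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

R, pivots = rref([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print([[int(x) for x in row] for row in R])  # [[1, 0, -1], [0, 1, 2], [0, 0, 0]]
print(pivots)                                # [0, 1]
```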

Example

By elementary row operations, we can convert $A$ into rref form. The pivot columns of the rref identify two independent columns of $A$, which span $C(A)$. Alternatively, a different pair of the columns could serve as the basis. We see that in either case, the column space is a 2-D plane spanned by any two of the three column vectors.

The columns in the rref matrix that contain a single 1, called a pivot element, are the pivot columns, and the corresponding rows each containing a pivot are the pivot rows, which span the R-D row space $C(A^T)$. However, note that the columns of $A$ are changed by the row-reduction operations, so the columns of the rref do not span the column space $C(A)$.

The dimensions of $C(A)$ and $C(A^T)$ are both equal to the number of pivots in the rref form, which is the rank $R$ of $A$, defined as the number of independent rows or columns in $A$:

$$\dim C(A) = \dim C(A^T) = R$$

The non-pivot columns are not independent, as each can be expressed as a linear combination of the pivot columns. The pivot rows correspond to the independent equations in the system $Ax = b$, while the remaining non-pivot rows, containing all zeros, are not independent. They don't play any role in the equation system, as they map any $x$ to $0$.
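This dependence can be read directly off the rref: the entries of a non-pivot column of the rref are the coefficients that express the corresponding column of $A$ in terms of the pivot columns. A sketch with an illustrative rank-2 matrix of our own choosing:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
R, pivots = rref(A)                    # pivots == [0, 1]; column 2 is non-pivot

def col(j):
    return [row[j] for row in A]

# Coefficients of the non-pivot column, read from the pivot rows of the rref:
coeffs = [R[r][2] for r in range(len(pivots))]               # [-1, 2]
combo = [coeffs[0] * u + coeffs[1] * v for u, v in zip(col(0), col(1))]
print(combo == col(2))   # True: a_3 = -1 * a_1 + 2 * a_2
```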

Example

Solve the following homogeneous equation $Ax = 0$:

Applying the elementary row operations, we get

The second-to-last matrix is the row echelon form (ref) of the coefficient matrix $A$, while the last matrix is the reduced row echelon form (rref) of $A$. In this rref form, there are only two pivot columns and two pivot rows. The last row is all zeros, indicating that $A$ is singular ($R < N$), so the unknown corresponding to the non-pivot column is a free variable which can take any value.

If we let the free variable equal $1$, we can solve for the pivot variables and obtain a special solution $x_h$, which is in the nullspace $N(A)$. However, as the free variable can take any value $c$, the complete solution, i.e., the entire nullspace, is

$$x = c\, x_h \qquad (c \in \mathbb{R})$$
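The recipe just used — set the free variable to $1$ and solve for the pivot variables — generalizes to any matrix; a sketch (with an illustrative matrix of our own choosing, not the one in the example):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

def nullspace_basis(M):
    """One basis vector of N(M) per free variable, as described in the text."""
    R, pivots = rref(M)
    n = len(M[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)            # this free variable = 1, the others = 0
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]           # solve for the pivot variables
        basis.append(v)
    return basis

A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]      # rank 2, so a 1-D nullspace
basis = nullspace_basis(A)
print([[int(x) for x in v] for v in basis])  # [[1, -2, 1]]
```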

Example

Solve the following non-homogeneous equation $Ax = b$:

We use Gauss-Jordan elimination to solve this system by applying a sequence of elementary row operations:

There are only two independent columns (or rows) in the rref form, i.e., the rank is $R = 2 < N$, indicating that $A$ is singular and its inverse does not exist, so we cannot convert $A$ into an identity matrix $I$. Instead we just have the following system with a coefficient matrix in rref form, which can also be expressed in block matrix form:

where

Here one part of $x$ contains the pivot variables corresponding to the pivot columns, and the other part contains the free variables corresponding to the non-pivot columns, which can take any value.

Solving for the pivot variables, we get

If we let the free variable be $0$, we get a particular solution $x_p$:

But as the free variable can take any value $c$, the complete solution is

$$x = x_p + c\, x_h$$

where $x_h$ is the solution of $Ax = 0$ we already obtained in the previous example, which spans the 1-D nullspace $N(A)$. We therefore see that the complete solution of $Ax = b$ is the entire nullspace translated by the particular solution $x_p$. For example, different values of the free variable give the following solutions:
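This structure can be verified directly: for any scalar $c$, $A(x_p + c\,x_h) = Ax_p + c\,Ax_h = b + 0 = b$. A minimal numeric check with a singular system of our own choosing:

```python
from fractions import Fraction

A = [[1, 2], [2, 4]]          # singular, rank 1
b = [3, 6]                    # lies in C(A), so the system is solvable
xp = [3, 0]                   # a particular solution: A xp == b
xh = [-2, 1]                  # spans the 1-D nullspace: A xh == 0

def matvec(M, v):
    return [sum(m * vi for m, vi in zip(row, v)) for row in M]

# Every translate of the nullspace by xp solves Ax = b:
for c in [Fraction(0), Fraction(1), Fraction(-5), Fraction(1, 2)]:
    x = [p + c * h for p, h in zip(xp, xh)]
    assert matvec(A, x) == b
print("ok")
```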

The following figure illustrates a 3-D space with a 2-D nullspace. The complete solution is the entire nullspace translated by a particular solution $x_p$.

Example

We construct the augmented matrix $[A \mid b]$ and then convert it to rref form:

The equation system now takes this block matrix form:

where

We first find the solution of the homogeneous equation $Ax = 0$:

We use the two basis vectors of the nullspace $N(A)$ and get

We next find the particular solution of the non-homogeneous equation $Ax = b$ by setting the free variables to zero:

We finally get the complete solution

If the right-hand side $b$ is replaced by a different vector, then the row reduction of the augmented matrix yields:

The equation corresponding to the last row is inconsistent (it equates $0$ to a nonzero number), indicating the system is not solvable (even though the associated homogeneous system has infinitely many solutions), as this $b$ is not in the image $C(A)$ of $A$.

Example

Carrying out elementary row operations on the augmented matrix $[A \mid b]$, we get

The original system can now be written as

where

We first find the solution of the homogeneous equation $Ax = 0$ by assuming the free variable equals $1$:

We next find the particular solution of $Ax = b$ by assuming the free variable is $0$:

The complete solution is

We can summarize the two examples above by listing all four subspaces associated with this matrix with $M = 3$, $N = 4$, and rank $R = 2$:

1. The column space $C(A)$ of $A$ is the same as the row space of $A^T$, which is a 2-D subspace of $\mathbb{R}^3$ spanned by the first two pivot rows of the rref of $A^T$.

2. The row space $C(A^T)$ of $A$ is the same as the column space of $A^T$, which is a 2-D subspace of $\mathbb{R}^4$ spanned by the first two pivot rows of the rref of $A$:

3. The nullspace $N(A)$ of $A$ is a 2-D subspace of $\mathbb{R}^4$ spanned by the two basis vectors found above:

4. The nullspace $N(A^T)$ of $A^T$ is a 1-D subspace of $\mathbb{R}^3$ spanned by a single basis vector:

Note that the basis of $C(A^T)$ can be obtained from the rref form of $A$, as its rows are equivalent to those of $A$. But the basis of $C(A)$ cannot be so obtained, as the columns of the rref form are changed by the row reduction operations and are no longer equivalent to the columns of $A$. However, the basis of $C(A)$ can be obtained from the rref form of $A^T$, as its rows are equivalent to those of $A^T$, which are the columns of $A$.

We can easily verify that these vectors are independent and can be used as bases to span the corresponding subspaces, and these subspaces indeed satisfy the following orthogonal and complementary properties:

$$N(A) \perp C(A^T), \qquad N(A^T) \perp C(A)$$

i.e., they are orthogonal complements:

$$N(A) = C(A^T)^\perp, \qquad N(A^T) = C(A)^\perp$$

We can further verify that

and indeed $b \in C(A)$ in both cases, i.e., the systems are solvable. However, when $b \notin C(A)$, the system is unsolvable. The particular solution $x_p$ is in neither $C(A^T)$ nor $N(A)$; it is a linear combination of the four basis vectors of the two subspaces:

Similarly, the unsolvable right-hand side is in neither $C(A)$ nor $N(A^T)$, but a linear combination of the three basis vectors of the two subspaces:

In summary, here are the ways to find the bases of the four subspaces:

• To find a basis for $C(A^T)$, reduce $A$ to rref form; the pivot rows are the basis vectors.
• Doing the same for $A^T$, we get a basis for $C(A)$.
• To find a basis for $N(A)$, reduce $A$ to rref form, identify all free variables corresponding to non-pivot columns, set one of them to $1$ and all others to $0$, and solve the homogeneous system for the pivot variables to get one basis vector. Repeat the process for each of the free variables to get all basis vectors.
• Doing the same for $A^T$, we get a basis for $N(A^T)$.

Note that these bases are not necessarily orthogonal. While each nullspace is orthogonal to the corresponding row or column space, the basis vectors within a subspace are, in general, not orthogonal to each other.
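The four recipes above can be combined into one routine; this is a generic sketch in exact arithmetic, with an illustrative rank-2 matrix of our own choosing:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M, plus the list of pivot columns."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue                          # free column
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]    # pivot -> 1
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

def rowspace_basis(M):
    R, pivots = rref(M)
    return R[:len(pivots)]            # pivot rows of the rref span C(M^T)

def nullspace_basis(M):
    R, pivots = rref(M)
    n = len(M[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)            # one free variable set to 1
        for r, p in enumerate(pivots):
            v[p] = -R[r][f]           # solve for the pivot variables
        basis.append(v)
    return basis

def transpose(M):
    return [list(c) for c in zip(*M)]

A = [[1, 2, 0],
     [2, 4, 1],
     [3, 6, 1]]                       # rank 2: column 2 = 2 * column 1

four = {
    "C(A^T)": rowspace_basis(A),              # row space of A
    "C(A)":   rowspace_basis(transpose(A)),   # column space of A
    "N(A)":   nullspace_basis(A),
    "N(A^T)": nullspace_basis(transpose(A)),  # left nullspace of A
}
print({name: len(basis) for name, basis in four.items()})
# {'C(A^T)': 2, 'C(A)': 2, 'N(A)': 1, 'N(A^T)': 1}
```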

All these results can be illustrated in the figure below, where we assume the basis vectors of each subspace are orthogonal, which is not the case in these examples, but will be true when the systems are solved by the SVD method, discussed in the next section.

Here is a summary of the four subspaces associated with an $M$ by $N$ matrix $A$ of rank $R$:

| Case | $\dim C(A)$ | $\dim C(A^T)$ | $\dim N(A)$ | $\dim N(A^T)$ | Solvability of $Ax = b$ |
|---|---|---|---|---|---|
| $R = M = N$ | $R$ | $R$ | $0$ | $0$ | solvable, $x = A^{-1}b$ is the unique solution |
| $R = N < M$ | $R$ | $R$ | $0$ | $M - R$ | solvable only if $b \in C(A)$ |
| $R = M < N$ | $R$ | $R$ | $N - R$ | $0$ | solvable, infinite solutions |
| $R < M,\; R < N$ | $R$ | $R$ | $N - R$ | $M - R$ | solvable only if $b \in C(A)$, infinite solutions |

The following figure illustrates the case when $R = N < M$ and $b \notin C(A)$. The system can then only be solved approximately, for the projection $\hat{b}$ of $b$ in the column space. This is the optimal approximation in the least-squares sense, as the error $\|b - \hat{b}\|$ is minimized.
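The least-squares solution mentioned here can be sketched via the normal equations $A^T A \hat{x} = A^T b$ (a standard alternative to the SVD route of the next section; the overdetermined system below is our own illustration):

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form via Gauss-Jordan elimination (exact arithmetic)."""
    A = [[Fraction(x) for x in row] for row in M]
    nrows, ncols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, nrows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(nrows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == nrows:
            break
    return A, pivots

def transpose(M):
    return [list(c) for c in zip(*M)]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[1, 0], [0, 1], [1, 1]]         # R = N = 2 < M = 3
b = [1, 1, 1]                        # not in C(A): no exact solution

At = transpose(A)
AtA = matmul(At, A)
Atb = [sum(r * bi for r, bi in zip(row, b)) for row in At]

# Solve the normal equations A^T A x = A^T b by row reduction:
aug = [row + [v] for row, v in zip(AtA, Atb)]
R, pivots = rref(aug)
xhat = [R[i][-1] for i in range(len(pivots))]
print(xhat)                          # [Fraction(2, 3), Fraction(2, 3)]

# The residual b - A xhat is orthogonal to every column of A,
# so A xhat is the projection of b onto the column space C(A):
bhat = [sum(a * x for a, x in zip(row, xhat)) for row in A]
resid = [bi - bh for bi, bh in zip(b, bhat)]
for col in At:
    assert sum(c * r for c, r in zip(col, resid)) == 0
```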

Ruye Wang 2014-10-09