In some applications, only the eigenvalue of maximum or minimum absolute value (if real) or modulus (if complex) is of interest (e.g., projecting data points in a high-dimensional space onto a single dimension in the principal component direction). In such cases, we may be able to use the method of power iteration to find $\lambda_1$ and $\lambda_n$ together with their corresponding eigenvectors, based on the assumption that ${\bf A}$ is a full-rank square matrix. In the following, we assume the eigenvalues are arranged in descending order of their absolute values or moduli:

$$|\lambda_1|>|\lambda_2|\ge\cdots\ge|\lambda_n|>0 \qquad(39)$$

**The basic power algorithm:** We express the eigenvector matrix of ${\bf A}$ and its inverse as ${\bf V}=[{\bf v}_1,\dots,{\bf v}_n]$ and ${\bf V}^{-1}$, respectively, and write the eigenequation as

$${\bf A}{\bf V}={\bf V}{\bf\Lambda},\qquad\hbox{i.e.,}\qquad
{\bf A}={\bf V}{\bf\Lambda}{\bf V}^{-1},\qquad
{\bf A}^k={\bf V}{\bf\Lambda}^k{\bf V}^{-1} \qquad(40)$$

If ${\bf v}$ is an eigenvector of ${\bf A}$ with ${\bf A}{\bf v}=\lambda{\bf v}$, the corresponding eigenvalue can be recovered as the *Rayleigh quotient*

$$\lambda=\frac{{\bf v}^T{\bf A}{\bf v}}{{\bf v}^T{\bf v}} \qquad(47)$$

To find the eigenvector corresponding to $\lambda_1$, we express an arbitrary initial vector ${\bf u}^{(0)}$ as a linear combination ${\bf u}^{(0)}=\sum_{i=1}^n c_i{\bf v}_i$ of the $n$ independent eigenvectors of ${\bf A}$, and get

$${\bf A}^k{\bf u}^{(0)}=\sum_{i=1}^n c_i\lambda_i^k{\bf v}_i
=\lambda_1^k\left(c_1{\bf v}_1+\sum_{i=2}^n c_i\left(\frac{\lambda_i}{\lambda_1}\right)^k{\bf v}_i\right) \qquad(48)$$

In the limit $k\to\infty$, $(\lambda_i/\lambda_1)^k\to 0$ for all $i>1$, and ${\bf A}^k{\bf u}^{(0)}$ approaches the first eigenvector with some scaling coefficient:

$$\lim_{k\to\infty}{\bf A}^k{\bf u}^{(0)}=c_1\lambda_1^k{\bf v}_1 \qquad(49)$$

In summary, here are the steps of the power method:

- Choose an arbitrary ${\bf u}^{(0)}$ with $\|{\bf u}^{(0)}\|=1$;
- Iterate for $k=0,1,2,\dots$
  $${\bf u}^{(k+1)}=\frac{{\bf A}{\bf u}^{(k)}}{\|{\bf A}{\bf u}^{(k)}\|},\qquad
  \lambda^{(k+1)}={\bf u}^{(k+1)T}{\bf A}{\bf u}^{(k+1)}$$
  until convergence, i.e., the difference between two consecutive eigenvalue estimates is smaller than a preset error: $|\lambda^{(k+1)}-\lambda^{(k)}|<\epsilon$.
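The steps above can be sketched in a few lines. The following is a minimal sketch in Python/NumPy (used here in place of the Matlab of the listing below); the function name `power_iteration`, the tolerance, and the small symmetric test matrix are illustrative assumptions, not part of the original text:

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=1000, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of A by power iteration."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    u /= np.linalg.norm(u)                 # arbitrary unit initial vector u^(0)
    lam = u @ A @ u                        # Rayleigh quotient as eigenvalue estimate
    for _ in range(max_iter):
        w = A @ u
        u = w / np.linalg.norm(w)          # u^(k+1) = A u^(k) / ||A u^(k)||
        lam_new = u @ A @ u
        if abs(lam_new - lam) < tol:       # consecutive estimates agree: stop
            return lam_new, u
        lam = lam_new
    return lam, u

# Example: the dominant eigenvalue of a small symmetric test matrix
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)                # lam is close to (7 + sqrt(5))/2
```

The iterate is renormalized every step so that only the direction survives; the eigenvalue itself is read off from the Rayleigh quotient, as in equation (47).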

**Inverse iteration:** The power method can be modified to find the smallest eigenvalue and the corresponding eigenvector, based on the fact that the eigenvalues of ${\bf A}^{-1}$ are the reciprocals of those of ${\bf A}$, i.e., if ${\bf A}{\bf v}=\lambda{\bf v}$, then ${\bf A}^{-1}{\bf v}=\lambda^{-1}{\bf v}$. To find the smallest eigenvalue $\lambda_n$ of ${\bf A}$, we can use the power method to find the largest eigenvalue $1/\lambda_n$ of ${\bf A}^{-1}$ by the iteration ${\bf w}={\bf A}^{-1}{\bf u}^{(k)}$, which is equivalent to solving ${\bf A}{\bf w}={\bf u}^{(k)}$ for ${\bf w}$. Here is the iteration step:

- Choose an arbitrary ${\bf u}^{(0)}$ with $\|{\bf u}^{(0)}\|=1$;
- Iterate for $k=0,1,2,\dots$
  $$\hbox{solve } {\bf A}{\bf w}={\bf u}^{(k)},\qquad
  {\bf u}^{(k+1)}=\frac{{\bf w}}{\|{\bf w}\|}$$
  until convergence: $|\lambda^{(k+1)}-\lambda^{(k)}|<\epsilon$, where $\lambda^{(k)}={\bf u}^{(k)T}{\bf A}{\bf u}^{(k)}$.
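A Python/NumPy sketch of inverse iteration, under the same illustrative assumptions as before (the function name, tolerance, and test matrix are not from the original text); note that a linear system is solved each step rather than forming ${\bf A}^{-1}$ explicitly:

```python
import numpy as np

def inverse_iteration(A, tol=1e-12, max_iter=1000, seed=0):
    """Estimate the smallest-magnitude eigenvalue of A by power iteration on A^{-1}."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(A.shape[0])
    u /= np.linalg.norm(u)
    lam = u @ A @ u
    for _ in range(max_iter):
        w = np.linalg.solve(A, u)          # solve A w = u^(k) instead of forming inv(A)
        u = w / np.linalg.norm(w)
        lam_new = u @ A @ u                # Rayleigh quotient with respect to A itself
        if abs(lam_new - lam) < tol:
            return lam_new, u
        lam = lam_new
    return lam, u

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam_min, v = inverse_iteration(A)          # lam_min is close to (7 - sqrt(5))/2
```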

**Inverse iteration with shift:** The power method can be further modified to find the eigenvalues between the largest and the smallest ones. Let $\lambda$ and ${\bf v}$ be one of the eigenvalues and the corresponding eigenvector of ${\bf A}$, i.e., ${\bf A}{\bf v}=\lambda{\bf v}$; then we have

$$({\bf A}-\mu{\bf I}){\bf v}=(\lambda-\mu){\bf v},\qquad\hbox{i.e.,}\qquad
({\bf A}-\mu{\bf I})^{-1}{\bf v}=\frac{1}{\lambda-\mu}{\bf v} \qquad(50)$$

If the constant $\mu$, called the *shift*, is closest to one of the eigenvalues $\lambda_j$ of ${\bf A}$, i.e., $|\lambda_j-\mu|<|\lambda_i-\mu|$ for all $i\ne j$, then $1/(\lambda_j-\mu)$ is the maximum eigenvalue of $({\bf A}-\mu{\bf I})^{-1}$. Now we can find the eigenvalue of ${\bf A}$ closest to $\mu$ and the corresponding eigenvector:

- Choose an arbitrary ${\bf u}^{(0)}$ with $\|{\bf u}^{(0)}\|=1$, and get an initial guess $\mu$ of a certain eigenvalue (closest to $\lambda_j$);
- Iterate for $k=0,1,2,\dots$
  $$\hbox{solve } ({\bf A}-\mu{\bf I}){\bf w}={\bf u}^{(k)},\qquad
  {\bf u}^{(k+1)}=\frac{{\bf w}}{\|{\bf w}\|}$$
  until convergence: $|\lambda^{(k+1)}-\lambda^{(k)}|<\epsilon$, where $\lambda^{(k)}={\bf u}^{(k)T}{\bf A}{\bf u}^{(k)}$.

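A Python/NumPy sketch of shifted inverse iteration, with the same illustrative assumptions as the previous sketches (function name, tolerance, test matrix, and the shift value are not from the original text):

```python
import numpy as np

def shifted_inverse_iteration(A, mu, tol=1e-12, max_iter=1000, seed=0):
    """Find the eigenvalue of A closest to the shift mu, and its eigenvector."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    M = A - mu * np.eye(n)                 # shifted matrix A - mu*I
    lam = u @ A @ u
    for _ in range(max_iter):
        w = np.linalg.solve(M, u)          # one power step on (A - mu*I)^{-1}
        u = w / np.linalg.norm(w)
        lam_new = u @ A @ u                # Rayleigh quotient gives an eigenvalue of A
        if abs(lam_new - lam) < tol:
            return lam_new, u
        lam = lam_new
    return lam, u

# A has eigenvalues (7 -+ sqrt(5))/2, i.e. about 2.382 and 4.618;
# the shift mu = 2.5 targets the smaller one
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam, v = shifted_inverse_iteration(A, 2.5)
```

The closer the shift is to the target eigenvalue, the larger the gap between $1/(\lambda_j-\mu)$ and the other eigenvalues of the shifted inverse, and hence the faster the convergence.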

We make a few observations about the power method.

- The rate of convergence depends on the ratio $|\lambda_2/\lambda_1|$; the smaller the ratio, the faster the convergence.
- If a real matrix has a complex conjugate eigenvalue pair $\lambda_{1,2}=a\pm jb$ of equal modulus $\sqrt{a^2+b^2}$, which happens to be the maximum (or minimum) among all eigenvalues, then the power method (or the inverse iteration method) will not converge.
- If an eigenvalue is repeated $m$ times, there exist $m$ corresponding eigenvectors. However, the power method can find only one eigenvector, which is a linear combination of the $m$ eigenvectors.

For example, if the eigenvalues of a real matrix are $3\pm j4$ (with modulus $|3\pm j4|=5$), $8$, and $-2$, then the power iteration methods above will find both $\lambda_{max}=8$ and $\lambda_{min}=-2$. But if instead all other eigenvalues have modulus less than $5$, then the power method for $\lambda_{max}$ will not converge, because the modulus of the complex conjugate pair is the maximum of all eigenvalues. Similarly, if all other eigenvalues have modulus greater than $5$, then the inverse iteration method for $\lambda_{min}$ will not converge, because $5$ is the minimum of all eigenvalues.
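The non-convergence caused by a dominant complex conjugate pair can be seen numerically. The sketch below (Python/NumPy in place of Matlab, using an illustrative block matrix with the same eigenvalues $3\pm j4$, $8$, $-2$ as the Matlab example) shows the power iterate settling when a real eigenvalue dominates, but rotating forever when only the complex pair is present:

```python
import numpy as np

# Real matrix with eigenvalues 3+-4j (modulus 5), 8 and -2; the 2x2 block
# [[3,4],[-4,3]] carries the complex conjugate pair, as in the Matlab example.
A = np.array([[ 3.0,  4.0, 0.0,  0.0],
              [-4.0,  3.0, 0.0,  0.0],
              [ 0.0,  0.0, 8.0,  0.0],
              [ 0.0,  0.0, 0.0, -2.0]])

rng = np.random.default_rng(1)
u = rng.standard_normal(4)
u /= np.linalg.norm(u)
for _ in range(200):                       # |8| > 5, so power iteration converges
    w = A @ u
    u = w / np.linalg.norm(w)
lam = u @ A @ u                            # close to the dominant eigenvalue 8

# With only the complex pair present, each step rotates the iterate by a
# fixed angle, so successive vectors never settle down:
B = A[:2, :2]                              # eigenvalues 3+-4j only
v = rng.standard_normal(2)
v /= np.linalg.norm(v)
diffs = []
for _ in range(50):
    w = B @ v
    w /= np.linalg.norm(w)
    diffs.append(np.linalg.norm(w - v))    # stays near 0.89: no convergence
    v = w
```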

**Example:**

Consider a $4\times 4$ real matrix with eigenvalues $3\pm j4$, $8$, and $-2$, constructed in the code below from a $2\times 2$ block carrying the complex pair together with two real diagonal entries, followed by a random similarity transform (which preserves the eigenvalues).

The Matlab code for the power method is listed below:

```matlab
% The power algorithm to find the largest and smallest eigenvalues and
% eigenvectors of a given matrix A. Also, given an estimate mu of any of
% the eigenvalues, find the eigenvalue closest to mu and its eigenvector.
clear all
rng(2)
n=4; a=3; b=4;
A=eye(n);
A(1,1)=a; A(2,2)=a; A(1,2)=b; A(2,1)=-b;  % 2x2 block with eigenvalues a+-jb
A(3,3)=8; A(4,4)=-2;                      % two real eigenvalues
A
[v d]=eig(A)
V=rand(n); A=V*A*inv(V);                  % similarity transform, same eigenvalues
A
[v d]=eig(A)
diag(d)'
abs(diag(d))
pause
u=rand(n,1); u=u/norm(u);                 % random unit initial vector
u1=u; u2=u; u3=u;
mu=2.9+j*4.1                              % shift close to eigenvalue 3+4j
done=0; k=0;
s1=1; s2=1; s3=1;
while ~done & k<99
  k=k+1;
  v1=u1; v2=u2; v3=u3;                    % previous eigenvector estimates
  r1=s1; r2=s2; r3=s3;                    % previous eigenvalue estimates
  w=A*u1; u1=w/norm(w);                   % power iteration for lambda_max
  ev1=norm(v1-u1); s1=u1'*A*u1; es1=abs(s1-r1);
  w=linsolve(A,u2); u2=w/norm(w);         % inverse iteration for lambda_min
  ev2=norm(v2-u2); s2=u2'*A*u2; es2=abs(s2-r2);
  w=linsolve(A-mu*eye(n),u3); u3=w/norm(w);  % shifted inverse iteration
  ev3=norm(v3-u3); s3=u3'*A*u3; es3=abs(s3-r3);
  fprintf('%d (%e %e %e), (%e %e %e)\n',k,ev1,ev2,ev3,es1,es2,es3)
  % if ev1+ev2+ev3<10^(-6)
  if es1+es2+es3 < 10^(-6)                % all three estimates have converged
    done=1;
  end
end
k
u1', u1'*A*u1                             % eigenvector and eigenvalue lambda_max
u2', u2'*A*u2                             % eigenvector and eigenvalue lambda_min
u3', u3'*A*u3                             % eigenvector and eigenvalue closest to mu
```