We have discussed quite a few different orthogonal transforms, including the
Fourier transform, cosine transform, Walsh-Hadamard transform, Haar transform,
etc., each of which, in its discrete form, is associated with an orthogonal
(or unitary) matrix $A$ that satisfies $A^T A = A A^T = I$
or, in the unitary case, $\bar{A}^T A = A \bar{A}^T = I$. This orthogonal matrix can
be expressed in terms of its column vectors:

$$A = [a_1, a_2, \ldots, a_N]$$
As these column vectors are orthonormal:

$$a_i^T a_j = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$$

they form a set of orthogonal basis vectors that span an N-dimensional vector space.
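As a concrete sketch of such an orthogonal matrix, the following Python snippet builds the orthonormal DCT-II matrix (one particular choice of cosine transform; the construction here is illustrative, not taken from the text) and verifies numerically that its columns satisfy the orthonormality condition above:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: column k holds the k-th cosine basis vector a_k,
# with entries a_k[n] = sqrt(2/N) * cos(pi * (2n+1) * k / (2N))
n, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
A = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
A[:, 0] /= np.sqrt(2.0)  # the DC column needs the smaller scale factor 1/sqrt(N)

# Columns are orthonormal (a_i^T a_j = delta_ij), i.e. A^T A = A A^T = I
assert np.allclose(A.T @ A, np.eye(N))
assert np.allclose(A @ A.T, np.eye(N))
```

The same check applies to any of the transforms listed above; only the entries of $A$ change.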

We have seen several times in the previous discussions of the various
orthogonal transforms that the transform of any given discrete signal, represented
as a vector $x = [x_1, \ldots, x_N]^T$ in the N-dimensional space, is
carried out as a matrix multiplication:

$$X = A^T x$$
In the transform domain, the ith component $X_i = a_i^T x$ of the vector $X$ is the projection of the signal vector $x$ onto the ith basis vector $a_i$. Left multiplying both sides of the transform equation above by $A$, we get the inverse transform:

$$x = A X$$

which can be rewritten as

$$x = \sum_{i=1}^{N} X_i a_i$$

We see that the original signal vector $x$ is expressed by the inverse transform as a linear combination of the orthogonal basis vectors $a_i$, the column vectors of the transform matrix. In other words, an orthogonal transform of the signal vector is actually a rotation in the N-dimensional space, represented by the orthogonal matrix $A$, and different transform methods correspond to different rotations of the signal vector. In particular, the orthogonal transform matrix can be the identity matrix $A = I = [e_1, \ldots, e_N]$, where $e_i$ is the ith standard basis vector of the N-dimensional space. The transform associated with this matrix is $X = I^T x = x$, i.e., the coordinate system is not rotated at all.
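The forward and inverse transforms above can be sketched in a few lines of Python. Here an arbitrary orthogonal matrix (obtained from a QR decomposition, as a stand-in for any of the transforms discussed) rotates a signal vector and rotates it back, preserving its length:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# Any orthogonal matrix works; the Q factor of a QR decomposition is a stand-in
A, _ = np.linalg.qr(rng.standard_normal((N, N)))

x = rng.standard_normal(N)   # signal vector in the N-dimensional space
X = A.T @ x                  # forward transform: X_i = a_i^T x (projections)
x_rec = A @ X                # inverse transform: sum_i X_i a_i

assert np.allclose(x_rec, x)                             # perfect reconstruction
assert np.isclose(np.linalg.norm(X), np.linalg.norm(x))  # a rotation preserves length
```

The final assertion illustrates why the transform is a rotation: the signal's energy $\|x\|^2$ is unchanged; only its coordinates with respect to the basis change.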

In the previous chapters, we have also seen some specific applications of each of these transforms, such as filtering in the transform domain (e.g., in the frequency domain by the Fourier transform and the discrete cosine transform) and encoding for data compression. Do all of these transforms, each represented by a transform matrix totally different from the others, share some intrinsic properties and essential characteristics? Moreover, we may want to ask some more general questions: why do we want to carry out such transforms in the first place? If an orthogonal transform is nothing more than a certain rotation of the signal vector in the N-dimensional space, what can be achieved by such a rotation? And, finally, is there a best rotation among the infinitely many possible transform rotations?

The following discussion of principal component analysis (PCA) and the associated Karhunen-Loeve Transform (KLT) will answer all these questions.

- Multivariate Random Signals
- Covariance and Correlation
- Karhunen-Loeve Transform (KLT)
- KLT Completely Decorrelates the Signal
- KLT Optimally Compacts Signal Energy
- Geometric Interpretation of KLT
- Comparison with Other Orthogonal Transforms
- Applications