# Applications

• Image compression

Assume a set of N images of the same size are to be stored or transmitted. The pixels at the same position in all these images form an N-dimensional vector, and there are in total as many such vectors as there are pixels in each image. Treating these as random vectors, we can find their mean vector and covariance matrix, and the KLT can be carried out to transform these vectors into a lower dimensional space of M < N dimensions.
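The procedure above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production codec: the image count, pixel count, and random data below are all assumed for the example, and the KLT is computed as the eigendecomposition of the sample covariance matrix.

```python
import numpy as np

# Assumed sizes: N images, each with num_pixels pixels; random data stands in
# for real image intensities.
rng = np.random.default_rng(0)
N, num_pixels = 20, 64
X = rng.normal(size=(num_pixels, N))   # one N-dimensional vector per pixel position

mean = X.mean(axis=0)                  # mean vector
C = np.cov(X - mean, rowvar=False)     # N x N covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]      # re-sort into descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

M = 5                                  # keep only M < N components
Y = (X - mean) @ eigvecs[:, :M]        # KLT-transformed, M-dimensional vectors
print(Y.shape)                         # (64, 5)
```

For compression, only the M transformed coefficients per vector (plus the mean and the M retained eigenvectors) need to be stored or transmitted.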

Example:

Twenty images of faces:

The eigen-images after KLT:

Percentage of energy contained in the components:

| Component | Percentage energy | Cumulative energy |
|-----------|-------------------|-------------------|
| 1         | 48.5              | 48.5              |
| 2         | 11.6              | 60.1              |
| 3         | 6.1               | 66.2              |
| 4         | 4.6               | 70.8              |
| 5         | 3.8               | 74.6              |
| 6         | 3.7               | 78.3              |
| 7         | 2.6               | 81.0              |
| 8         | 2.5               | 83.5              |
| 9         | 1.9               | 85.4              |
| 10        | 1.9               | 87.3              |
| 11        | 1.8               | 89.0              |
| 12        | 1.6               | 90.7              |
| 13        | 1.5               | 92.2              |
| 14        | 1.4               | 93.6              |
| 15        | 1.3               | 94.9              |
| 16        | 1.2               | 96.1              |
| 17        | 1.1               | 97.2              |
| 18        | 1.1               | 98.2              |
| 19        | 0.9               | 99.2              |
| 20        | 0.8               | 100.0             |

Reconstructed faces using about 95% of the total energy (15 out of 20 components):
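Energy percentages like those tabulated above come directly from the KLT eigenvalues: each component's share is its eigenvalue divided by the sum of all eigenvalues. A minimal sketch, using made-up eigenvalues rather than the face data:

```python
import numpy as np

# Hypothetical eigenvalues, already sorted in descending order.
eigvals = np.array([9.7, 2.3, 1.2, 0.9, 0.8])

pct = 100 * eigvals / eigvals.sum()   # percentage energy per component
cumulative = np.cumsum(pct)           # cumulative (accumulated) energy

# Smallest number of components that retains at least 95% of the energy.
m95 = int(np.searchsorted(cumulative, 95) + 1)
print(pct.round(1), cumulative.round(1), m95)
```

Choosing the cutoff this way is how "15 out of 20 components for 95% of the energy" would be determined from the table.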

• Remote sensing

In remote sensing, pictures of the surface of either the Earth or other planets such as Mars are taken by satellites for various studies (e.g., geology, geography, etc.). The camera system on the satellite has an array of N sensors, typically a few tens or even over a hundred, each of which is sensitive to a different wavelength band in the visible and infrared range of the electromagnetic spectrum. These sensors produce a set of N images, all covering the same surface area on the ground. For the same pixel in these images, there are N values (one from each wavelength band) representing the spectral profile that characterizes the material on the surface area corresponding to the pixel. Depending on the number of sensors N, the data are referred to as either multi-spectral or hyper-spectral images.

As different types of materials on the ground surface usually have different spectral profiles, one typical application of multi- or hyper-spectral image data is to classify the pixels into different material types. When N is large, the KLT can be used to reduce the dimensionality without losing much useful information. Specifically, we consider the N values associated with each pixel to form an N-dimensional random vector, and then carry out the KLT to reduce its dimensionality to M < N. All classification will subsequently be carried out in this low dimensional space, thereby significantly reducing the computational complexity.
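The same KLT machinery applies here, except that the N-dimensional vectors are now the per-pixel spectral profiles of an image cube. A hedged sketch with assumed sizes and random data in place of a real hyper-spectral scene:

```python
import numpy as np

# Assumed: an H x W scene with N spectral bands (random data for illustration).
rng = np.random.default_rng(1)
H, W, N = 8, 8, 100
cube = rng.normal(size=(H, W, N))

X = cube.reshape(-1, N)                  # one N-dimensional spectral vector per pixel

mean = X.mean(axis=0)
C = np.cov(X - mean, rowvar=False)       # N x N covariance of the spectral vectors
eigvals, eigvecs = np.linalg.eigh(C)
eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]  # descending eigenvalue order

M = 10                                   # reduced dimensionality M < N
Y = (X - mean) @ eigvecs[:, :M]          # M-dimensional features per pixel
print(Y.shape)                           # (64, 10)
```

Any subsequent classifier (nearest mean, k-NN, etc.) then operates on the M-dimensional rows of `Y` instead of the original N-dimensional spectra.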

• Visualization and Feature extraction for pattern recognition

In many applications, various objects in images, called patterns in the field of machine learning (e.g., hand-written characters, human faces, etc.), need to be classified. As the first step of this process, a set of features pertaining to the patterns of interest needs to be extracted, and the KLT can be used for this purpose. Assume a set of images is taken, each containing one of the ten digits from 0 to 9 (or the face of one individual). Each image is treated as an N-dimensional vector by concatenating all of its rows one after another. Next the mean vector and covariance matrix of these vectors are obtained. Based on the covariance matrix, the KLT is carried out to reduce the dimensionality of the vectors from N to M. Alternatively, to better extract the information pertaining to the difference between different classes of patterns, the KLT can be based on a different matrix, the between-class scatter matrix, which represents the separability of the classes. Specifically, we use the eigenvectors corresponding to the M largest eigenvalues of the between-class scatter matrix to form an M by N transform matrix. After the transform by this matrix, a classification algorithm can be carried out in the much reduced M-dimensional space.
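The between-class scatter matrix is S_b = sum_i n_i (mu_i - mu)(mu_i - mu)^T, where mu_i and n_i are the mean and size of class i and mu is the overall mean. A minimal sketch of building S_b and the transform from its leading eigenvectors, with assumed sizes and random stand-in data:

```python
import numpy as np

# Assumed: 100 N-dimensional sample vectors from 10 classes (e.g. digits 0-9),
# with random data standing in for real pattern vectors.
rng = np.random.default_rng(2)
N, M = 20, 3
X = rng.normal(size=(100, N))
labels = rng.integers(0, 10, size=100)

mu = X.mean(axis=0)                      # overall mean
Sb = np.zeros((N, N))                    # between-class scatter matrix
for c in np.unique(labels):
    Xc = X[labels == c]
    d = Xc.mean(axis=0) - mu             # class mean minus overall mean
    Sb += len(Xc) * np.outer(d, d)       # n_i (mu_i - mu)(mu_i - mu)^T

eigvals, eigvecs = np.linalg.eigh(Sb)
# Columns of W are the M leading eigenvectors (W.T is the M by N transform matrix).
W = eigvecs[:, np.argsort(eigvals)[::-1][:M]]
Y = X @ W                                # M-dimensional features
print(Y.shape)                          # (100, 3)
```

Basing the transform on S_b rather than the covariance matrix favors directions along which the class means are spread apart, which is what makes the classes more separable after projection.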

Example: The image below shows the ten digits from 0 to 9, hand-written by different students. Each digit is represented by a small image, whose pixels, concatenated row by row, form an N-dimensional vector (one dimension per pixel).

To visualize the data set, the KLT based on the covariance matrix of the dataset is used to project the N-dimensional vectors onto a space of only a few dimensions, as shown in the figure below. The sample vectors in each of the ten classes are color-coded. It can be seen that even when the dimensionality is drastically reduced from N, it is still possible to separate the ten different classes reasonably well.

The following is obtained by the KLT based on the between-class scatter matrix, so the classes should be more separable.