Probability of Random Vectors

• Multiple Random Variables

Each outcome of a random experiment may need to be described by a set of $N$ random variables $x_1, x_2, \ldots, x_N$, or in vector form:

$$\mathbf{x} = [x_1, x_2, \cdots, x_N]^T$$

which is called a random vector. In signal processing, $\mathbf{x}$ is often used to represent a set of $N$ samples of a random signal (a random process).

• Joint Distribution Function and Density Function

The joint distribution function of a random vector $\mathbf{x}$ is defined as

$$F_{\mathbf{x}}(x_1, \ldots, x_N) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_N} p_{\mathbf{x}}(u_1, \ldots, u_N)\, du_1 \cdots du_N$$

where $p_{\mathbf{x}}(x_1, \ldots, x_N)$ is the joint density function of the random vector $\mathbf{x}$.

• Independent Variables

For convenience, let us first consider two of the variables and rename them as $x$ and $y$. These two variables are independent iff

$$P(A \cap B) = P(A)\,P(B)$$

where events $A$ and $B$ are defined as ``$x \le a$'' and ``$y \le b$'', respectively, for any $a$ and $b$. This definition is equivalent to

$$p_{xy}(x, y) = p_x(x)\,p_y(y)$$

Similarly, a set of $N$ variables $x_1, \ldots, x_N$ are independent iff

$$p_{\mathbf{x}}(x_1, \ldots, x_N) = \prod_{i=1}^N p_i(x_i)$$

• Mean Vector

The expectation or mean of random variable $x_i$ is defined as

$$\mu_i = E(x_i) = \int_{-\infty}^{\infty} x\, p_i(x)\, dx$$

The mean vector of random vector $\mathbf{x}$ is defined as

$$\mathbf{m}_x = E(\mathbf{x}) = [\mu_1, \mu_2, \cdots, \mu_N]^T$$

which can be interpreted as the center of gravity of an N-dimensional object with $p_{\mathbf{x}}(x_1, \ldots, x_N)$ being the density function.

• Covariance Matrix

The variance of random variable $x_i$ is defined as

$$\sigma_i^2 = Var(x_i) = E[(x_i - \mu_i)^2] = E(x_i^2) - \mu_i^2$$

The covariance of $x_i$ and $x_j$ is defined as

$$\sigma_{ij}^2 = Cov(x_i, x_j) = E[(x_i - \mu_i)(x_j - \mu_j)] = E(x_i x_j) - \mu_i \mu_j$$

The covariance matrix of a random vector $\mathbf{x}$ is defined as

$$\Sigma_x = E[(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^T] = E(\mathbf{x}\mathbf{x}^T) - \mathbf{m}_x \mathbf{m}_x^T$$

where its ijth element

$$\sigma_{ij}^2 = E[(x_i - \mu_i)(x_j - \mu_j)]$$

is the covariance of $x_i$ and $x_j$. When $i = j$, $\sigma_{ii}^2 = \sigma_i^2$ is the variance of $x_i$, which can be interpreted as the amount of information, or energy, contained in the ith component of the signal $\mathbf{x}$. The total information or energy contained in $\mathbf{x}$ is therefore represented by the trace

$$tr\,\Sigma_x = \sum_{i=1}^N \sigma_i^2$$

$\Sigma_x$ is symmetric as $\sigma_{ij}^2 = \sigma_{ji}^2$. Moreover, it can be shown that $\Sigma_x$ is positive semi-definite, and positive definite whenever no component of $\mathbf{x}$ is a deterministic linear combination of the others, i.e., all its eigenvalues $\lambda_i$ ($i = 1, \ldots, N$) are greater than zero, and we have

$$det\,\Sigma_x = \prod_{i=1}^N \lambda_i > 0$$

and

$$tr\,\Sigma_x = \sum_{i=1}^N \lambda_i > 0$$
Two variables $x_i$ and $x_j$ are uncorrelated iff $\sigma_{ij}^2 = Cov(x_i, x_j) = 0$, i.e.,

$$E(x_i x_j) = E(x_i)\,E(x_j)$$

If this is true for all $i \ne j$, then $\mathbf{x}$ is called uncorrelated or decorrelated and its covariance matrix $\Sigma_x$ becomes a diagonal matrix with only the non-zero elements $\sigma_i^2$ on its diagonal.

If $x_i$ and $x_j$ are independent, i.e., $p(x_i, x_j) = p(x_i)\,p(x_j)$, then it is easy to show that they are also uncorrelated. However, uncorrelated variables are not necessarily independent. (But uncorrelated variables that are jointly normally distributed are also independent.)
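This distinction can be checked numerically. The sketch below (assuming NumPy is available; the sample size is arbitrary) uses $y = x^2$ with $x$ uniform on $[-1, 1]$: $y$ is completely determined by $x$, so the two are dependent, yet their covariance vanishes by symmetry.

```python
import numpy as np

rng = np.random.default_rng(0)

# x uniform on [-1, 1]; y = x^2 is fully determined by x (so dependent),
# yet Cov(x, y) = E(x^3) - E(x) E(x^2) = 0 by symmetry (so uncorrelated).
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
print(f"sample covariance of x and y: {cov_xy:.4f}")  # near 0
```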

• Autocorrelation Matrix

The autocorrelation matrix of $\mathbf{x}$ is defined as

$$R_x = E(\mathbf{x}\mathbf{x}^T)$$

where its ijth element is

$$r_{ij} = E(x_i x_j)$$

Obviously $R_x$ is symmetric and we have

$$R_x = \Sigma_x + \mathbf{m}_x \mathbf{m}_x^T$$

When $\mathbf{m}_x = 0$, we have $R_x = \Sigma_x$.
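The relation between the autocorrelation and covariance matrices can be verified numerically; a minimal NumPy sketch (the dimensions and mean below are illustrative), where with the $\frac{1}{K}$-normalized sample estimates the identity holds exactly up to round-off:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw many outcomes of a 3-component random vector with a known mean.
K, N = 500_000, 3
m_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(K, N)) + m_true          # rows are outcomes x^(k)

m = X.mean(axis=0)                            # sample mean vector
R = (X.T @ X) / K                             # sample autocorrelation E(x x^T)
Sigma = (X - m).T @ (X - m) / K               # sample covariance matrix

# R = Sigma + m m^T should hold up to floating-point round-off
diff = np.max(np.abs(R - (Sigma + np.outer(m, m))))
print(diff)
```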

Two variables $x_i$ and $x_j$ are orthogonal iff $r_{ij} = E(x_i x_j) = 0$. Zero-mean random variables which are uncorrelated are also orthogonal.

• Mean and Covariance under Unitary Transforms

A unitary (orthogonal) transform of $\mathbf{x}$ is defined as

$$\mathbf{y} = A\mathbf{x}$$

where $A$ is a unitary (orthogonal) matrix

$$A^{-1} = A^{*T} \quad (A^{-1} = A^T \text{ when } A \text{ is real})$$

and $\mathbf{y}$ is another random vector.

The mean vector $\mathbf{m}_y$ and the covariance matrix $\Sigma_y$ of $\mathbf{y}$ are related to the $\mathbf{m}_x$ and $\Sigma_x$ of $\mathbf{x}$ as shown below:

$$\mathbf{m}_y = E(\mathbf{y}) = E(A\mathbf{x}) = A\,\mathbf{m}_x$$

$$\Sigma_y = E[(\mathbf{y} - \mathbf{m}_y)(\mathbf{y} - \mathbf{m}_y)^{*T}] = A\,\Sigma_x\,A^{*T}$$

A unitary transform does not change the trace of $\Sigma_x$:

$$tr\,\Sigma_y = tr(A\,\Sigma_x\,A^{*T}) = tr(\Sigma_x\,A^{*T}A) = tr\,\Sigma_x$$

which means the total amount of energy or information contained in $\mathbf{x}$ is not changed after a unitary transform (although its distribution among the N components is changed).
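The trace invariance can be illustrated with a short NumPy sketch for the real orthogonal case (the particular covariance matrix here is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(2)

# An arbitrary positive definite covariance matrix Sigma_x.
B = rng.normal(size=(4, 4))
Sigma_x = B @ B.T + 4 * np.eye(4)

# A random orthogonal matrix Q (from the QR decomposition of a random matrix).
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))

# Under y = Q x the covariance becomes Sigma_y = Q Sigma_x Q^T.
Sigma_y = Q @ Sigma_x @ Q.T

# The traces agree up to round-off, though the diagonal entries differ.
print(np.trace(Sigma_x), np.trace(Sigma_y))
```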

• Normal Distribution

The density function of a normally distributed random vector $\mathbf{x}$ is:

$$p(\mathbf{x}) = \frac{1}{(2\pi)^{N/2} |\Sigma_x|^{1/2}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \mathbf{m}_x)^T \Sigma_x^{-1} (\mathbf{x} - \mathbf{m}_x) \right]$$

where $\mathbf{m}_x$ and $\Sigma_x$ are the mean vector and covariance matrix of $\mathbf{x}$, respectively. When $N = 1$, $\mathbf{m}_x$ and $\Sigma_x$ become $\mu$ and $\sigma^2$, respectively, and the density function becomes the single-variable normal distribution

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[ -\frac{(x - \mu)^2}{2\sigma^2} \right]$$

To find the shape of a normal distribution, consider the iso-value hyper-surface in the N-dimensional space determined by the equation

$$p(\mathbf{x}) = c_0$$

where $c_0$ is a constant. This equation can be written as

$$(\mathbf{x} - \mathbf{m}_x)^T \Sigma_x^{-1} (\mathbf{x} - \mathbf{m}_x) = c_1$$

where $c_1$ is another constant related to $c_0$, $\mathbf{m}_x$ and $\Sigma_x$. For $N = 2$ variables $x$ and $y$, we have

$$a x^2 + b x y + c y^2 = c_1$$

where $a$, $b$ and $c$ are determined by the elements of $\Sigma_x^{-1}$. Here we have assumed

$$\mathbf{m}_x = [\mu_x, \mu_y]^T = [0, 0]^T$$

The above quadratic equation represents an ellipse (instead of any other quadratic curve) centered at $\mathbf{m}_x$ (here the origin), because $\Sigma_x^{-1}$, as well as $\Sigma_x$, is positive definite:

$$b^2 - 4ac < 0$$

Recall in general that the discriminant $b^2 - 4ac$ of a quadratic equation

$$a x^2 + b x y + c y^2 + d x + e y + f = 0$$

determines the shape of the corresponding conic section in a plane: if $b^2 - 4ac < 0$ the curve is an ellipse, if $b^2 - 4ac = 0$ a parabola, and if $b^2 - 4ac > 0$ a hyperbola.

When $N > 2$, the equation $(\mathbf{x} - \mathbf{m}_x)^T \Sigma_x^{-1} (\mathbf{x} - \mathbf{m}_x) = c_1$ represents a hyper-ellipsoid in the N-dimensional space. The center and spatial distribution of this ellipsoid are determined by $\mathbf{m}_x$ and $\Sigma_x$, respectively.

In particular, when $\mathbf{x}$ is decorrelated, i.e., $\sigma_{ij}^2 = 0$ for all $i \ne j$, $\Sigma_x$ becomes a diagonal matrix

$$\Sigma_x = diag(\sigma_1^2, \ldots, \sigma_N^2)$$

and the equation can be written as

$$\sum_{i=1}^N \frac{(x_i - \mu_i)^2}{\sigma_i^2} = c_1$$

which represents a standard ellipsoid with all its axes parallel to those of the coordinate system.
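For a concrete 2-D example (the covariance values below are chosen arbitrarily), the discriminant test can be checked with NumPy: positive definiteness of $\Sigma_x^{-1}$ forces $b^2 - 4ac < 0$, so the iso-density curves are ellipses.

```python
import numpy as np

# A 2x2 positive definite covariance matrix with correlated components.
# Its inverse defines the quadratic form a*x^2 + b*x*y + c*y^2 = const
# of the iso-density curves (zero mean assumed).
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
P = np.linalg.inv(Sigma)
a, b, c = P[0, 0], 2 * P[0, 1], P[1, 1]

disc = b**2 - 4 * a * c
print(f"discriminant b^2 - 4ac = {disc:.4f}")  # negative -> ellipse
```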

• Estimation of $\mathbf{m}_x$ and $\Sigma_x$

When the density function $p(\mathbf{x})$ is not known, $\mathbf{m}_x$ and $\Sigma_x$ cannot be found from their definitions. However, they can be estimated if a large number of outcomes $\mathbf{x}^{(k)}$ ($k = 1, \ldots, K$) of the random experiment in question can be observed.

The mean vector can be estimated as

$$\hat{\mathbf{m}}_x = \frac{1}{K} \sum_{k=1}^K \mathbf{x}^{(k)}$$

i.e., the ith element of $\hat{\mathbf{m}}_x$ is estimated as

$$\hat{\mu}_i = \frac{1}{K} \sum_{k=1}^K x_i^{(k)}$$

where $x_i^{(k)}$ is the ith element of $\mathbf{x}^{(k)}$.

The autocorrelation matrix can be estimated as

$$\hat{R}_x = \frac{1}{K} \sum_{k=1}^K \mathbf{x}^{(k)} \mathbf{x}^{(k)T}$$

And the covariance matrix can be estimated as

$$\hat{\Sigma}_x = \frac{1}{K} \sum_{k=1}^K (\mathbf{x}^{(k)} - \hat{\mathbf{m}}_x)(\mathbf{x}^{(k)} - \hat{\mathbf{m}}_x)^T = \hat{R}_x - \hat{\mathbf{m}}_x \hat{\mathbf{m}}_x^T$$
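These estimators can be sketched in a few lines of NumPy (the true mean and covariance below are arbitrary illustrative values); with a large number of observed outcomes the estimates approach the true parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# K observed outcomes of a 2-component normal random vector.
K = 200_000
m_true = np.array([1.0, -1.0])
Sigma_true = np.array([[1.0, 0.5],
                       [0.5, 2.0]])
X = rng.multivariate_normal(m_true, Sigma_true, size=K)  # rows are x^(k)

m_hat = X.mean(axis=0)                       # (1/K) sum_k x^(k)
Sigma_hat = (X - m_hat).T @ (X - m_hat) / K  # (1/K) sum_k (x - m)(x - m)^T

print(m_hat)       # close to m_true
print(Sigma_hat)   # close to Sigma_true
```

(Note that `np.cov` by default uses the $\frac{1}{K-1}$ normalization rather than the $\frac{1}{K}$ used here; the difference is negligible for large $K$.)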