Eigenvalues
1. Introduction
Recall that for a matrix $A$, if $Ax = \lambda x$ for some nonzero vector $x$, we define $\lambda$ as an eigenvalue and $x$ as the corresponding eigenvector of $A$.
To compute the eigenvalues and eigenvectors by definition, we need to solve the characteristic equation $\det(A - \lambda I) = 0$. The degree of this polynomial equation is at most the order of the matrix. For instance, to compute the eigenvalues of an $n \times n$ matrix $A$, we must solve the degree-$n$ equation $\det(A - \lambda I) = 0$. However, the Abel–Ruffini theorem states that there is no solution in radicals to general polynomial equations of degree five or higher with arbitrary coefficients. Hence computing eigenvalues/eigenvectors by definition is not a universal approach.
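For a concrete sense of the definition-based approach, here is a minimal sketch (the $2 \times 2$ matrix and the use of NumPy are illustrative assumptions, not from the notes):

```python
import numpy as np

# Illustrative 2x2 matrix: det(A - lambda*I) = lambda^2 - 4*lambda + 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)       # characteristic polynomial coefficients: [1., -4., 3.]
print(np.roots(coeffs))   # its roots are the eigenvalues: [3., 1.]
```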
The dominant eigenvalue, or largest magnitude eigenvalue $\lambda_1$ of $A$, is an eigenvalue whose magnitude is strictly greater than that of all other eigenvalues, i.e. $|\lambda_1| > |\lambda_i|$ for all $i \neq 1$. If it exists, an eigenvector associated with $\lambda_1$ is called a dominant eigenvector.
The dominant eigenvalue is much more significant than the others in many practical problems. For instance, whether an iterative scheme for solving linear equations converges depends on the spectral radius (the magnitude of the dominant eigenvalue) of the iteration matrix.
The power method is a numerical technique for computing the dominant eigenvalue; based on it, the inverse power method is developed to compute the smallest magnitude eigenvalue.
2. Power Method
# Power Method Introduction
Assume $A$ has $n$ linearly independent eigenvectors $x_1, \dots, x_n$ with eigenvalues ordered by magnitude, $|\lambda_1| \ge |\lambda_2| \ge \cdots \ge |\lambda_n|$. Any vector $v_0$ (as the initial vector) can be expressed as a linear combination of the eigenvectors of $A$: $v_0 = \alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n$.
And the multiplication $v_1 = A v_0$ can be expressed as $v_1 = A v_0 = \alpha_1 \lambda_1 x_1 + \alpha_2 \lambda_2 x_2 + \cdots + \alpha_n \lambda_n x_n$.
Applying mathematical induction to $v_k = A v_{k-1}$, we have $v_k = A^k v_0 = \alpha_1 \lambda_1^k x_1 + \cdots + \alpha_n \lambda_n^k x_n = \lambda_1^k \Big( \alpha_1 x_1 + \sum_{i=2}^{n} \alpha_i \big(\tfrac{\lambda_i}{\lambda_1}\big)^k x_i \Big).$ This expression indicates that how $v_k$ changes during the iteration depends on the distribution of the eigenvalues. We'll discuss two simple cases.
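A quick numerical check of this expansion (a sketch with an illustrative matrix, assuming NumPy): after repeated multiplication, the direction of $v_k$ aligns with the dominant eigenvector.

```python
import numpy as np

# Illustrative matrix with eigenvalues 3 and 1; dominant eigenvector (1, 1)/sqrt(2).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 0.0])   # initial vector with alpha_1 != 0

for _ in range(20):
    v = A @ v              # v_k = A^k v_0

# The alpha_1 * lambda_1^k * x_1 term dominates, so the direction of v_k
# approaches the dominant eigenvector.
print(v / np.linalg.norm(v))   # ~ [0.7071, 0.7071]
```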
# Two Cases When Applying the Power Method
Case 1. If the ratio $\frac{v_{k+1}^{(j)}}{v_k^{(j)}}$ (for a fixed component $j$) converges, the largest magnitude eigenvalue is unique and is a simple root. The dominant eigenvalue and the dominant eigenvector can be computed by $\lambda_1 = \lim_{k \to \infty} \frac{v_{k+1}^{(j)}}{v_k^{(j)}}, \qquad x_1 \approx v_k \text{ for sufficiently large } k.$
Proof. For the unique dominant eigenvalue $\lambda_1$, $v_k = \lambda_1^k \Big( \alpha_1 x_1 + \sum_{i=2}^{n} \alpha_i \big(\tfrac{\lambda_i}{\lambda_1}\big)^k x_i \Big).$ For sufficiently large $k$, we have $(\lambda_i / \lambda_1)^k \to 0$ for $i \ge 2$ since $\lambda_1$ is the dominant eigenvalue. Hence $v_k \approx \alpha_1 \lambda_1^k x_1$ and $\frac{v_{k+1}^{(j)}}{v_k^{(j)}} \approx \frac{\alpha_1 \lambda_1^{k+1} x_1^{(j)}}{\alpha_1 \lambda_1^k x_1^{(j)}} = \lambda_1.$
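A minimal sketch of Case 1 (illustrative matrix, assuming NumPy): the component ratio converges to $\lambda_1$.

```python
import numpy as np

# Illustrative matrix: dominant eigenvalue lambda_1 = 3 is unique and simple.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 0.0])

for _ in range(25):
    v_next = A @ v
    ratio = v_next[0] / v[0]   # any fixed component j with v_k[j] != 0 works
    v = v_next

print(ratio)   # ~ 3.0, the dominant eigenvalue; v approximates x_1
```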
Case 2. If the ratio $\frac{v_{k+1}^{(j)}}{v_k^{(j)}}$ does not converge but $\frac{v_{k+2}^{(j)}}{v_k^{(j)}}$ does, then the dominant eigenvalues are two real roots with opposite signs, $\lambda_2 = -\lambda_1$. The dominant eigenvalues and eigenvectors can be computed by $\lambda_1 = \sqrt{\lim_{k \to \infty} \frac{v_{k+2}^{(j)}}{v_k^{(j)}}}, \qquad x_1 \approx v_{k+1} + \lambda_1 v_k, \qquad x_2 \approx v_{k+1} - \lambda_1 v_k.$
Proof. For two dominant eigenvalues $\lambda_1$ and $\lambda_2 = -\lambda_1$, $v_k = \lambda_1^k \Big( \alpha_1 x_1 + (-1)^k \alpha_2 x_2 + \sum_{i=3}^{n} \alpha_i \big(\tfrac{\lambda_i}{\lambda_1}\big)^k x_i \Big).$ For sufficiently large $k$, we have $v_k \approx \lambda_1^k \big( \alpha_1 x_1 + (-1)^k \alpha_2 x_2 \big)$, hence $\frac{v_{k+2}^{(j)}}{v_k^{(j)}} \approx \lambda_1^2$, $\ v_{k+1} + \lambda_1 v_k \approx 2 \alpha_1 \lambda_1^{k+1} x_1$, and $v_{k+1} - \lambda_1 v_k \approx 2 (-1)^{k+1} \alpha_2 \lambda_1^{k+1} x_2$.
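A sketch of Case 2 under the same assumptions (the $3 \times 3$ matrix below is illustrative; its dominant eigenvalues are $\pm 2$ and the third eigenvalue is $1$):

```python
import numpy as np

# Illustrative matrix with dominant eigenvalues +2 and -2 (third eigenvalue 1):
# the one-step component ratio alternates, but the two-step ratio -> lambda_1^2.
A = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
v = np.array([1.0, 0.5, 1.0])

history = [v]
for _ in range(22):
    v = A @ v
    history.append(v)

v_k, v_k1, v_k2 = history[-3], history[-2], history[-1]
lam1 = np.sqrt(v_k2[0] / v_k[0])   # sqrt(4.0) = 2.0
x1 = v_k1 + lam1 * v_k             # ~ multiple of the eigenvector of +2
x2 = v_k1 - lam1 * v_k             # ~ multiple of the eigenvector of -2
print(lam1, x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2))
```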
Those trends that do not fit the two cases above require other special processing.
# Normalized Power Method
If the magnitude of the dominant eigenvalue is larger or smaller than $1$, some components of $v_k$ will grow or shrink continuously during the iteration, which leads to precision problems (overflow or underflow) under IEEE 754 floating-point arithmetic. Hence we can normalize the vector after each step: $u_k = \frac{v_k}{\|v_k\|_\infty}, \qquad v_{k+1} = A u_k.$ The normalization guarantees $\|u_k\|_\infty = 1$, that is, the largest magnitude component of $u_k$ remains $1$.
There are three cases for the normalized power method (a code sketch follows the list):
If $u_k$ converges, there is a unique positive dominant eigenvalue: $\lambda_1 = \lim_{k \to \infty} \|v_{k+1}\|_\infty$ with dominant eigenvector $x_1 \approx u_k$.
If $u_k$ converges to two opposite vectors (i.e. $u_{k+1} \approx -u_k$), there is a unique negative dominant eigenvalue: $\lambda_1 = -\lim_{k \to \infty} \|v_{k+1}\|_\infty$ with $x_1 \approx u_k$ (up to sign).
If $u_k$ converges to two different vectors (alternating between them), there are two dominant eigenvalues with opposite signs. For sufficiently large $k$, we execute one more unnormalized operation $v = A^2 u_k$; then $\lambda_1 = \sqrt{v^{(j)} / u_k^{(j)}}$, with eigenvectors $A u_k + \lambda_1 u_k$ and $A u_k - \lambda_1 u_k$.
It requires special processing if $u_k$ does not fit any of the cases mentioned above.
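Here is the promised sketch of the normalized power method for the first case, a unique positive dominant eigenvalue (illustrative matrix, assuming NumPy); the other two cases would additionally track the sign pattern of $u_k$ as described above.

```python
import numpy as np

def normalized_power_method(A, v0, iters=50):
    """Sketch for the first case: unique positive dominant eigenvalue."""
    u = v0 / np.linalg.norm(v0, np.inf)
    for _ in range(iters):
        v = A @ u                          # v_{k+1} = A u_k
        lam = np.linalg.norm(v, np.inf)    # -> |lambda_1|
        u = v / lam                        # normalize so ||u||_inf == 1
    return lam, u

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, u = normalized_power_method(A, np.array([1.0, 0.0]))
print(lam, u)   # ~ 3.0 and ~ [1.0, 1.0]
```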
3. Inverse Power Method
The inverse power method is used to compute the smallest magnitude eigenvalue.
In the equation $Ax = \lambda x$ (with $A$ nonsingular and $\lambda \neq 0$), we multiply $\frac{1}{\lambda} A^{-1}$ on both sides: $A^{-1} x = \frac{1}{\lambda} x.$ That is, the smallest magnitude eigenvalue of $A$ is exactly the reciprocal of the dominant eigenvalue of $A^{-1}$.
In practice, we do not apply the power method after explicitly computing $A^{-1}$, but keep solving the linear system $A v_{k+1} = u_k$: the LU decomposition is applied to factor $A = LU$ first, then each step solves $L y = u_k$ and $U v_{k+1} = y$. We can normalize the inverse power method as well to avoid the precision problem: $u_k = \frac{v_k}{\|v_k\|_\infty}, \qquad A v_{k+1} = u_k.$
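A sketch of the normalized inverse power method (assuming SciPy for the LU factorization; the matrix is illustrative): factor $A$ once, then each iteration solves the triangular systems instead of forming $A^{-1}$.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, v0, iters=50):
    lu_piv = lu_factor(A)                  # decompose A = LU once
    u = v0 / np.linalg.norm(v0, np.inf)
    for _ in range(iters):
        v = lu_solve(lu_piv, u)            # solve A v_{k+1} = u_k (no explicit A^{-1})
        mu = np.linalg.norm(v, np.inf)     # -> dominant eigenvalue magnitude of A^{-1}
        u = v / mu
    return 1.0 / mu, u                     # magnitude of the smallest eigenvalue of A

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, u = inverse_power_method(A, np.array([1.0, 0.0]))
print(lam, u)   # ~ 1.0 and ~ [1.0, -1.0] (or its negative)
```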