### Jordan Normal Form – Generalized Eigenspaces

#### by Arjun Jain

It’s been a long time since I last posted. I’ve been studying the Jordan normal form recently, and found some good but incomplete websites elaborating on this interesting construction. The first time I read about this, I was very impressed: these are representative matrices for whole families of similar matrices. Moreover, many results about matrices that require them to be diagonalizable can be generalized using the normal forms, the next best thing to a diagonal representation.

As I searched for clear proofs, I found **Sheldon Axler**’s book **Linear Algebra Done Right** and the article **The Jordan Canonical Form: an Old Proof** by **Richard A. Brualdi**. The first does it by introducing generalized eigenspaces, while the second uses the language of graph theory. Investigating the relation between these two approaches would be an excellent thing to do, I think.

In this first part, I’ll outline the usual generalized eigenspaces approach, as described in Sheldon Axler’s book.


We want to describe an operator $T \in \mathcal{L}(V)$ by decomposing its domain into invariant subspaces (if $W$ is an invariant subspace of $V$ under $T$, then if $w \in W$, $Tw \in W$ also). If the operator is diagonalizable, these invariant subspaces are the eigenspaces, and $V = \operatorname{null}(T - \lambda_1 I) \oplus \cdots \oplus \operatorname{null}(T - \lambda_m I)$.

As normally there aren’t enough eigenvectors, we want to show that in general $V = \operatorname{null}(T - \lambda_1 I)^{\dim V} \oplus \cdots \oplus \operatorname{null}(T - \lambda_m I)^{\dim V}$, where $\lambda_1, \dots, \lambda_m$ are the distinct eigenvalues of $T$. The subspaces $\operatorname{null}(T - \lambda_i I)^{\dim V}$ are called the generalized eigenspaces of $T$.

Let’s start by studying the nullspaces of powers of an operator.

First of all, $\{0\} = \operatorname{null} T^0 \subseteq \operatorname{null} T^1 \subseteq \operatorname{null} T^2 \subseteq \cdots$.

Of course, as $V$ is finite dimensional, this can’t go on forever. So, we must have $\operatorname{null} T^{\dim V} = \operatorname{null} T^{\dim V + 1} = \cdots$, because if we don’t, the dimensions of the nullspaces, which are subspaces of $V$, will go on increasing: each strict inclusion raises the dimension by at least one, and once two consecutive nullspaces coincide, all the later ones do as well.
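As a quick sanity check (not from Axler’s book), this stabilization can be watched numerically; here is a minimal sketch with SymPy, on a small example matrix of my own choosing:

```python
# Watch dim null T^k increase and then stabilize (example matrix of my own).
import sympy as sp

T = sp.Matrix([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 5],
])

# dim null T^k for k = 0, 1, ..., dim V; T^0 is the identity.
nullities = [len((T**k).nullspace()) for k in range(T.rows + 1)]
print(nullities)  # → [0, 1, 2, 3, 3]
```

The dimensions increase strictly until they stabilize at $k = 3 \leq \dim V$, exactly as the argument above predicts.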

A similar thing is true for ranges of powers of operators. Here, $V = \operatorname{range} T^0 \supseteq \operatorname{range} T^1 \supseteq \operatorname{range} T^2 \supseteq \cdots$. Again this stops at $\operatorname{range} T^{\dim V}$, after which all subsequent ranges are equal.

Now, Schur’s lemma in linear algebra says that every square complex matrix is unitarily triangularizable ($A = Q U Q^{*}$ with $Q$ unitary and $U$ upper triangular). The diagonal elements of $U$ are the eigenvalues of $T$. We can prove this by seeing that if $\lambda$ is an eigenvalue of $T$, then $T - \lambda I$ is not invertible; in the triangular form the diagonal entries of $U - \lambda I$ are $\lambda_j - \lambda$, and a triangular matrix is invertible exactly when all its diagonal entries are nonzero, which shows that $\lambda$ is one of the diagonal elements of $U$. The same reasoning in reverse can be used to prove the converse.
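Schur’s lemma is easy to check numerically; a minimal sketch with SciPy (assuming `scipy` is available; the matrix is an arbitrary example of mine):

```python
# Verify A = Q U Q* with Q unitary, U upper triangular, eigenvalues on diag(U).
import numpy as np
from scipy.linalg import schur

A = np.array([[3.0, 1.0],
              [-2.0, 0.0]])          # eigenvalues 1 and 2
U, Q = schur(A, output='complex')    # complex Schur decomposition

assert np.allclose(Q @ U @ Q.conj().T, A)   # A = Q U Q*
assert np.allclose(np.tril(U, -1), 0)       # U is upper triangular
eigs = np.sort_complex(np.diag(U))
print(eigs)                                  # ≈ [1, 2]
```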

If $T$ has $\dim V$ distinct eigenvalues, each appears exactly once among the diagonal elements of $U$. If not, an eigenvalue $\lambda$ is repeated $\dim \operatorname{null}(T - \lambda I)^{\dim V}$ times.

To prove this, it suffices to consider the case of $\lambda = 0$ (replace $T$ by $T - \lambda I$). We prove by induction on $n = \dim V$ that $0$ appears on the diagonal exactly $\dim \operatorname{null} T^{n}$ times.

For $n = 1$, if the single diagonal entry is $0$, then $T = 0$ and $\dim \operatorname{null} T^{1} = 1$; if it is nonzero, then $\dim \operatorname{null} T^{1} = 0$. Either way the fact is clearly true.

We assume that it is true for dimensions $1, \dots, n - 1$.

Now, suppose that $(v_1, \dots, v_n)$ is a basis for which $T$ has an upper triangular representation with $\lambda_1, \dots, \lambda_n$ as diagonal elements. If $U = \operatorname{span}(v_1, \dots, v_{n-1})$, then $U$ is clearly invariant under $T$, due to the matrix being triangular.

The matrix for $T|_U$ with the basis $(v_1, \dots, v_{n-1})$ is the matrix of $T$ without the last row and column. By the induction hypothesis, $0$ appears $\dim \operatorname{null} (T|_U)^{n-1}$ times on its diagonal. As $\operatorname{null} (T|_U)^{n-1} = \operatorname{null} (T|_U)^{n}$ (powers of $T|_U$ stabilize at $\dim U = n - 1$), $0$ appears $\dim \operatorname{null} (T|_U)^{n}$ times.

Now for the remaining diagonal entry $\lambda_n$, we consider two cases:

1. $\lambda_n \neq 0$: If $T$ has $\lambda_1, \dots, \lambda_n$ as the diagonal elements, then the matrix representation of $T^n$ is upper triangular with $\lambda_1^n, \dots, \lambda_n^n$ as diagonal elements. So $T^n v_n = \lambda_n^n v_n + u$ for some $u \in U$. Now suppose that $v \in \operatorname{null} T^n$. Then $v = u' + a v_n$ where $u' \in U$ and $a \in \mathbf{F}$ (the field of $V$), which gives $0 = T^n v = T^n u' + a u + a \lambda_n^n v_n$. The first two terms are in $U$, but the third one is not. So $a \lambda_n^n = 0$, meaning that $a = 0$. As a result $\operatorname{null} T^n \subseteq U$, giving $\operatorname{null} T^n = \operatorname{null} (T|_U)^{n} = \operatorname{null} (T|_U)^{n-1}$. $0$ appears $\dim \operatorname{null} T^n$ times.

2. $\lambda_n = 0$: If $\lambda_n = 0$, then $T v_n \in U$, giving $T^n v_n = (T|_U)^{n-1}(T v_n) \in \operatorname{range}(T|_U)^{n-1} = \operatorname{range}(T|_U)^{n}$. So, we can construct a vector $v = v_n - u$, with $u \in U$ chosen such that $(T|_U)^{n} u = T^n v_n$, so that $T^n v = 0$ and $v \notin U$. Now, $\dim \operatorname{null} T^n = \dim(\operatorname{null} T^n \cap U) + 1$. Here, $\dim \operatorname{null} T^n \leq \dim(\operatorname{null} T^n \cap U) + 1$ as $\dim V - \dim U = 1$, and $\dim \operatorname{null} T^n \geq \dim(\operatorname{null} T^n \cap U) + 1$ as $v \in \operatorname{null} T^n$ but $v \notin U$. Therefore, since $\operatorname{null} T^n \cap U = \operatorname{null}(T|_U)^{n-1}$, $0$ appears $\dim \operatorname{null}(T|_U)^{n-1} + 1 = \dim \operatorname{null} T^n$ times, which is as desired.

Statement proved.

Note that the multiplicity corresponding to an eigenvalue $\lambda$ is defined as $\dim \operatorname{null}(T - \lambda I)^{\dim V}$, i.e. the dimension of the associated generalized eigenspace. The sum of these multiplicities is equal to $\dim V$, as all the diagonal elements of $U$ are eigenvalues of $T$. The characteristic polynomial associated with a matrix is $q(z) = (z - \lambda_1)^{d_1} \cdots (z - \lambda_m)^{d_m}$, where $d_1, \dots, d_m$ are the multiplicities.
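This definition of multiplicity can be checked directly against the characteristic polynomial in SymPy (the matrix is a made-up example; `charpoly` is SymPy’s built-in $\det(zI - T)$):

```python
# Multiplicity of λ as dim null (T - λI)^(dim V); q(z) from the multiplicities.
import sympy as sp

T = sp.Matrix([
    [2, 1, 0],
    [0, 2, 0],
    [0, 0, 5],
])
n = T.rows
z = sp.symbols('z')

mult = {lam: len(((T - lam * sp.eye(n)) ** n).nullspace())
        for lam in T.eigenvals()}
print(mult)                                   # → {2: 2, 5: 1}

q = sp.expand(sp.Mul(*[(z - lam) ** d for lam, d in mult.items()]))
assert q == T.charpoly(z).as_expr()           # q matches det(zI - T)
assert sum(mult.values()) == n                # multiplicities sum to dim V
```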

Using Schur’s theorem as above, we can also prove the Cayley–Hamilton theorem very easily, through induction. The theorem states that if $q$ is the characteristic polynomial of $T$, then $q(T) = 0$. We need only show that $q(T) v_j = 0$ for all basis vectors $v_j$ of $V$, where the $v_j$s are basis vectors for which the matrix of $T$ is upper triangular, as in Schur’s theorem.

Suppose that $j = 1$. Using the triangular form of $T$, we have $(T - \lambda_1 I) v_1 = 0$.

Now, assume that for all $k$ between $1$ and $j - 1$:

$(T - \lambda_1 I)(T - \lambda_2 I) \cdots (T - \lambda_k I)\, v_k = 0$.

Now, because of the triangular form of $T$, $(T - \lambda_j I) v_j \in \operatorname{span}(v_1, \dots, v_{j-1})$. Since the factors commute and each $v_k$ with $k < j$ is annihilated by $(T - \lambda_1 I) \cdots (T - \lambda_k I)$, we get $(T - \lambda_1 I) \cdots (T - \lambda_j I) v_j = 0$. As $q(T)$ contains all of these factors, $q(T) v_j = 0$.
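A minimal numerical check of Cayley–Hamilton with SymPy (the triangular matrix is an arbitrary example of mine):

```python
# Check q(T) = 0 by substituting T into its characteristic polynomial.
import sympy as sp

T = sp.Matrix([
    [1, 2, 0],
    [0, 1, 3],
    [0, 0, 4],
])
z = sp.symbols('z')
q = T.charpoly(z)

# Coefficients from the constant term upwards: q(T) = sum_i c_i T^i.
qT = sp.zeros(3, 3)
for i, c in enumerate(reversed(q.all_coeffs())):
    qT += c * T ** i

assert qT == sp.zeros(3, 3)   # Cayley–Hamilton: q(T) is the zero matrix
```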

Now come the main theorems leading up to the Jordan form.

If $T$ is an operator on a complex vector space $V$ with distinct eigenvalues $\lambda_1, \dots, \lambda_m$, with corresponding subspaces of generalized eigenvectors $G_1, \dots, G_m$ (so $G_i = \operatorname{null}(T - \lambda_i I)^{\dim V}$), then $V = G_1 \oplus \cdots \oplus G_m$. For the proof, we have already seen that $\dim V = \dim G_1 + \cdots + \dim G_m$. Now we can see that each $G_i$ is invariant under $T$, as if $(T - \lambda_i I)^{\dim V} v = 0$, then $(T - \lambda_i I)^{\dim V}(T v) = T (T - \lambda_i I)^{\dim V} v = 0$, so $T v \in G_i$ also. As a result, if we consider $S = T|_{W}$, where $W = G_1 + \cdots + G_m$, then $S$ has the same eigenvalues and multiplicities as $T$. Therefore $\dim W = \dim V$, so $W = V$, and since the dimensions add up, the sum is direct. We therefore get the desired result.
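The decomposition can be seen concretely in SymPy: stacking bases of the generalized eigenspaces of a sample matrix (my own example) yields a basis of the whole space:

```python
# Check V = G_1 ⊕ ... ⊕ G_m with G_i = null (T - λ_i I)^(dim V).
import sympy as sp

T = sp.Matrix([
    [2, 1, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 3, 1],
    [0, 0, 0, 3],
])
n = T.rows

basis = []
for lam in T.eigenvals():
    basis += ((T - lam * sp.eye(n)) ** n).nullspace()   # basis of G_lam

M = sp.Matrix.hstack(*basis)
assert M.shape == (n, n)   # the dimensions of the G_i add up to dim V
assert M.rank() == n       # together they form a basis: the sum is direct
```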

Note that the generalized eigenspaces are disjoint (they intersect only in $\{0\}$), as they should be. Generalized eigenspaces of $T$ are invariant under $T$. Consider the eigenvalues $\alpha$ and $\beta$, $\alpha \neq \beta$, as examples, and write $n = \dim V$. So if $v \in \operatorname{null}(T - \alpha I)^n$, then so do $T v$ and $c v$, where $c$ is any scalar. As a result, even $(T - \beta I) v \in \operatorname{null}(T - \alpha I)^n$. Now assume that $v \in \operatorname{null}(T - \alpha I)^n \cap \operatorname{null}(T - \beta I)^n$. Then $(T - \alpha I)^n v = 0$. Therefore $(T - \alpha I)\big((T - \alpha I)^{n-1} v\big) = 0$. So now $w = (T - \alpha I)^{n-1} v$ satisfies $T w = \alpha w$. So if $w \neq 0$, then $(T - \beta I)^n w = (\alpha - \beta)^n w \neq 0$. But $w \in \operatorname{null}(T - \beta I)^n$ by invariance, a contradiction; so $(T - \alpha I)^{n-1} v = 0$. Applying the same argument as above, we get that $(T - \alpha I)^{n-2} v = 0$, and so on till we reach $(T - \alpha I)^0 v = v$, giving $v = 0$.

Now, if $N \in \mathcal{L}(V)$ is nilpotent, there exist vectors $v_1, \dots, v_k \in V$ such that $(v_1, N v_1, \dots, N^{m(v_1)} v_1, \dots, v_k, N v_k, \dots, N^{m(v_k)} v_k)$ is a basis of $V$, and $(N^{m(v_1)} v_1, \dots, N^{m(v_k)} v_k)$ is a basis of $\operatorname{null} N$, where $m(v)$ is the largest non-negative integer such that $N^{m(v)} v \neq 0$.

For the proof, we use induction on $\dim V$. As $N$ is nilpotent, $\operatorname{null} N \neq \{0\}$, so $\dim \operatorname{range} N < \dim V$. Assume that the claim holds for all vector spaces of lesser dimensions, in particular for $N|_{\operatorname{range} N}$. So there are vectors $u_1, \dots, u_j \in \operatorname{range} N$ such that $(u_1, N u_1, \dots, N^{m(u_1)} u_1, \dots, u_j, \dots, N^{m(u_j)} u_j)$ is a basis of $\operatorname{range} N$ and $(N^{m(u_1)} u_1, \dots, N^{m(u_j)} u_j)$ is a basis of $\operatorname{null} N \cap \operatorname{range} N$.

As each $u_r \in \operatorname{range} N$, we can choose a corresponding $v_r \in V$ such that $N v_r = u_r$ for each $r$. Therefore, $m(v_r) = m(u_r) + 1$. Now we choose a subspace $W$ of $\operatorname{null} N$ such that $\operatorname{null} N = (\operatorname{null} N \cap \operatorname{range} N) \oplus W$, and then a basis $(v_{j+1}, \dots, v_k)$ of $W$. As these are in $\operatorname{null} N$, $m(v_r) = 0$ for $r = j + 1, \dots, k$.

To show that the basis for $V$ in the statement is linearly independent, suppose that $\sum_{r=1}^{k} \sum_{s=0}^{m(v_r)} a_{r,s} N^s v_r = 0$. Then, applying $N$, $\sum_{r=1}^{j} \sum_{s=0}^{m(u_r)} a_{r,s} N^s u_r = 0$ (the terms with $r > j$ or $s = m(v_r)$ are killed by $N$).

Now by the induction hypothesis, $a_{r,s} = 0$ for $r \leq j$ and $s \leq m(u_r) = m(v_r) - 1$. The original relation thus reduces to $\sum_{r=1}^{j} a_{r, m(v_r)} N^{m(v_r)} v_r + \sum_{r=j+1}^{k} a_{r,0} v_r = 0$. Also, $a_{r, m(v_r)} = 0$ for $r \leq j$ as $(N^{m(u_1)} u_1, \dots, N^{m(u_j)} u_j)$ is a basis of $\operatorname{null} N \cap \operatorname{range} N$, and $a_{r,0} = 0$ for $r > j$ as $(v_{j+1}, \dots, v_k)$ is a basis of $W$ and the sum $(\operatorname{null} N \cap \operatorname{range} N) \oplus W$ is direct.

Now, as assumed, $\dim \operatorname{range} N = \sum_{r=1}^{j} (m(u_r) + 1)$ and $\dim \operatorname{null} N = j + (k - j) = k$. It can be seen that the list contains $\sum_{r=1}^{k} (m(v_r) + 1) = \dim \operatorname{range} N + \dim \operatorname{null} N = \dim V$ vectors. Therefore the set of vectors under consideration is indeed a basis of $V$. Also, $(N^{m(v_1)} v_1, \dots, N^{m(v_k)} v_k) = (N^{m(u_1)} u_1, \dots, N^{m(u_j)} u_j, v_{j+1}, \dots, v_k)$ is a basis of $\operatorname{null} N$.
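The statement of the theorem can be made concrete on a single-chain example (chosen by me), where one vector $v$ with $m(v) = 2$ generates the whole basis:

```python
# One chain (v, N v, N^2 v) is a basis of V; N^2 v spans null N.
import sympy as sp

N = sp.Matrix([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])
v = sp.Matrix([0, 0, 1])       # N^2 v ≠ 0 and N^3 v = 0, so m(v) = 2

chain = [N ** s * v for s in range(3)]          # (v, N v, N^2 v)
assert sp.Matrix.hstack(*chain).rank() == 3     # linearly independent: a basis
assert (N ** 3 * v).is_zero_matrix              # m(v) = 2 indeed
assert len(N.nullspace()) == 1                  # null N is spanned by N^2 v
```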

To get to the Jordan canonical form, consider a nilpotent operator $N$. For each $r$, reverse the chain to $(N^{m(v_r)} v_r, \dots, N v_r, v_r)$. With these, $N(\text{first vector}) = 0$, $N(\text{second vector}) = (\text{first vector})$, and so on. The resultant block has $0$s on the diagonal and $1$s on the superdiagonal.
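A sketch of this change of basis in SymPy: conjugating a made-up nilpotent matrix by the reversed chain produces exactly the block with $1$s on the superdiagonal:

```python
# In the basis (N^2 v, N v, v), the nilpotent N becomes a Jordan block.
import sympy as sp

N = sp.Matrix([
    [0, 0, 0],
    [1, 0, 0],
    [2, 3, 0],
])                              # strictly lower triangular, hence nilpotent
v = sp.Matrix([1, 0, 0])        # N^2 v ≠ 0 and N^3 v = 0, so m(v) = 2

P = sp.Matrix.hstack(N ** 2 * v, N * v, v)   # reversed chain as columns
J = P.inv() * N * P                          # matrix of N in the chain basis
assert J == sp.Matrix([
    [0, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
])
```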

For a general $T \in \mathcal{L}(V)$ with distinct eigenvalues $\lambda_1, \dots, \lambda_m$, as $V = G_1 \oplus \cdots \oplus G_m$ where each $(T - \lambda_i I)|_{G_i}$ is nilpotent, we have our Jordan basis, giving the familiar block-diagonal form. The exact structure of the Jordan form depends not only on the algebraic and geometric multiplicities of the eigenvalues, but also on the dimensions of the nullspaces of the powers of $T - \lambda_i I$, with $\dim \operatorname{null}(T - \lambda_i I)^{s} - \dim \operatorname{null}(T - \lambda_i I)^{s-1}$ being the number of Jordan blocks of size at least $s$ corresponding to the eigenvalue $\lambda_i$.
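This nullity formula can be cross-checked against SymPy’s built-in `jordan_form` on a small example of mine:

```python
# Blocks of size >= s for λ: dim null (T-λI)^s − dim null (T-λI)^(s-1).
import sympy as sp

T = sp.Matrix([
    [2, 1, 0, 0],
    [0, 2, 0, 0],
    [0, 0, 2, 0],
    [0, 0, 0, 7],
])
n = T.rows
lam = 2
N = T - lam * sp.eye(n)

nullity = [len((N ** s).nullspace()) for s in range(n + 1)]
blocks_geq = [nullity[s] - nullity[s - 1] for s in range(1, n + 1)]
print(blocks_geq)   # → [2, 1, 0, 0]: two blocks for λ=2, one of size 2

P, J = T.jordan_form()   # SymPy's Jordan form for comparison
print(J)                 # one 2x2 block and one 1x1 block for λ=2
```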

Note that two matrices are similar (conjugate) if and only if they have the same Jordan canonical form, up to a permutation of the Jordan blocks.