### Jordan normal form – Graph Theory

The last post was about the usual generalized-eigenspaces approach to the Jordan normal form. Here I’ll describe another approach, using the language of graph theory, following Turnbull and Aitken’s proof as presented in the article The Jordan Canonical Form: an Old Proof by Richard A. Brualdi.

Consider an $n \times n$ complex matrix $A$. The associated digraph (directed graph) $D(A)$ has $n$ vertices (labelled $1, 2, \ldots, n$) and has an arc $(i,j)$ from vertex $i$ to vertex $j$ whenever $i \neq j$ and $A(i,j) \neq 0$ (diagonal entries do not contribute arcs).

Now by Schur’s theorem, every $n \times n$ matrix $A$ is similar to an upper triangular matrix $T$. Note that for upper triangular matrices, the associated digraphs have arcs only from lower-numbered to higher-numbered vertices.

In a digraph $D$, a sequence $(i_1, i_2, \ldots, i_{k-1}, i_k, i_1)$, where the $i$s are all distinct and where $(i_1, i_2), (i_2, i_3), \ldots, (i_{k-1}, i_k), (i_k, i_1)$ are all arcs of $D$, is called a cycle.

Note that since the arcs of the digraph of a triangular matrix only point in one direction, these digraphs are acyclic. The converse is not true. However, a matrix with an acyclic digraph can be made triangular by a permutation similarity. To see this, note that if we have an acyclic digraph we only need to reorder the vertices so that all arcs point towards higher-numbered (or all towards lower-numbered) vertices. So for example, if you want to exchange the vertices $1$ and $3$, the permutation matrix $P(1,3)$ applied on the right redirects all arcs going into $3$ to go into $1$, and vice versa. Applied on the left, it redirects all arcs coming out of $3$ to come out of $1$, and vice versa.
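To make the reordering concrete, here is a minimal sketch (the names `topological_order` and `permute_similarity` are mine, not from the article): it topologically orders the vertices of an acyclic digraph and applies the corresponding permutation similarity, producing an upper triangular matrix.

```python
def topological_order(A):
    """Order the vertices so that every arc (i, j) (i.e. i != j and
    A[i][j] != 0) points from earlier to later; the digraph must be acyclic."""
    n = len(A)
    indeg = [0] * n
    for i in range(n):
        for j in range(n):
            if i != j and A[i][j] != 0:
                indeg[j] += 1
    order = []
    ready = [v for v in range(n) if indeg[v] == 0]
    while ready:
        v = ready.pop()
        order.append(v)
        for j in range(n):
            if j != v and A[v][j] != 0:
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    return order  # order[k] = original vertex placed at position k

def permute_similarity(A, order):
    """Return P^{-1} A P, i.e. relabel vertex order[k] as k."""
    n = len(A)
    return [[A[order[r]][order[c]] for c in range(n)] for r in range(n)]

# Arcs (1-indexed vertices): 3 -> 1 and 1 -> 2; acyclic but not triangular.
A = [[5, 4, 0],
     [0, 6, 0],
     [7, 0, 2]]
T = permute_similarity(A, topological_order(A))
assert all(T[r][c] == 0 for r in range(3) for c in range(r))  # upper triangular
```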

Also, a path in $D$ is a sequence $(i_1, i_2, i_3, \ldots, i_k)$ of distinct vertices such that $(i_1, i_2), (i_2, i_3), \ldots, (i_{k-1}, i_k)$ are all arcs of $D$. A digraph whose arcs form exactly such a path is called a path digraph. Again, any matrix with a path digraph can be brought, by a permutation similarity, to a form where the path runs from the smallest vertex to the largest, or vice versa.

An example of a matrix with a path digraph is $\begin{bmatrix} 3 & 2 & 0 & 0 \\ 0 & 1 & 7 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 4 \end{bmatrix}$, whose digraph is the path $1 \to 2 \to 3 \to 4$. This looks like a Jordan block!

To make all the nonzero superdiagonal entries equal to $1$, we can use a diagonal similarity. So if a diagonal matrix is written $diag(d_1, d_2, \ldots, d_n)$, then for the above example the matrix that works is $diag(1,1/2,1,1)\,diag(1,1,1/14,1)\,diag(1,1,1,1/42) = diag(1, 1/2, 1/14, 1/42)$.
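A quick check of this diagonal similarity (the helper name `diagonal_similarity` is mine): since $(D^{-1}TD)(i,j) = T(i,j)\,d_j/d_i$, the product $diag(1, 1/2, 1/14, 1/42)$ rescales every superdiagonal entry of the example to $1$ while leaving the diagonal alone.

```python
from fractions import Fraction as F

def diagonal_similarity(T, d):
    """Return D^{-1} T D for D = diag(d_1, ..., d_n): entry (i, j)
    is scaled by d_j / d_i, so the diagonal is unchanged."""
    n = len(T)
    return [[T[i][j] * d[j] / d[i] for j in range(n)] for i in range(n)]

T = [[F(3), F(2), F(0), F(0)],
     [F(0), F(1), F(7), F(0)],
     [F(0), F(0), F(0), F(3)],
     [F(0), F(0), F(0), F(4)]]
d = [F(1), F(1, 2), F(1, 14), F(1, 42)]  # product of the three factors in the text
S = diagonal_similarity(T, d)
assert [S[i][i + 1] for i in range(3)] == [1, 1, 1]  # superdiagonal now all 1s
assert [S[i][i] for i in range(4)] == [3, 1, 0, 4]   # diagonal untouched
```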

The digraph of the Jordan form of a given $A$, say $A_J$, has a more complicated structure. It is a collection of paths which, considered pairwise, have no common vertices. Each path is associated with a single eigenvalue, although an eigenvalue can be associated with multiple paths. As an example, $\begin{bmatrix} 3 & 1 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0& 0 & 0 & 0 \end{bmatrix}$ has a digraph consisting of the two vertex-disjoint paths $1 \to 2$ (for the eigenvalue $3$) and $3 \to 4$ (for the eigenvalue $0$).

Now to the main point. Jordan’s theorem can be rephrased in the language of graph theory as: every $n \times n$ matrix $A$ is similar to a matrix $A_J$ whose digraph is a collection of pairwise vertex-disjoint paths, where each path is associated with exactly one eigenvalue.

For the proof, we need permutation, diagonal and combination similarities. The combination similarity $C_{(i,j)}(h)$ applied to a matrix $B$ adds $h$ times column $i$ of $B$ to column $j$, and then adds $-h$ times row $j$ of $BC_{(i,j)}$ to row $i$. The actual form of $C_{(i,j)}$ is the identity matrix with $h$ in the $(i,j)$ position, where $i \neq j$. The inverse has $-h$ in place of the $h$.

To see how a combination similarity affects an upper triangular matrix $B$:

$\begin{array}{lcl} BC_{(i,j)}(a,b) & = & B(a,b), b\neq j \\ " & = & B(a,j)+hB(a,i), b=j \end{array}$

$\begin{array}{lcl} C_{(i,j)}^{-1}BC_{(i,j)}(a,b) & = & B(a,b), a\neq i, b\neq j \\ " & = & B(a,j)+hB(a,i), a\neq i, b=j \\ " & = & B(i,b)-hB(j,b), a=i, b\neq j \\ " & = & B(i,j)+h(B(i,i)-B(j,j)) - h^2 B(j,i), a=i, b=j \end{array}$
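These formulas are easy to verify numerically; here is a small sketch (helper names are mine) that builds $C_{(i,j)}(h)$ explicitly and checks the $(i,j)$ entry of $C_{(i,j)}^{-1} B C_{(i,j)}$ against the last line of the display above.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def comb(n, i, j, h):
    """C_{(i,j)}(h): the identity with h in position (i, j) (0-indexed here)."""
    C = [[F(1) if r == c else F(0) for c in range(n)] for r in range(n)]
    C[i][j] = h
    return C

B = [[F(2), F(5), F(4)],
     [F(0), F(3), F(1)],
     [F(0), F(0), F(3)]]
i, j, h = 0, 2, F(7)
S = matmul(comb(3, i, j, -h), matmul(B, comb(3, i, j, h)))
# The (i, j) entry matches B(i,j) + h(B(i,i) - B(j,j)) - h^2 B(j,i):
assert S[i][j] == B[i][j] + h * (B[i][i] - B[j][j]) - h ** 2 * B[j][i]
```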

If $i < j$, then of course $B(j,i) = 0$, since $B$ is upper triangular.

Theorem: An $n \times n$ matrix $A$ is similar to a matrix which is a direct sum of Jordan blocks, where a Jordan block $J_k$ is a $k \times k$ matrix with a single eigenvalue in all its diagonal positions and ones on the superdiagonal.

Proof:

1. By Schur’s theorem, there exists an invertible $P$ such that $T=P^{-1}AP$, where $T$ is upper triangular. The Schur decomposition can moreover be chosen so that equal diagonal entries occur consecutively.

2. Consider the block form of $T$, i.e. $\begin{bmatrix} T_1 & X \\ 0 & T_2 \end{bmatrix}$. Here $T_1$ is an upper triangular matrix with all diagonal entries equal to some $a$, and $T_2$ is an upper triangular matrix none of whose diagonal entries equals $a$. Using combination similarities $C_{(i,j)}(h)$ with $i$ indexing into $T_1$ and $j$ into $T_2$, $X$ can be made $0$ without altering $T_1$ or $T_2$: since $T(j,i) = 0$, choosing $h = -T(i,j)/(T(i,i)-T(j,j))$ kills the $(i,j)$ entry, and the denominator is nonzero because $T(i,i) = a \neq T(j,j)$.

We can repeat the process in 2. again and again, till $T$ is a direct sum of upper triangular matrices, each with a constant diagonal. …
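A sketch of step 2 (the function `split_blocks` and the elimination order are my own choices, not spelled out in the post): sweeping the columns of $X$ left to right and each column bottom to top, the choice $h = -T(i,j)/(T(i,i)-T(j,j))$ zeroes each entry in turn without disturbing the entries already cleared.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def comb(n, i, j, h):
    """C_{(i,j)}(h): the identity with h in position (i, j) (0-indexed here)."""
    C = [[F(1) if r == c else F(0) for c in range(n)] for r in range(n)]
    C[i][j] = h
    return C

def split_blocks(T, k):
    """Zero T[i][j] for i < k <= j, assuming T is upper triangular and
    no diagonal entry of the first block equals one of the second."""
    n = len(T)
    for j in range(k, n):               # columns of X, left to right
        for i in range(k - 1, -1, -1):  # rows of X, bottom to top
            h = -T[i][j] / (T[i][i] - T[j][j])
            T = matmul(comb(n, i, j, -h), matmul(T, comb(n, i, j, h)))
    return T

T = [[F(2), F(1), F(4), F(5)],
     [F(0), F(2), F(6), F(7)],
     [F(0), F(0), F(3), F(1)],
     [F(0), F(0), F(0), F(3)]]
S = split_blocks(T, 2)
assert all(S[i][j] == 0 for i in range(2) for j in range(2, 4))  # X is gone
assert S[0][1] == 1 and S[2][3] == 1  # T1 and T2 are untouched
```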

Let’s first consider an $n \times n$ upper triangular matrix $T$ with all diagonal entries equal to some $a$. We now use an induction argument:

a. If $n=1$, $T=\begin{bmatrix} a \end{bmatrix}$, which is already a Jordan block. If $n=2$, $T=\begin{bmatrix} a & p \\ 0 & a \end{bmatrix}$. If $p \neq 0$, it can be made $1$ by a diagonal similarity, giving a Jordan block; if $p = 0$, $T$ is already a direct sum of two $1 \times 1$ Jordan blocks.

b. Suppose that $(n-1) \times (n-1)$ upper triangular matrices as above can be transformed into Jordan form. Then so can the leading $(n-1) \times (n-1)$ upper triangular block of an $n \times n$ upper triangular matrix. So,

$T_1 = \begin{bmatrix} F_{(n-1)\times(n-1)} & *_{(n-1)\times 1} \\ 0_{1\times(n-1)} & a \end{bmatrix}$, where subscripts indicate the sizes of the blocks, and $*$ stands for a possibly nonzero block.

Following b., we have

$\begin{bmatrix} F_{(n-1)\times(n-1)} & *_{(n-1)\times 1} \\ 0_{1\times(n-1)} & a \end{bmatrix} = \begin{bmatrix} (Q^{-1}SQ)_{(n-1)\times(n-1)} & *_{(n-1)\times 1} \\ 0_{1\times(n-1)} & a \end{bmatrix}$

$= \begin{bmatrix} Q^{-1}_{(n-1)\times(n-1)} & 0_{(n-1)\times 1} \\ 0_{1\times(n-1)} & 1 \end{bmatrix} \begin{bmatrix} S_{(n-1)\times(n-1)} & **_{(n-1)\times 1} \\ 0_{1\times(n-1)} & a \end{bmatrix} \begin{bmatrix} Q_{(n-1)\times(n-1)} & 0_{(n-1)\times 1} \\ 0_{1\times(n-1)} & 1 \end{bmatrix}$

Now, to complete the argument, we just need to show that the middle matrix in the above expression can be reduced to its Jordan form. As an example, consider the following matrix:

$\begin{bmatrix} a & 1 & 0 &0 & 0 & * \\ 0 & a & 0 & 0 & 0 & * \\ 0 & 0 & a & 1 & 0 & * \\ 0 & 0 & 0 & a & 1 & * \\ 0 & 0 & 0 & 0 & a & * \\ 0 & 0 & 0 & 0 & 0 & a \end{bmatrix}$

As the combination similarity $C_{(i,j)}$ only affects the parts of row $i$ to the right of, and column $j$ above, the element $(i,j)$, we can make most of the elements of the last column in the above matrix $0$. For instance, if element $(3,6)$ is $h$, then $C_{(4,6)}(-h)$ makes it $0$, thanks to the $1$ present in the 4th column. This can be done for all elements in the last column except $(2,6)$ and $(5,6)$, whose rows have no $1$s available to their right.

We’re almost there. If both $(2,6)$ and $(5,6)$ are $0$, we already have the Jordan form. If both are nonzero, using a diagonal similarity we can make the second-to-last element of the last column, here $(5,6)$, equal to $1$. Now if $(2,6)$ is $s$, then a $C_{(2,5)}(s)$ followed by a $C_{(1,4)}(s)$ makes element $(2,6)$ equal to $0$. We follow a similar procedure for any other pairs of nonzero entries left in the general case, always eliminating the nonzero entry across the smaller Jordan block. This leaves us with at most one nonzero off-diagonal entry in the last column. With a permutation similarity, this can be brought to the second-to-last position of the last column, giving us a Jordan form.
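Here is the whole last-column cleanup traced on the $6 \times 6$ example above, with $a = 2$ and some arbitrary values chosen for the $*$ entries (helper names and values are mine):

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def comb_sim(T, i, j, h):
    """Apply the combination similarity C_{(i,j)}(h) (1-indexed i, j)."""
    n = len(T)
    C = [[F(1) if r == c else F(0) for c in range(n)] for r in range(n)]
    Ci = [row[:] for row in C]
    C[i - 1][j - 1], Ci[i - 1][j - 1] = h, -h
    return matmul(Ci, matmul(T, C))

a = F(2)  # the repeated eigenvalue; last-column * entries chosen arbitrarily
T = [[a, 1, 0, 0, 0, F(9)],
     [0, a, 0, 0, 0, F(4)],
     [0, 0, a, 1, 0, F(8)],
     [0, 0, 0, a, 1, F(6)],
     [0, 0, 0, 0, a, F(5)],
     [0, 0, 0, 0, 0, a]]
# Rows 1, 3, 4 have a 1 available to their right, so C_{(i+1,6)}(-T(i,6))
# clears (1,6), (3,6) and (4,6) without disturbing anything else:
for i in (1, 3, 4):
    T = comb_sim(T, i + 1, 6, -T[i - 1][5])
# A diagonal similarity diag(1,...,1,1/T(5,6)) rescales (5,6) to 1:
d = [F(1)] * 5 + [F(1) / T[4][5]]
T = [[T[r][c] * d[c] / d[r] for c in range(6)] for r in range(6)]
# Finally C_{(2,5)}(s) followed by C_{(1,4)}(s) eliminates s = T(2,6):
s = T[1][5]
T = comb_sim(T, 2, 5, s)
T = comb_sim(T, 1, 4, s)
expected = [[a, 1, 0, 0, 0, 0],   # a J_2 block and a J_4 block
            [0, a, 0, 0, 0, 0],
            [0, 0, a, 1, 0, 0],
            [0, 0, 0, a, 1, 0],
            [0, 0, 0, 0, a, 1],
            [0, 0, 0, 0, 0, a]]
assert T == expected
```

In this trace the surviving entry already lands at position $(5,6)$, so no final permutation is needed; the result is the direct sum $J_2 \oplus J_4$.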

Thus, we have proved the existence of a Jordan form for every matrix.