MathGroup Archive 1999

Re: Characteristic Polynomials and Eigenvalues

  • To: mathgroup at smc.vnet.net
  • Subject: [mg19401] Re: Characteristic Polynomials and Eigenvalues
  • From: Martin Kraus <i-martin at wolfram.com>
  • Date: Mon, 23 Aug 1999 13:57:15 -0400
  • Organization: Wolfram Research, Inc.
  • References: <7pl6uv$cg9@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

Hi Manuel!

MAvalosJr at aol.com wrote:
> 
> Gentlemen:
> 
> I have been studying linear algebra and with the aid of several programs and
> add-ons to Mathematica the task has been a piece of cake. However, the time
> comes when suddenly "understanding" rears its ugly head.
> Given the vectors {4,-6}, {3, -7}, the characteristic polynomial is x^2 + 3 x
> -10. The eigenvalues are (-5,2), the eigenvectors are (2,3) and (3,1). My
> question:

Just to be a little more explicit, here is the Mathematica code:

In[1]:=
matrix = {{4, -6}, {3, -7}}

Out[1]=
{{4, -6}, {3, -7}}

In[2]:=
Eigenvectors[matrix]

Out[2]=
{{2, 3}, {3, 1}}

In[3]:=
Eigenvalues[matrix]

Out[3]=
{-5, 2}

The characteristic polynomial is calculated this way:

In[4]:=
Det[matrix - x*IdentityMatrix[2]]

Out[4]=
-10 + 3 x + x^2
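
(Depending on your version of Mathematica, the built-in function
CharacteristicPolynomial[matrix, x] may also be available; it computes
the same polynomial directly. The Det construction above is the
instructive one, though.)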

> What does the characteristic polynomial (since it describes a curve) have to
> do with the vectors (which are straight lines)? 

Good question. I have no idea. :)
However, I know what the characteristic polynomial is good for!

Why is it interesting to look at that determinant anyway?
The reason is this matrix equation:

matrix.v == x*v

with the unknown vector v and the unknown scalar x. 
Let's use {v1,v2} instead of v and solve this equation with Mathematica:

In[12]:=
Solve[matrix.{v1, v2} == x {v1, v2}, {v1, v2, x}]

Solve::"svars": "Equations may not give solutions for all \"solve\" \
variables."

Out[12]=
{{v1 -> (2 v2)/3, x -> -5}, {v1 -> 3 v2, x -> 2}, {v1 -> 0, v2 -> 0}}

*wow* It works! Mathematica returns the eigenvalues for x and the
eigenvectors for {v1, v2}! :)
In fact, the equation above defines the eigenvectors and the
eigenvalues: if we apply a matrix to one of its eigenvectors, we get
the eigenvector multiplied by the corresponding eigenvalue. In other
words, eigenvectors do not change their direction, only their
magnitude, when the corresponding linear mapping is applied to them.
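
We can check this directly for both eigenpairs of our matrix:

In[13]:=
matrix.{2, 3} == -5 {2, 3} && matrix.{3, 1} == 2 {3, 1}

Out[13]=
True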

Well, that's kind of off-topic; I wanted to explain what the
characteristic polynomial is. Let's modify our matrix equation
by subtracting x*v from both sides:

matrix.v - x*v == 0

and now rewrite this to

(matrix - x*IdentityMatrix[2]).v == 0

Thus, we are close to understanding why Det[matrix - x*IdentityMatrix[2]]
is an interesting polynomial!
In fact, it is a standard result of linear algebra that a matrix equation
m.v == 0 with a given matrix m and an unknown vector v has non-trivial
solutions for v if and only if Det[m] == 0.
Thus, it is not really the polynomial Det[matrix - x*IdentityMatrix[2]]
itself which is interesting, but its roots!
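
For instance, plugging the root x = -5 into the matrix makes the
determinant vanish, so NullSpace should find a non-trivial solution,
namely a multiple of the corresponding eigenvector:

In[14]:=
NullSpace[matrix - (-5)*IdentityMatrix[2]]

Out[14]=
{{2, 3}}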

Let's have a look at them:

In[15]:=
Roots[Det[matrix - x *IdentityMatrix[2]] == 0, x]

Out[15]=
x == -5 || x == 2

Ah, again the eigenvalues!
And that is the main meaning of the characteristic polynomial
I am aware of: its roots are the eigenvalues of the corresponding matrix.
(Note: the complete set of roots, with multiplicities, determines a
polynomial completely up to a constant factor.)
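
Indeed, factoring the characteristic polynomial makes this explicit:
Factor[Det[matrix - x*IdentityMatrix[2]]] returns (-2 + x) (5 + x),
so the roots 2 and -5 can be read off directly.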

> Or for that matter, what do the
> eigenvalues and eigenvectors (derived from the matrix or the polynomial) have
> to do with the vectors?
> I plotted the polynomial but can't figure out what it has to do with the
> vectors.
> 
> Thanks for whatever
> Manuel

OK, I have explained what the relation between the characteristic
polynomial and the eigenvalues is. Also, we have seen the relation
between eigenvalues and eigenvectors. So, what is the relation
between the two row vectors of the matrix and the eigenvectors?

Again: I have no idea!
However, I know how to relate the column vectors of our matrix
to the linear mapping represented by that matrix. (And we have
already seen how the eigenvectors are related to the linear mapping.)

The relation is quite simple: the column vectors are the images
of the canonical basis vectors, i.e., applying the matrix to the
canonical basis vectors returns the column vectors.

In[16]:=
matrix.{1, 0}

Out[16]=
{4, 3}

and {4, 3} is the first column vector:

In[17]:=
Transpose[matrix][[1]]

Out[17]=
{4, 3}

The same is true for the second column vector:

In[18]:=
matrix.{0, 1}

Out[18]=
{-6, -7}

In[19]:=
Transpose[matrix][[2]]

Out[19]=
{-6, -7}

Thus, the images of the canonical basis vectors are the column
vectors of a matrix. The matrix represents a linear mapping, and
this mapping has the feature that it does not change the direction
of certain vectors, which are called eigenvectors. However, the
mapping does change the magnitude of these eigenvectors by factors
called eigenvalues, which turn out to be the roots of a polynomial
called the characteristic polynomial.

Can we draw a picture of this stuff?
Sure:

In[20]:=
<< Graphics`PlotField`

In[21]:=
<< Graphics`Arrow`

In[22]:=
PlotVectorField[matrix.{x, y}, {x, -6, 4}, {y, -7, 3},
    Epilog -> {RGBColor[0, 0, 1], Arrow[{1, 0}, matrix.{1, 0}],
        Arrow[{0, 1}, matrix.{0, 1}], RGBColor[0, 1, 0],
        Arrow[{0, 0}, Eigenvectors[matrix][[1]]],
        Arrow[{0, 0}, Eigenvectors[matrix][[2]]]},
    PlotPoints -> {11, 11}, Axes -> True];

The small black arrows show the matrix: the corresponding linear
mapping is applied to each position vector of the grid, and the result
is shown as a small black vector starting at that grid position.
However, the lengths of the resulting vectors are reduced by a common
factor such that the vectors do not intersect.
For the points {1, 0} and {0, 1}, the blue arrows correspond to the
small black vectors at their starting points, but with the correct
length. (Obviously the plot would be a mess if all the black vectors
had the correct length!)
But the blue arrows, given by matrix.{1, 0} and matrix.{0, 1}, are
equal to the column vectors!
Thus, thinking of the small black vectors at {1, 0} and {0, 1} as the
images of the canonical basis vectors, we see that at least their
direction is equal to the direction of the column vectors, which are
represented by the blue arrows. The difference in magnitude results
from the way PlotVectorField scales the black arrows.

The two column vectors already specify the whole matrix (black arrows);
therefore, the images of the canonical basis vectors (blue arrows)
also specify the whole linear mapping represented by our matrix.
Thus, instead of talking about two column vectors, we can also talk
about a linear mapping in two dimensions.
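
This is just linearity: any vector {a, b} equals a*{1, 0} + b*{0, 1},
so its image is the same combination of the two column vectors. A
quick symbolic check:

In[23]:=
Simplify[matrix.{a, b} == a*Transpose[matrix][[1]] + b*Transpose[matrix][[2]]]

Out[23]=
True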

The green arrows are the eigenvectors of this linear mapping.
They indicate two lines on which each position vector is mapped onto
a vector with the same direction, scaled by the corresponding
eigenvalue (which is negative for one of the eigenvectors, so the
image vectors on that line point the opposite way). Actually, there
are not only two eigenvectors: all position vectors (except the
origin) on the two lines indicated by the green arrows are
eigenvectors. (These two lines are called "eigenspaces".)
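
For example, every non-zero multiple of {2, 3} is again an eigenvector
with eigenvalue -5 (here with a symbolic factor t):

In[24]:=
Simplify[matrix.(t {2, 3}) == -5 t {2, 3}]

Out[24]=
True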

That's my idea of this stuff. I don't know whether this makes sense,
and it is probably not perfectly correct. Any comments by the experts?

Hope that helps, nonetheless!

Martin Kraus

