MathGroup Archive 2011

Re: Nonorthogonal Eigenvectors

  • To: mathgroup at smc.vnet.net
  • Subject: [mg116411] Re: Nonorthogonal Eigenvectors
  • From: "Kevin J. McCann" <kjm at KevinMcCann.com>
  • Date: Mon, 14 Feb 2011 04:26:14 -0500 (EST)
  • References: <201102121019.FAA20109@smc.vnet.net> <ij83f7$o6$1@smc.vnet.net>

Leonid,

You are quite right. I am not questioning the correctness of the answer. 
However, even if you allow for the different linear combinations, it 
remains that the exact eigenvector matrix is not orthogonal.

So, if you use the eigenvector matrix to diagonalize A, viz.

P.A.Transpose[P]

you get a diagonal matrix, but, of course, the diagonal entries are not 
necessarily the eigenvalues.
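(To spell that out with the A from the message below: since the exact 
eigenvectors are orthogonal but not unit length, each diagonal entry 
should come out as an eigenvalue scaled by the squared norm of its 
eigenvector. A quick check, worth rerunning yourself:)

P.A.Transpose[P]
(* expect DiagonalMatrix[{16, 9, 25, 0, 0}]: each entry is
   lambda_i * ||v_i||^2; e.g. the eigenvector {1,0,0,0,2} for
   eigenvalue 5 has squared norm 5, giving 5*5 = 25 *)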

There is nothing wrong here aside from the inconsistency between the 
exact and numeric results, which I think is a design mistake even if it 
is not mathematically incorrect. It is this inconsistency that I am 
questioning. The issue is: why are the exact eigenvectors not normalized?
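(For what it's worth, one workaround is to normalize the exact result 
explicitly; either of the following should do, the second also handling 
degenerate subspaces whose representatives are not already orthogonal:)

Pn = Normalize /@ Eigenvectors[A]  (* normalize each eigenvector *)
Orthogonalize[Eigenvectors[A]]     (* orthonormal basis outright *)

With Pn, Transpose[Pn].Pn should be the identity, and Pn.A.Transpose[Pn] 
should then carry the eigenvalues themselves on the diagonal.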

Kevin

On 2/13/2011 3:07 AM, Leonid Shifrin wrote:
> Well, you obviously have chosen a bad example. Your matrix has two
> degenerate zero eigenvalues, which means that any linear combination
> of the two corresponding eigenvectors can be chosen as the basis of
> the zero-eigenvalue subspace. The fact that the analytical and
> numerical routines make different choices there does not make either
> of them wrong. What IMO would actually be wrong would be to rely on
> any particular choice, given the degeneracy.
>
> Regards,
> Leonid
>
> On Sat, Feb 12, 2011 at 1:19 PM, Kevin J. McCann <kjm at kevinmccann.com> wrote:
>
>> I have seen some threads from the past on this, but never got a
>> satisfactory answer.
>>
>> Suppose I have an exact matrix A:
>>
>> A = {{1, 0, 0, 0, 2}, {0, 16, 0, 0, 0}, {0, 0, 9, 0, 0}, {0, 0, 0, 0,
>>    0}, {2, 0, 0, 0, 4}};
>>
>> P = Eigenvectors[A]
>>
>> produces the following
>>
>> {{0,1,0,0,0},{0,0,1,0,0},{1,0,0,0,2},{-2,0,0,0,1},{0,0,0,1,0}}
>>
>> which is not a unitary matrix, although the vectors are orthogonal,
>> just not normalized, i.e.
>>
>> Transpose[P].P
>>
>> is not the identity matrix.
>>
>> However, if I make A numeric:
>>
>> nA = A//N
>>
>> then
>>
>> nP = Eigenvectors[nA]
>>
>> produces
>>
>> {{0., 1., 0., 0., 0.}, {0., 0., 1., 0., 0.}, {0.447214, 0., 0., 0.,
>>    0.894427}, {-0.894427, 0., 0., 0., 0.447214}, {0., 0., 0., -1., 0.}}
>>
>> and
>>
>> Transpose[nP].nP
>>
>> is the identity matrix.
>>
>> I do not understand why making the matrix inexact produces the result
>> I would expect, while the exact matrix does not. I also don't think
>> the inconsistency is useful.
>>
>> Any ideas why someone decided to do it this way?
>>
>> Kevin
>>
>>
>

