Re: Lyapunov Equation
- To: mathgroup at christensen.cybernetics.net
- Subject: [mg868] Re: Lyapunov Equation
- From: Allan Hayes <hay at haystack.demon.co.uk>
- Date: Thu, 27 Apr 1995 01:40:35 -0400
I have been looking over the recent postings about the solution of
the Lyapunov equation
     p . a + Transpose[a] . p == -q          (1)
for p, where a is a matrix, p is a symmetric matrix and q is a
symmetric positive definite matrix.
In [mg729] Paul Rubin uses Solve to find the entries in p (he has
q on the right instead of -q); in [mg713] Bernd Cebulski obtains a
nice formula for the solution.
I wondered whether it would be better to substitute LinearSolve for
Solve in Paul's solution.
The investigation turned up some questions that I had not expected
to have to deal with:
(a) an apparent bug in LinearSolve;
(b) a query about the reliability of solutions;
(c) can we allow the precision estimates used in arbitrary-precision
arithmetic to be influenced by the algorithm being used?
I apologise for the length of the following account.
The following picks out the working part of Paul's code (with q
changed to -q):
In[1]:=
LpSolve[a_,q_]:=
Flatten[
p/.
Solve[
p . a + Transpose[a] . p == -q,
Union[Flatten[p]]
],
1
]
Here is my LinearSolve code
In[2]:=
LpSolve2[a_,q_] :=
p/.Thread[(Union@@p) ->
LinearSolve[
Outer[Coefficient,
Flatten[p.a + Transpose[a] . p],
Union@@p
],
Flatten[-q]
] ]
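The only step that may need unpacking is the rewrite back into
matrix form: Thread pairs each unknown with its solved value, and
p /. rules then fills both symmetric slots at once. Schematically
(a minimal illustration of mine, with placeholder values x1, x2, x3):

  Thread[{e[1,1], e[1,2], e[2,2]} -> {x1, x2, x3}]
  (* {e[1,1] -> x1, e[1,2] -> x2, e[2,2] -> x3} *)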
Test inputs are generated by
In[3]:=
test[n_] :=
( SeedRandom[1];
p = Array[ Sort[e[##]]&,{n,n}];
q = IdentityMatrix[n]; (*could be arbitrary positive definite*)
a = Table[ 2 Random[] - .5, {i, n}, {j, n} ]
)
(the technique for constructing the symmetric matrix p might be of
independent use)
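For instance, e is left undefined, so Sort just puts the arguments
of e[i,j] into canonical order, and the (2,1) entry evaluates to the
same expression as the (1,2) entry:

  Array[Sort[e[##]]&, {2, 2}]
  (* {{e[1,1], e[1,2]}, {e[1,2], e[2,2]}} *)
  Union[Flatten[%]]
  (* {e[1,1], e[1,2], e[2,2]} -- the independent unknowns *)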
Try the codes with n=2.
In[4]:= test[2];
(a) BUG IN LINEARSOLVE?
In[5]:=
LpSolve2[a,q];
LinearSolve::nosol: Linear equation encountered
which has no solution.
This is presumably due to some test used in LinearSolve.
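One observation of my own that may be relevant: the system handed to
LinearSolve is overdetermined, since Flatten produces n^2 equations
for only n(n+1)/2 independent unknowns. It is consistent, because
p . a + Transpose[a] . p is symmetric whenever p is, but an internal
solvability test could still object. For the n = 2 case just set up:

  Dimensions[Outer[Coefficient,
    Flatten[p . a + Transpose[a] . p], Union@@p]]
  (* {4, 3} -- four equations, three unknowns; the (1,2) and
     (2,1) equations coincide by symmetry *)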
Workaround: use arbitrary-precision arithmetic.
In[6]:=
LpSolve2@@SetPrecision[{a,q}, $MachinePrecision +1]
Out[6]=
{{-0.1485960799538594, -0.3531050582315298},
{-0.3531050582315298, 0.357157867314548}}
The following table shows that
* LpSolve2 is faster than LpSolve for n = 2 to 9;
* the advantage decreases as n grows;
* the precision of the solution steadily decreases (for n = 9
it is zero).
The last is to be expected from arbitrary-precision arithmetic,
which gives only the precision that can be conservatively
guaranteed.
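That conservatism is easy to see in isolation (a toy example of
mine, not part of the timing test): subtracting two nearby
arbitrary-precision numbers cancels their leading digits, and the
precision estimate drops accordingly.

  x = N[10^6 + 1/3, 20];
  Precision[x - 10^6]
  (* about 13 -- the six leading digits cancelled *)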
In[7]:=
data =
TableForm[
Table[
test[n];
{n, Timing[s[n] = LpSolve[a,q]][[1]],
Timing[s2[n] =
LpSolve2@@SetPrecision[{a,q},
$MachinePrecision +1]
][[1]],
Precision[s2[n]]
},
{n,2,9}
],
TableHeadings ->
{ {},
{"n", "LpSolve\ntime\n", "LpSolve2\ntime", "Precision
of \nLpSolve2"}
}
]
Out[7]//TableForm=
n LpSolve LpSolve2 Precision of
time time LpSolve2
2 0.116667 Second 0.0666667 Second 15
3 0.45 Second 0.183333 Second 14
4 1.3 Second 0.633333 Second 13
5 2.96667 Second 1.45 Second 10
6 6.01667 Second 3.3 Second 8
7 10.7667 Second 7.18333 Second 7
8 18.4667 Second 12.9333 Second 3
9 30.4167 Second 26.0833 Second 0
The same degree of loss of precision shows up if we use arbitrary
precision with LpSolve; the speed also goes down quite a bit.
(b) RELIABILITY OF SOLUTIONS?
If this loss of precision represents the true precision of the
algorithm, then presumably the machine arithmetic we used in the
table above for LpSolve gives an unreliable answer.
However, when we substitute s[9] for p on the left-hand side of (1),
we get
In[8]:=
s[9] . a + Transpose[a] . s[9]//Chop
Out[8]=
{{-1., 0, 0, 0, 0, 0, 0, 0, 0}, {0, -1., 0, 0, 0, 0, 0, 0, 0},
{0, 0, -1., 0, 0, 0, 0, 0, 0}, {0, 0, 0, -1., 0, 0, 0, 0, 0},
{0, 0, 0, 0, -1., 0, 0, 0, 0}, {0, 0, 0, 0, 0, -1., 0, 0, 0},
{0, 0, 0, 0, 0, 0, -1., 0, 0}, {0, 0, 0, 0, 0, 0, 0, -1., 0},
{0, 0, 0, 0, 0, 0, 0, 0, -1.}}
This is pretty good: the residual is essentially zero, suggesting
that the machine-arithmetic answer is far more accurate than the
conservative precision estimate would indicate.
We could increase the precision of s2[9] by increasing the
precision of the input. This slows things down a little, but we can
still get a speed-up over LpSolve for n from 2 to 9, though not
beyond n = 9. Interestingly, this higher-precision result is almost
exactly the same as the machine-arithmetic solution s[9]:
In[9]:=
(s2h[9] = LpSolve2@@SetPrecision[{a,q}, $MachinePrecision + 18]);//Timing
Out[9]=
{29.7167 Second, Null}
In[10]:=
Precision[s2h[9]]
Out[10]=
16
In[11]:=
Max[Abs[s2h[9] - s[9]]]
Out[11]=
1.98952 10^-13
(c) IS THE REDUCTION OF PRECISION WHEN USING ARBITRARY PRECISION
ARITHMETIC EXCESSIVE IN THESE CALCULATIONS, AND, IF SO, WHAT CAN BE
DONE ABOUT IT?
The formula solution given by Bernd Cebulski turns out to be
slower than the two methods used above for n > 3 (perhaps because
it forms the full Inverse of an n^2 x n^2 matrix rather than doing
a single linear solve).
In[12]:=
Needs["LinearAlgebra`MatrixManipulation`"];
rows[a_]:=Part[Dimensions[a],1];     (* number of rows of a matrix *)
columns[a_]:=Part[Dimensions[a],2];  (* number of columns *)
kron[a_,b_]:=
  BlockMatrix[Array[Part[Outer[Times,a,b],#1,#2]&,Dimensions[a]]];
                                     (* Kronecker product *)
tovector[a_]:=Flatten[Transpose[a]]; (* vectorize operator *)
tomatrix[a_]:=Transpose[Partition[a,Sqrt[rows[a]]]]; (* its inverse *)
ljapunow[a_,c_]:=
  tomatrix[
    Inverse[kron[Transpose[a], IdentityMatrix[rows[a]]] +
            kron[IdentityMatrix[columns[a]], Transpose[a]]] .
    tovector[-Transpose[c]]]
(* gets the matrix P from the system matrix A and the symmetric positive
   definite matrix C (perhaps the identity matrix, but not necessarily) *)
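For reference, the formula rests on the standard identities
vec(X A) = kron(Transpose[A], Id) . vec(X) and
vec(Transpose[A] X) = kron(Id, Transpose[A]) . vec(X) for the
column-stacking vec, which turn (1) into an n^2 x n^2 linear system
for tovector[p]. A quick numerical sanity check of my own, using the
kron and tovector just defined on a random 3 x 3 example (aa and xx
are throwaway names):

  (nn = 3;
   aa = Table[Random[], {nn}, {nn}]; xx = Table[Random[], {nn}, {nn}];
   Max[Abs[tovector[xx . aa + Transpose[aa] . xx] -
     (kron[Transpose[aa], IdentityMatrix[nn]] +
      kron[IdentityMatrix[nn], Transpose[aa]]) . tovector[xx]]])
  (* zero, up to rounding *)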
In[20]:=
Do[Print[test[n];{n, Timing[ljapunow[a,q]][[1]] }], {n,2,7}]
{2, 0.0666667 Second}
{3, 0.416667 Second}
{4, 1.81667 Second}
{5, 5.96667 Second}
{6, 17.1333 Second}
{7, 41.0833 Second}
The corresponding timings for LpSolve and LpSolve2 were
In[21]:=
Print/@(Take[#,3]&/@ Take[data[[1]],6]);
{2, 0.116667 Second, 0.0666667 Second}
{3, 0.45 Second, 0.183333 Second}
{4, 1.3 Second, 0.633333 Second}
{5, 2.96667 Second, 1.45 Second}
{6, 6.01667 Second, 3.3 Second}
{7, 10.7667 Second, 7.18333 Second}
Allan Hayes
hay at haystack.demon.co.uk