MathGroup Archive 2005

Re: NDSolve and differential equation system

  • To: mathgroup at smc.vnet.net
  • Subject: [mg54039] Re: NDSolve and differential equation system
  • From: Maxim <ab_def at prontomail.com>
  • Date: Tue, 8 Feb 2005 05:31:28 -0500 (EST)
  • Organization: MTU-Intel ISP
  • References: <cu1vjh$ltj$1@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

On Sat, 5 Feb 2005 08:20:01 +0000 (UTC), Christian Moosmann <some@Adress>  
wrote:

> Hi MathGroup
>
> I have a question concerning NDSolve and a system of differential  
> equations.
>
> Consider an equation system A.x'[t] + (K1 + v[t]*K2).x[t] == B, where
> A, K1, K2 are matrices, B and x are vectors, and v[t] is a given scalar
> function. I want to solve this system with NDSolve; however, it takes
> a very long time.
>
> To try it, you can use Theodore Gray's ShowStatus function in the
> front end.
>
> ShowStatus[status_] :=
>      LinkWrite[$ParentLink,
>        SetNotebookStatusLine[FrontEnd`EvaluationNotebook[],
>          ToString[ status ] ] ];
>
> Dim = 40;
>
> I use random matrices here. They show the same basic behaviour as the
> matrices I usually use (those are also dense), so this problem should
> not be directly related to the matrices.
>
> K1 = Table[Random[], {i, Dim}, {j, Dim}];
> K2 = Table[Random[], {i, Dim}, {j, Dim}];
> Ainv = Inverse[Table[Random[], {i, Dim}, {j, Dim}]];
> B = Table[Random[], {i, Dim}];
>
> tStart = 0.;
> tEnd = 1;
>
> First we solve without the function v[t]:
>
> Timing[NDSolve[{D[ x[t], t] + Ainv.( K1 + K2).x[t] == B  ,
>        x[tStart] == Table[0. , {Dim}]} , x , { t, tStart, tEnd },
>      SolveDelayed -> True, StepMonitor :> ShowStatus[t] ] ]
>
> {0.080987 Second, {{x -> InterpolatingFunction[{{0., 1.}}, <>]}}}
>
> This is quite fast; now we would like to include the function:
>
> v[t_] := UnitStep[t - 0.2]*4*(t - 0.2) - UnitStep[t - 0.45]*4*(t - 0.45);
>
> Timing[NDSolve[{D[ x[t], t] + Ainv.( K1 + v[t]*K2).x[t] == B  ,
>        x[tStart] == Table[0. , {Dim}]} , x , { t, tStart, tEnd },
>      SolveDelayed -> True, StepMonitor :> ShowStatus[t] ] ]
>
> {137.341 Second, {{x -> InterpolatingFunction[{{0., 1.}}, <>]}}}
>
> Well, this is a factor of more than 1000, and what I would like to
> compute is at least dimension 100. If you use the step monitor, you may
> also notice that the time counts up, then stops for some time, counts
> up again...
>
>
> Does anyone have any suggestions for how to deal with this?
>
> Thanks in advance
>
> Christian
>

I managed to do it numerically only up to Dim=80 by rewriting the vector  
ODE as a system of scalar equations and using Method -> IDA:

In[1]:=
SeedRandom[1]
Dim = 80;

K1 = Table[Random[], {Dim}, {Dim}];
K2 = Table[Random[], {Dim}, {Dim}];
Ainv = Inverse[Table[Random[], {Dim}, {Dim}]];
B = Table[Random[], {Dim}];

v[t_] := UnitStep[t - 2/10]*4*(t - 2/10) -
   UnitStep[t - 45/100]*4*(t - 45/100);

Module[
   {Lf = Array[x, {Dim}], x},
   x = Evaluate[Through[Lf[#]]]&;
   NDSolve[Thread /@
     {x'[t] + Ainv.(K1 + v[t]*K2).x[t] == B,
      x[0] == Table[0, {Dim}]},
     Lf, {t, 0, 1},
     Method -> IDA, SolveDelayed -> True]
]; // Timing

Out[8]=
{175.625 Second, Null}

In[9]:=
MaxMemoryUsed[]

Out[9]=
695334616
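
As an aside, the Lf/Through construction above is just a compact way of
turning the single vector ODE into Dim scalar equations that NDSolve can
digest. A small sketch of what it expands to (toy dimension 3, not part of
the timing above):

Module[{Lf = Array[x, {3}], x},
   x = Evaluate[Through[Lf[#]]]&;
   (*now x[t] expands to {x[1][t], x[2][t], x[3][t]}, so Thread can
     split each vector equation into a list of scalar equations*)
   Thread /@ {x'[t] == {1, 2, 3}, x[0] == {0, 0, 0}}
]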

This might be a case where the piecewise handling routines of Mathematica
5.1 do more harm than good, because if we choose a different v[t], say
v[t_] = (Erf[10(t - 13/40)] + 1)/2, which is very close to the original
v[t], then NDSolve handles the equation with Dim=80 much more easily: both
the time and the memory requirements are significantly lower.
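
(As a quick check of how close the two functions are, one can overlay them,
using the v[t] defined above; vSmooth is just my name for the Erf version,
it is not part of the original code.)

vSmooth[t_] = (Erf[10(t - 13/40)] + 1)/2;
Plot[{v[t], vSmooth[t]}, {t, 0, 1}]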

However, if Ainv.K1 and Ainv.K2 commute or, equivalently, if K1.Ainv.K2 ==  
K2.Ainv.K1 (Ainv is nonsingular), then the solution can be written out  
explicitly by analogy with the solution of the scalar equation x'[t]  
+ a[t]*x[t] == b. We still won't be able to find the matrix exponentials  
symbolically, but we can use them to approximate the solution.
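
In explicit form: with V[t] = Integrate[v[tau], {tau, 0, t}] and x[0] == 0,
the commutativity condition lets us use MatrixExp[Ainv.(K1*t + K2*V[t])] as
an integrating factor, so the solution should be

   x[t] == MatrixExp[-Ainv.(K1*t + K2*V[t])] .
             Integrate[MatrixExp[Ainv.(K1*tau + K2*V[tau])].B, {tau, 0, t}]

which is exactly what f[t], g[t] and ig[t] compute numerically in the code
below.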

Let's take K2 = -2*K1, which satisfies the above condition:

In[1]:=
(*starting with a fresh kernel*)
SeedRandom[1]
Dim = 80;

K1 = Table[Random[], {Dim}, {Dim}];
K2 = -2*K1;
Ainv = Inverse[Table[Random[], {Dim}, {Dim}]];
B = Table[Random[], {Dim}];

v[t_] := UnitStep[t - 2/10]*4*(t - 2/10) -
   UnitStep[t - 45/100]*4*(t - 45/100);

Lsol = Module[
   {Lf = Array[x, {Dim}], x, V, f, df, g, gv, ig, Fv, dFv},
   x = Evaluate[Through[Lf[#]]]&;
   V[t_] = Integrate[v[tau], {tau, 0, t}, Assumptions -> t > 0];
   f[t_] := f[t] = MatrixExp[-Ainv.(K1*t + K2*V[t])];
   df[t_] := df[t] = -Ainv.(K1 + K2*v[t]).f[t];
   g[t_] := g[t] = MatrixExp[Ainv.(K1*t + K2*V[t])].B;
   gv[t_?NumericQ, i_] := g[t][[i]];
   ig[t_] := ig[t] = NIntegrate[
     Evaluate@ Table[gv[tau, i], {i, Dim}],
     {tau, 0, Min[t, 1/5], Min[t, 9/20], t}];
   Fv[t_?NumericQ, i_] := (f[t].ig[t])[[i]];
   dFv[t_?NumericQ, i_] := (df[t].ig[t] + f[t].g[t])[[i]];
   Table[Interpolation[
       Table[{t, {Fv[t, i], dFv[t, i]}}, {t, 0., 1., .01}]],
     {i, Dim}]
]; // Timing

Out[8]=
{25.266 Second, Null}

In[9]:=
MaxMemoryUsed[]

Out[9]=
19204344

Some jumping through hoops is required because we cannot declare the
arguments of NIntegrate as vectors, so the integrand has to be supplied as
a list of scalar components. An interesting observation is that
Interpolation seems to work much better than FunctionInterpolation in this
case.
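
In isolation the workaround looks something like this (a toy sketch with
made-up names vec and comp, not taken from the code above):

vec[t_] := {Sin[t], Cos[t], Exp[-t]};     (*some vector-valued integrand*)
comp[t_?NumericQ, i_] := vec[t][[i]];     (*scalar wrapper per component*)
NIntegrate[Evaluate@Table[comp[tau, i], {i, 3}], {tau, 0, 1}]
(*roughly {0.4597, 0.8415, 0.6321}*)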

This method scales much better, and the agreement between the two  
solutions is reasonably good:

Lsol2 = Module[
   {Lf = Array[x, {Dim}], x},
   x = Evaluate[Through[Lf[#]]]&;
   NDSolve[Thread /@
     {x'[t] + Ainv.(K1 + v[t]*K2).x[t] == B,
      x[0] == Table[0, {Dim}]},
     Lf, {t, 0, 1},
     Method -> IDA, SolveDelayed -> True]
];

Plot[Module[
        {Lf = Array[x, {Dim}], x},
        x = Evaluate[Through[Lf[#]]]&;
        Norm[Through[Lsol[t]] - (x[t] /. Lsol2[[1]])]
      ],
   {t, 0, 1}, PlotRange -> All]

As to why NDSolve issues the NDSolve::ndfdmc warning when the equation is
given in vector form with SolveDelayed set to False: this is most likely
due to incorrect preprocessing of the input. For example, the next two
examples have well-defined solutions, but NDSolve gives a warning in the
first case and simply crashes the kernel in the second:

In[10]:=
NDSolve[
   {x''[t].x''[t] == 1,
    x''[t].{-1, 1} == 0,
    x[0] == {1, 1}, x'[0] == {0, 0}},
   x, {t, 0, 1}]

NDSolve::overdet: There are fewer dependent variables, {x[t]}, than
equations, so the system is overdetermined.

Out[10]=
NDSolve[{Derivative[2][x][t] . Derivative[2][x][t] == 1,  
Derivative[2][x][t] . {-1, 1} == 0, x[0] == {1, 1}, Derivative[1][x][0] ==  
{0, 0}}, x, {t, 0, 1}]

NDSolve[
   {x''[t].x''[t] + x[t].x[t] == 0,
    x[0] == {1}, x'[0] == {0}},
   x, {t, 0, 1},
   SolveDelayed -> True]
(*crashes the kernel*)

It is interesting to compare this with the behaviour of previous versions
of Mathematica: in version 5.0, DSolve could return all kinds of weird
output when the equations were given as lists. Now DSolve just generates
the warning message DSolve::nolist, but NDSolve has to work with vector
equations, and the processing of vector input is still far from perfect:

In[11]:=
NDSolve[
   {{x'[t], y'[t]} == {1, 1},
    {x[0], y[0]} == {0, 0}},
   {x, y}, {t, 0, 1}]

NDSolve::underdet: There are more dependent variables, {x[t], y[t]}, than  
equations, so the system is underdetermined.

Out[11]=
NDSolve[{{Derivative[1][x][t], Derivative[1][y][t]} == {1, 1}, {x[0],  
y[0]} == {0, 0}}, {x, y}, {t, 0, 1}]

In the last case the problem can be resolved simply by applying Thread.
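
For instance (a sketch of that fix, in the same style as the Thread /@
trick used earlier):

NDSolve[Thread /@
    {{x'[t], y'[t]} == {1, 1},
     {x[0], y[0]} == {0, 0}},
   {x, y}, {t, 0, 1}]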

Maxim Rytin
m.r at inbox.ru

