Re: Using NDSolve in NonlinearFit
- To: mathgroup at smc.vnet.net
- Subject: [mg48863] Re: Using NDSolve in NonlinearFit
- From: ab_def at prontomail.com (Maxim)
- Date: Sat, 19 Jun 2004 04:31:28 -0400 (EDT)
- References: <cap36v$c82$1@smc.vnet.net> <capcqu$eu2$1@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com
"Jens-Peer Kuska" <kuska at informatik.uni-leipzig-de> wrote in message news:<capcqu$eu2$1 at smc.vnet.net>...
> Hi,
>
> what about:
> fun[omega_?NumericQ, x_?NumericQ] :=
>   Module[{t, sol},
>     sol = NDSolve[
>       {y1'[t] == y2[t],
>        y2'[t] == -omega^2*y1[t],
>        y1[0] == 0, y2[0] == omega},
>       {y1[t], y2[t]},
>       {t, 0, x}];
>     y1[t] /. sol[[1]] /. t -> x]
>
> data = Table[{x, Sin[2*x]}, {x, 0, 2Pi, 2Pi/256}];
>
> FindFit[data, fun[omega, x], {{omega, 2.2}}, x]
>
> Regards
> Jens
>
>
In this particular case we can speed up the computation by solving for
f[omega] once and then simply evaluating the resulting
InterpolatingFunction at different values of x:
In[1]:=
Module[{fun, f, data},
  fun[omega_?NumericQ, x_?NumericQ] := f[omega][x];
  f[omega_] := f[omega] =
    y1 /. First@NDSolve[
      {y1'[t] == y2[t], y2'[t] == -omega^2*y1[t],
       y1[0] == 0, y2[0] == omega},
      {y1, y2}, {t, 0, 2Pi}];
  data = Table[{x, Sin[2x]}, {x, 0, 2Pi, 2Pi/256}];
  FindFit[data, fun[omega, x], {{omega, 2.2}}, x]
] // Timing
Out[1]=
{7.281 Second, {omega -> 2.0000001}}
Note that we cannot pass f[omega][x] to FindFit directly; we have to
add the extra definition fun[omega_, x_] := f[omega][x]. With the
default optimization method FindFit needs to evaluate derivatives of
the model, and D[f[omega][x], omega] comes out as 0. If we use
{{omega, 2.2, 2.3}} instead of {{omega, 2.2}}, FindFit switches to a
derivative-free method, the derivatives won't be needed, and
f[omega][x] works just as well, as in the sketch below.
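Here is a minimal self-contained sketch of that derivative-free
variant; the ?NumericQ guard on f is an addition here, so that
f[omega][x] stays unevaluated while omega is symbolic:

f[omega_?NumericQ] := f[omega] =
  y1 /. First@NDSolve[
    {y1'[t] == y2[t], y2'[t] == -omega^2*y1[t],
     y1[0] == 0, y2[0] == omega},
    {y1, y2}, {t, 0, 2Pi}];

data = Table[{x, Sin[2x]}, {x, 0, 2Pi, 2Pi/256}];

(* two starting values request a derivative-free search, so the
   InterpolatingFunction can appear in the model directly *)
FindFit[data, f[omega][x], {{omega, 2.2, 2.3}}, x]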
It's interesting that of the following two definitions,
f[a_] = NIntegrate[Sin[a x], {x, 0, a}]
f[a_?NumericQ] := NIntegrate[Sin[a x], {x, 0, a}]
the first one will be more efficient in computations where f'[a] is
used. With Set, NIntegrate returns unevaluated for symbolic a
(emitting a harmless NIntegrate::inumr message at definition time), so
the stored body is a symbolic NIntegrate expression and Mathematica
can apply its built-in rules for the derivative of NIntegrate; in the
second case f is a numerical black box, and Mathematica has to resort
to numerical differentiation.
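A quick illustration of the difference (a sketch; f1 and f2 are
throwaway names for the two definitions above):

f1[a_] = NIntegrate[Sin[a x], {x, 0, a}];   (* Set: fires NIntegrate::inumr
                                               once, stores the symbolic
                                               NIntegrate expression *)
f2[a_?NumericQ] := NIntegrate[Sin[a x], {x, 0, a}];

f1'[2.]  (* differentiated symbolically via the built-in rules,
            then evaluated numerically *)
f2'[2.]  (* stays unevaluated: the guard blocks symbolic evaluation,
            so there is no body to differentiate *)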
As a not-quite-related issue, we can see that the numerical
optimization functions share a similar syntax, with one exception:
FindRoot and FindMinimum don't handle vector-valued functions the same
way:
In[2]:=
FindMinimum[Norm[x], {x, {1, 1}}]

<FindMinimum::lstol warning>

Out[2]=
{6.280369834735101*^-16,
 {x -> {4.440892098500626*^-16, 4.440892098500626*^-16}}}

In[3]:=
FindRoot[Norm[x], {x, {1, 1}}]

FindRoot::nveq: The number of equations does not match
    the number of variables in FindRoot[Norm[x], {x, {1, 1}}].

Out[3]=
FindRoot[Norm[x], {x, {1, 1}}]
It looks like FindRoot doesn't take into account that a function of a
vector argument can map not only R^n -> R^n but also R^n -> R.
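For comparison, here is a sketch of the square R^2 -> R^2 case that
FindRoot does accept (vf and the starting point are made up for
illustration):

vf[x_?VectorQ] := {x[[1]]^2 + x[[2]]^2 - 1, x[[1]] - x[[2]]};

(* two equations, two components in the vector variable: the counts
   match, and FindRoot finds one of the roots +-{1, 1}/Sqrt[2] *)
FindRoot[vf[x], {x, {1., 2.}}]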
Maxim Rytin
m.r at inbox.ru