MathGroup Archive 2004

Re: Re: Re: A problem of numerical precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg52684] Re: [mg52651] Re: [mg52642] Re: [mg52610] A problem of numerical precision
  • From: Guofeng Zhang <guofengzhang at gmail.com>
  • Date: Thu, 9 Dec 2004 20:22:38 -0500 (EST)
  • References: <200412030815.DAA25377@smc.vnet.net>
  • Reply-to: Guofeng Zhang <guofengzhang at gmail.com>
  • Sender: owner-wri-mathgroup at wolfram.com

Based on the exact A and B, i.e., A = {{1,0}, {b,a}} and B = {{-1,1}, {-b,b}}
where a = 9/10 and b = -3*Sqrt[2]/10 INSTEAD OF b = -3*1.4142135623730950/10
or b = -3*1.4142135623730950``200/10, I proved analytically that the
trajectory oscillates instead of converging to a point. I hope my proof
is correct. :-)
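
(By the way, the same recursion can also be run in exact arithmetic as a
numerical cross-check of that result. This is only a rough sketch along
the lines of Daniel's f2 code quoted below, and it slows down as the
number of steps grows:)

delta = 1/100; a = 9/10; b = -3*Sqrt[2]/10;    (* exact b *)
A = {{1, 0}, {b, a}};
B = {{-1, 1}, {-b, b}};
f2[v_, x_] := UnitStep[Abs[v - x] - delta];

(* every iterate has entries of the form p + q*Sqrt[2] with rational p, q,
   so Simplify keeps them compact *)
step[v_] := Simplify[A.v + f2[v[[1]], v[[2]]]*B.v];
exact = NestList[step, {-delta/2, (a + b)*(-delta/2)}, 1000];

ListPlot[N[Take[exact, -200]]]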

I agree with you: it is the test in f that leads to the numerical
instability. I am currently doing research on chaos. As you may know,
it is usually hard to study a chaotic system analytically; more often
one has to resort to numerical analysis. The system above is at least
chaos-like, if not actually chaotic. The numerical problem above shows
that for this type of system, numerical analysis may not be very
reliable. Given this, what should one be cautious about when doing
this type of research?


On Tue, 7 Dec 2004 04:09:36 -0500 (EST), DrBob <drbob at bigfoot.com> wrote:
> Oops, I was using an exact A and B also. Correcting that changes details, but the 200-digit fixed-precision result still matches the machine-precision result -- in both cases the test in f oscillates, and the result of f is precisely the same for any iteration.
> 
> But f gives different results when we start at 200 digits and let precision atrophy.
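> 
> (Concretely, "fixed precision" here means pinning $MinPrecision and
> $MaxPrecision so that precision cannot atrophy -- just a sketch, reusing
> the delta, a, b, A, B, f2, and M from Daniel's code below:)
> 
> init200 = SetPrecision[{-delta/2, (a + b)*(-delta/2)}, 200];
> fix200 = Block[{$MinPrecision = 200, $MaxPrecision = 200},
>     NestList[(A.# + f2[#[[1]], #[[2]]]*B.#) &, init200, M - 1]];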
> 
> Bobby
> 
> 
> 
> On Sun, 05 Dec 2004 12:47:41 -0600, DrBob <drbob at bigfoot.com> wrote:
> 
> > You used exact versions of A and B in all your trials, where the OP (I think) and I (definitely) defined A and B based on the approximation to b.
> >
> > I believe Peter Pein did the opposite thing -- he used the lower-precision version of A and B when using the higher-precision b to define it[[1]].
> >
> > So we all got different results, just as we could expect to do.
> >
> > Short of using exact arithmetic throughout, I think the fixed-precision method is authoritative. The result was indistinguishable from the machine-precision values, for a very good reason -- errors don't actually accumulate in this calculation, as Mathematica's precision arithmetic assumes they do. Using fixed-precision OR machine-precision sidesteps that arithmetic and prevents the test in f from going wrong.
> >
> > If we reach an eigenvector corresponding to A's first eigenvalue, we should STAY there (unless the test in f comes True). If we're anywhere else, A.{x,y} tends to diminish error, not increase it. Even B.{x,y} decreases error for many inputs, and the term isn't usually added anyway. If it isn't added when it shouldn't be, no problem!
> >
> > Hence numerical instability comes almost exclusively from the test in f, and that's affected far less by accumulated error than by artificially diminished precision.
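> >
> > (A quick check of the eigenvalue claim above, using the exact b -- a sketch:)
> >
> > Eigensystem[{{1, 0}, {-3 Sqrt[2]/10, 9/10}}]
> > (* eigenvalues 1 and 9/10: {1, -3 Sqrt[2]} spans the eigenvalue-1
> >    direction, and the complementary {0, 1} direction is contracted
> >    by a factor of 9/10 at each step *)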
> >
> > Bobby
> >
> > On Sun, 5 Dec 2004 02:08:38 -0500 (EST), Daniel Lichtblau <danl at wolfram.com> wrote:
> >
> >> Guofeng Zhang wrote:
> >>>
> >>>  Hi,
> >>>
> >>>  I met one problem when I do some iteration: By increasing numerical
> >>>  precision, the results are so different from the original one!  I
> >>>  don't know what went wrong, and hope to get some answers.
> >>>
> >>>  The code is
> >>>
> >>>  delta = 1/100;
> >>>  a = 9/10;
> >>>  b = -3*1.4142135623730950/10;
> >>>  A = {{1, 0}, {b, a}};
> >>>  B = {{-1, 1}, {-b, b}};
> >>>
> >>>  f[v_, x_] := If[Abs[(10^20)*v - (10^20)*x] > (10^20)*delta, 1, 0];
> >>>
> >>>  M = 3000;
> >>>  it = Table[0, {i, M}, {j, 2}];
> >>>  it[[1]] = {-delta/2, (a + b)*(-delta/2)};
> >>>
> >>>  For[i = 1, i < M,
> >>>    it[[i + 1]] = A.it[[i]] + f[it[[i, 1]], it[[i, 2]]]*B.it[[i]]; i++];
> >>>  temp = Take[it, -1000];
> >>>
> >>>  ListPlot[ temp ];
> >>>
> >>>  By evaluating this, I get an oscillating orbit. I got the same result
> >>>  using another system. However, if I increase the numerical precision by using
> >>>  b = -3*1.4142135623730950``200/10;
> >>>  in place of the original b, the trajectory converges to a fixed
> >>>  point instead of wandering around!
> >>>
> >>>  I don't know why. I am hoping for suggestions. Thanks a lot.
> >>>
> >>>  Guofeng
> >>>
> >>
> >>
> >> You are seeing the effect of plotting "zeros" of negative accuracy.
> >>
> >> Try it as below.
> >>
> >> delta = 1/100;
> >> a = 9/10;
> >> b = -3/10*Sqrt[2];
> >> A = {{1,0}, {b,a}};
> >> B = {{-1,1}, {-b,b}};
> >> f2[v_,x_] := UnitStep[Abs[v-x]-delta]
> >> M = 3000;
> >>
> >> itm = Table[0, {i,M}, {j,2}];
> >> itm[[1]] = N[{-delta/2, (a+b)*(-delta/2)}, MachinePrecision];
> >> Do[itm[[i+1]] = A.itm[[i]] + f2[itm[[i,1]],itm[[i,2]]]*B.itm[[i]],
> >>    {i,1,M-1}]
> >>
> >> it200 = Table[0, {i,M}, {j,2}];
> >> it200[[1]] = N[{-delta/2, (a+b)*(-delta/2)}, 200];
> >> Do[it200[[i+1]] = A.it200[[i]] + f2[it200[[i,1]],it200[[i,2]]]*B.it200[[i]],
> >>    {i,1,M-1}]
> >>
> >> it500 = Table[0, {i,M}, {j,2}];
> >> it500[[1]] = N[{-delta/2, (a+b)*(-delta/2)}, 500];
> >> Do[it500[[i+1]] = A.it500[[i]] + f2[it500[[i,1]],it500[[i,2]]]*B.it500[[i]],
> >>    {i,1,M-1}]
> >>
> >> (These could also be computed with FoldList).
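> >>
> >> For example, something like this (a sketch; the second FoldList argument
> >> is only a dummy index, so NestList would do equally well):
> >>
> >> it500f = FoldList[(A.#1 + f2[#1[[1]], #1[[2]]]*B.#1) &,
> >>            N[{-delta/2, (a + b)*(-delta/2)}, 500], Range[M - 1]];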
> >>
> >> Now look at the last elements, or take the Precision, of it500. You will
> >> find that fewer than 13 digits are correct. At lower initial precision
> >> you get what amount to indeterminate values (except we have some
> >> information bounding their magnitudes). Using machine arithmetic there
> >> is no way to tell from the result whether it is "correct" or just how
> >> far off it might be. It turns out that they are just fine in comparison
> >> to it500. So plots of itm and it500 are reliable.
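> >>
> >> That is, checks along these lines (a sketch):
> >>
> >> Precision[it500[[-1]]]      (* how many digits are still trusted at the end *)
> >> Max[Abs[N[it500] - itm]]    (* machine-precision run versus the 500-digit run *)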
> >>
> >> The last 1000+ values for it200 are a different matter. They all appear
> >> to be zero. For example:
> >>
> >> In[156]:= it200[[1888]] == {0,0}
> >> Out[156]= True
> >>
> >> A closer inspection reveals that they have negative accuracy. The
> >> 1888th, for example, could be anywhere up to around 262 digits in
> >> magnitude (and it gets worse as we move further along in it200).
> >>
> >> In[157]:= InputForm[it200[[1888]]]
> >> Out[157]//InputForm= {0``-262.03379085102995, 0``-261.84220176123233}
> >>
> >> So they plot as zeros, but that's not a very reliable indication of the
> >> actual values.
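> >>
> >> (Such entries can be flagged programmatically, for example along these lines:)
> >>
> >> Position[Map[Accuracy, it200, {2}], _?Negative]
> >> (* positions of entries whose accuracy has gone negative, i.e. whose
> >>    values carry no reliable digits at all *)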
> >>
> >>
> >> Daniel Lichtblau
> >> Wolfram Research
> >>
> >>
> >>
> >>
> >>
> >
> >
> >
> 
> --
> DrBob at bigfoot.com
> www.eclecticdreams.net
> 
>

