MathGroup Archive 2011


Re: why extending numbers by zeros instead of dropping precision is a good idea

  • To: mathgroup at smc.vnet.net
  • Subject: [mg117826] Re: why extending numbers by zeros instead of dropping precision is a good idea
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Fri, 1 Apr 2011 02:32:18 -0500 (EST)

Richard Fateman wrote:
> It is occasionally stated that subtracting nearly equal quantities from 
> each other is a bad idea and somehow unstable or results in noise. (JT 
> Sardus said it on 3/29/2011, for example.)
> 
>   This is not always true; in fact it may be true hardly ever.
> 
>   It is, for example, the essence of Newton iteration.
> 
> Here's a simple example.  If you have an approximation x for the square 
> root of n, then
> 
> r[x,n]  gives you a better approximation where
> 
> r[x_,n_]:=  x- (x^2-n)/(2*x).
> 
> Let's try it out.
> 
> x=1.0
> r[x,2]
> r[%,2]
> r[%,2]
> r[%,2]
>     gets us to 1.41421, which is pretty good.
> If we try
> Do[x = r[x, 2], {100}]
> we see that x is still 1.41421.
> 
> That's using machine hardware floats.
> 
> Now let's try it with significance arithmetic, using more precision
> 
> x=1.0`20
> two=2.0`20
> r[x,two]
> 
>    .... this gives (a high-precision version of) 1.5
> 
> Let's see if we can get a really good approximation for sqrt(2).
> 
> And print out the results each iteration
> 
> Do[x = r[x, two];Print[x], {100}]
> 
> Oh dear, we get lower and lower accuracy and precision, eventually 
> resulting in a division by zero.  This loop converges to the square root 
> of 2 being Indeterminate.
> 
> Now this is no surprise to some people, but it may be to some others. 
> Indeed one could say "there is no bug there, Mathematica is doing 
> exactly what it is documented to do."  Nevertheless, some other people 
> might find it surprising. Maybe even distressing.  If you figure out 
> what is going on (it is not that complicated), you see:
> 
> It is the consequence of not computing x^2-n,  the difference of two 
> nearly equal quantities, accurately.
> 
> Now you can avoid this problem by setting $MinPrecision to (say) 20.
> 
> I know that, and maybe you knew it also.  But if you didn't know it 
> before, maybe now you do.
> 
> If you have sufficient history of this newsgroup available you can see 
> discussion from Jan 30, 2001.
> I don't know if it is a broken record or a golden oldie;  but it was 
> prompted by JT Sardus' comment.
> 
> RJF
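
The quoted Newton iteration behaves the same way in any machine-double
environment. Here is a minimal sketch in plain Python rather than
Mathematica (the function name `r` follows the post; everything else is
my own naming, not from the original session):

```python
# Newton's method for sqrt(n), following the quoted definition
#   r[x_, n_] := x - (x^2 - n)/(2*x)
# run with machine doubles, like Do[x = r[x, 2], {100}].
import math

def r(x, n):
    """One Newton step toward sqrt(n)."""
    return x - (x * x - n) / (2 * x)

x = 1.0
for _ in range(100):
    x = r(x, 2)

# The iterates settle at sqrt(2) to machine precision; the cancellation
# in x*x - n is harmless here because doubles keep full (fixed) precision.
print(x)
```

With doubles the iteration may flip between the two neighboring
representable values of sqrt(2), but it never drifts away, which is the
point of the quoted example.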

Perhaps this will only serve to sow further confusion, but here is the 
flip side, so to speak. It comes directly from a question that appeared 
last night in a different Mathematica-related venue:

http://stackoverflow.com/questions/5493517/a-very-strange-question-in-mathematica

I'll show the parts below that are relevant to this discussion. Well, 
that I think are relevant...

What we do in this example is a reversal, so to speak, of the above. We 
iterate square roots. In the process we actually gain in precision and 
accuracy. Below I will iterate taking square roots 55 times, beginning 
with the value 2 at 20 digits of precision.

s[x_, n_] := x^n
two = 2.0`20;

In[373]:= iters = 55;
res = NestList[s[#, 1/2] &, two, iters];
Last[res]
Last[res]^(2^iters)

Out[375]= 1.000000000000000019238698982791549888

Out[376]= 2.0000000000000000000

That was all Sweetness and Light (well, maybe not quite what Matthew 
Arnold had in mind).

Now we do this with 2 as a machine double.

In[377]:= machinetwo = 2.0;

In[378]:= iters = 55;
res = NestList[s[#, 1/2] &, machinetwo, iters];
Last[res]
Last[res]^(2^iters)

Out[380]= 1.

Out[381]= 1.

Bad outcome, in a sense. It's explained in one of the responses to that 
Stack Overflow question, available from the link above.
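
The machine-double collapse can be reproduced outside Mathematica. A
minimal Python sketch, assuming only the standard library: 2^(1/2^55) is
closer to 1 than the nearest double above 1, so the iterates hit exactly
1.0 and the information needed to square back to 2 is gone.

```python
# Iterating square roots 55 times on a machine double, analogous to
# In[377]-Out[381] above (a Python sketch, not the original session).
import math

x = 2.0
for _ in range(55):
    x = math.sqrt(x)

print(x)             # collapses to exactly 1.0
print(x ** 2**55)    # 1.0, not 2: the round trip fails
```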

The problematic aspect to the computation, in this case, is fixed 
precision arithmetic. Not because Mathematica's significance arithmetic 
does error tracking but rather simply because it allows for precision to 
go up, when justified. I'll show using bigfloats but in fixed precision. 
I use more iterations to make quite clear that the result is bad. One 
can check that in default Mathematica arithmetic this would still have 
worked just fine.

In[398]:= iters = 70;
Block[{$MinPrecision = 20, $MaxPrecision = 20},
   res = NestList[s[#, 1/2] &, two, iters]];
Last[res]
Last[res]^(2^iters)

Out[400]= 1.0000000000000000000

Out[401]= 0``-1.6531076715578628
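
The same fixed-precision failure can be sketched with Python's `decimal`
module (an analogue, not Mathematica's bignum arithmetic; the helper
`roundtrip` is my own). One caveat: `decimal` does no error tracking, so
where Mathematica's significance arithmetic reports a zero-precision
result, `decimal` silently returns 1 instead of 2. Clamping precision at
20 digits loses the answer; spare guard digits rescue it.

```python
# Fixed-precision analogue of
#   Block[{$MinPrecision = 20, $MaxPrecision = 20}, ...]
# using Python's decimal module (hypothetical helper, not from the post).
from decimal import Decimal, getcontext

def roundtrip(prec, iters=70):
    """Take iters square roots of 2, then square back, at prec digits."""
    getcontext().prec = prec
    x = Decimal(2)
    for _ in range(iters):
        x = x.sqrt()     # 2^(1/2^70) rounds to exactly 1 at 20 digits
    for _ in range(iters):
        x = x * x
    return x

bad = roundtrip(20)      # round trip returns 1, not 2
good = roundtrip(50)     # extra digits: round trip recovers ~2

print(bad)
print(good)
```

The contrast makes Daniel's point mechanically: the 20-digit significance
arithmetic run succeeded only because precision was allowed to rise above
20 when justified, which a clamped working precision forbids.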


Daniel Lichtblau
Wolfram Research


