Re: why extending numbers by zeros instead of dropping precision is a good idea
- To: mathgroup at smc.vnet.net
- Subject: [mg117846] Re: why extending numbers by zeros instead of dropping precision is a good idea
- From: Bill Rowe <readnews at sbcglobal.net>
- Date: Fri, 1 Apr 2011 02:35:55 -0500 (EST)
 
On 3/31/11 at 4:06 AM, fateman at eecs.berkeley.edu (Richard Fateman)
wrote:
>It is occasionally stated that subtracting nearly equal
>quantities from each other is a bad idea and somehow unstable
>or results in noise. (JT Sardus said it on 3/29/2011, for example.)
>This is not always true; in fact, it may hardly ever be true.
>It is, for example, the essence of Newton iteration.
>Here's a simple example.  If you have an approximation x for the
>square root of n, then
>r[x,n]  gives you a better approximation where
>r[x_,n_]:=  x- (x^2-n)/(2*x).
>Let's try it out.
>x=1.0
>r[x,2]
>r[%,2]
>r[%,2]
>r[%,2]
>gets us to 1.41421, which is pretty good.
>If we try
>Do[x = r[x, 2], {100}]
>we see that x is still 1.41421.
>That's using machine hardware floats.
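[For readers who want to reproduce the machine-float behavior outside Mathematica, here is a Python sketch of the same iteration; Python is used only for illustration, and the function name mirrors the quoted definition.]

```python
import math

def r(x, n):
    # Newton step for sqrt(n): x - (x^2 - n)/(2 x),
    # mirroring the quoted Mathematica definition r[x_, n_]
    return x - (x * x - n) / (2 * x)

x = 1.0
for _ in range(100):
    x = r(x, 2)

# x now agrees with sqrt(2) to machine precision, no matter how
# many extra iterations are run
```

[In hardware floats the subtraction x^2 - n does lose relative accuracy near the root, but the correction it feeds into is itself tiny, so the iterate stays put to within a unit in the last place.]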
>Now let's try it with significance arithmetic, using more precision
>x=1.0`20
>two=2.0`20
>r[x,two]
>.... this gives (a high-precision version of) 1.5
>Let's see if we can get a really good approximation for sqrt(2).
>And print out the results each iteration
>Do[x = r[x, two];Print[x], {100}]
>Oh dear, we get lower and lower accuracy and precision,
>eventually resulting in a division by zero.  This loop
>converges to the square root of 2 being Indeterminate.
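[The steady loss of precision can be imitated with a toy model. The sketch below is my own construction, not Mathematica's actual algorithm: it attaches a first-order absolute-error bound to each value and propagates it through every operation, which is roughly the idea behind significance arithmetic. Because the absolute errors of x and of the correction term add at each subtraction, the bound roughly doubles per iteration, so the tracked precision falls by about log10(2) = 0.3 digits per step; run long enough, it reaches zero, the analogue of the Indeterminate result.]

```python
import math

class Sig:
    """Toy significance arithmetic: a float plus a propagated
    first-order absolute-error bound (a rough model only)."""
    def __init__(self, value, err):
        self.value = value
        self.err = err

    def precision(self):
        # significant decimal digits implied by the error bound
        return -math.log10(self.err / abs(self.value))

    def __sub__(self, other):
        # absolute errors add under subtraction; this is where
        # cancellation destroys *relative* accuracy
        return Sig(self.value - other.value, self.err + other.err)

    def __mul__(self, other):
        return Sig(self.value * other.value,
                   abs(self.value) * other.err + abs(other.value) * self.err)

    def __truediv__(self, other):
        q = self.value / other.value
        return Sig(q, self.err / abs(other.value)
                      + abs(q) * other.err / abs(other.value))

def newton_step(x, n):
    # x - (x^2 - n)/(2 x), with error propagation
    return x - (x * x - n) / (Sig(2.0, 0.0) * x)

x = Sig(1.0, 1e-20)   # analogue of 1.0`20
n = Sig(2.0, 2e-20)   # analogue of 2.0`20
for _ in range(40):
    x = newton_step(x, n)
    # x.precision() drifts steadily downward even though
    # x.value has long since converged
```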
>Now this is no surprise to some people, but it may be to some
>others. Indeed one could say "there is no bug there,
>Mathematica is doing exactly what it is documented to do." 
>Nevertheless, some other people might find it surprising. Maybe
>even distressing. If you figure out what is going on (it is
>not that complicated), you see:
>It is the consequence of not computing x^2-n,  the difference
>of two nearly equal quantities, accurately.
>Now you can avoid this problem by setting $MinPrecision to (say) 20.
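[The cancellation itself is easy to exhibit. In Python, again just for illustration since the effect is generic to any fixed-precision floating point, compare the rounded value of x^2 - 2 with the exactly computed value for the very same x:]

```python
import math
from fractions import Fraction

x = math.sqrt(2)                # correct to about 16 digits
computed = x * x - 2            # difference of two nearly equal floats
exact = Fraction(x) ** 2 - 2    # exact rational arithmetic on the same x
rel_err = abs((Fraction(computed) - exact) / exact)
# rel_err comes out on the order of tens of percent: although x is as
# accurate as a double can be, the computed x^2 - 2 has essentially no
# correct significant digits; the information was rounded away when
# x^2 was formed
```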
An alternative would be to use FixedPoint. That is
In[27]:= FixedPoint[r[#, 2] &, 1.0`20]
Out[27]= 1.414213562373095049
In[28]:= Precision[%]
Out[28]= 18.2351
In[29]:= FixedPoint[r[#, 2.0`20] &, 1.0`20]
Out[29]= 1.41421356237309505
In[30]:= Precision[%]
Out[30]= 18.0154
In[31]:= N[Sqrt[2], 20]
Out[31]= 1.4142135623730950488
It seems FixedPoint is doing something behind the scenes that
avoids the problem you describe above.
Note, I am not arguing that Mathematica is doing either the
right thing or the wrong thing when Do is used as you did.
Simply, my experience with Mathematica tells me I generally
get better behavior when I use functions intended to do exactly
what I want than when I attempt to code my own version.