MathGroup Archive 2011


Re: why extending numbers by zeros instead of dropping precision is a good idea

  • To: mathgroup at
  • Subject: [mg117865] Re: why extending numbers by zeros instead of dropping precision is a good idea
  • From: DrMajorBob <btreat1 at>
  • Date: Sat, 2 Apr 2011 02:41:37 -0500 (EST)

Using machine arithmetic OR using Block[{$MinPrecision = 20, $MaxPrecision  
= 20},...] limits the digits retained in a result, so that important  
information is lost (needlessly) when the iterated function is a  
contraction -- in which case precision should grow, not remain static.
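Here is a minimal sketch of that loss in Python, with IEEE doubles standing in for Mathematica's machine reals (the mechanism is the same binary64 arithmetic):

```python
import math

# Take 55 successive square roots of 2.0 in IEEE double arithmetic.
# The true value after k steps is 2^(2^-k), i.e. 1 + ln(2)*2^-k for
# large k; once that deviation falls below half an ulp of 1.0, the
# result rounds to exactly 1.0 and squaring cannot recover the start.
x = 2.0
for _ in range(55):
    x = math.sqrt(x)

print(x)               # exactly 1.0
print(x ** (2 ** 55))  # 1.0, not 2.0 -- the information is gone
```

This is the machine-double half of Daniel's example quoted below: the digits that would let you invert the process are simply not retained.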

With an arbitrary-precision starting point, and with precision left free to  
increase as it will, a contraction like Sqrt does very well.

BUT. Mathematica uses pessimistic precision estimates for functions more  
complicated than Sqrt, hence misses the fact that some functions ARE  
contractions, causing information to be lost needlessly again, and causing  
Newton iterations to fail when they shouldn't.

For that situation, something like Block[{$MinPrecision = 20,  
$MaxPrecision = 20},...] MAY save the day, if a 20-digit result is  
sufficient... which wasn't true (in the sense of being able to invert the  
process and get the starting number back) for Nest[Sqrt, 2., 55] or for  
Block[{$MinPrecision = 20, $MaxPrecision = 20},NestList[s[#, 1/2] &,  
2.0`20, iters]]. The end results of the iterations were accurate to many  
decimals, but not to enough of them to allow Last[res]^(2^iters) ==  
First[res]. That could fail for many (maybe all) contractions, I suppose.
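The fixed-precision failure can be sketched in Python's Decimal, with a 20-digit context standing in for Block[{$MinPrecision = 20, $MaxPrecision = 20}, ...] and Daniel's 70 iterations to make the breakdown unambiguous:

```python
from decimal import Decimal, getcontext

# Fix precision at 20 significant digits, mimicking the Block above.
getcontext().prec = 20

# After 70 square roots the true value is 1 + ln(2)*2^-70 ~ 1 + 6e-22,
# far below half an ulp of 1 at 20 digits, so it rounds to exactly 1.
x = Decimal(2)
for _ in range(70):
    x = x.sqrt()
print(x)  # exactly 1.0000000000000000000

# Inversion by squaring 70 times therefore returns 1, not 2.
y = x
for _ in range(70):
    y = y * y
print(y)
```

The only difference from Mathematica's Block result is cosmetic: significance arithmetic reports the inverted value as having no correct digits at all (Out[401]), while fixed-precision Decimal silently hands back the wrong number.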

Hence, we can fail because we use machine arithmetic and need more  
precision; because we fix precision and need more; because we want to  
invert the iterations but haven't saved enough digits; or -- having dodged  
those problems -- because a function IS a contraction, but Mathematica  
doesn't know it. x - f[x]/f'[x] will often lose precision when it  
shouldn't. (Or so I recall, without an example off the top of my head.)
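For contrast, the Newton map itself is well behaved in plain machine arithmetic -- the same map RJF iterates in the message quoted below, sketched here in Python floats (IEEE doubles):

```python
# Newton's map x - f(x)/f'(x) for f(x) = x^2 - 2, in IEEE doubles.
# It converges in a handful of steps and then sits at (or oscillates
# within an ulp or two of) the fixed point, even over 100 iterations.
def newton_step(x):
    return x - (x * x - 2.0) / (2.0 * x)

x = 1.0
for _ in range(100):
    x = newton_step(x)

print(x)  # ~1.4142135623730951, good to ~16 digits
```

The cancellation in x^2 - 2 is exactly what makes the step small near the root; it is not, by itself, a source of instability here.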

Is it infeasible for Mathematica to compute the derivative of the iterated  
function (however complicated it might be), and use more accurate  
precision estimates?
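To first order, such an estimate would just be err_out ~= |f'(x)| * err_in. A hypothetical sketch of that bookkeeping for Sqrt (this is NOT Mathematica's actual significance-arithmetic algorithm, only the first-order model the question is gesturing at):

```python
import math

# Hypothetical derivative-based error propagation: for f = sqrt,
# f'(x) = 1/(2*sqrt(x)) < 1 whenever x > 1/4, so the propagated
# absolute error SHRINKS on every step -- precision should grow.
x, err = 2.0, 1e-20          # start: 2, known to ~20 digits
for _ in range(55):
    err = err / (2.0 * math.sqrt(x))  # err *= |f'(x)| at the input
    x = math.sqrt(x)

print(err)  # far below the starting 1e-20: precision was gained
```

A pessimistic estimator that ignores |f'| < 1 would instead hold err fixed (or grow it), which is precisely the needless information loss described above.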

If that were done, RJF's earlier example, where the derivative of the  
function is one and the value should not change at all, could still lose  
precision, because that function is not a contraction; it sits in the gray  
area. The derivative could also be Complex with a norm of one. In either  
case convergence may or may not be achieved, and pessimism may be  
justified when adjusting precision.
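The complex gray-area case can be sketched with a rotation, z -> w*z with |w| = 1 (derivative of unit modulus), in Python complex doubles:

```python
import cmath

# Iterate multiplication by a unit-modulus factor. A perturbation is
# neither damped nor amplified: its magnitude rides along essentially
# unchanged, so there is no precision to be gained and a cautious
# (static) precision estimate is defensible.
w = cmath.exp(0.1j)                    # |w| = 1
z_a, z_b = 1.0 + 0.0j, 1.0 + 1e-10j   # two starts, 1e-10 apart
for _ in range(1000):
    z_a, z_b = w * z_a, w * z_b

print(abs(z_b - z_a))  # still ~1e-10 after 1000 steps
```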

Have I committed "representation error", or is that a fair(ish) summary?


On Fri, 01 Apr 2011 02:32:18 -0500, Daniel Lichtblau <danl at> wrote:  

> Richard Fateman wrote:
>> It is occasionally stated that subtracting nearly equal quantities from
>> each other is a bad idea and somehow unstable or results in noise. (JT
>> Sardus said it on 3/29/2011, for example.)
>>   This is not always true; in fact it may be true hardly ever.
>>   It is, for example, the essence of Newton iteration.
>> Here's simple example.  If you have an approximation x for the square
>> root of n, then
>> r[x,n]  gives you a better approximation where
>> r[x_,n_]:=  x- (x^2-n)/(2*x).
>> Let's try it out.
>> x=1.0
>> r[x,2]
>> r[%,2]
>> r[%,2]
>> r[%,2]
>>     gets us to 1.41421, which is pretty good.
>> If we try
>> Do[x = r[x, 2], {100}]
>> we see that x is still 1.41421.
>> That's using machine hardware floats.
>> Now let's try it with significance arithmetic, using more precision
>> x=1.0`20
>> two=2.0`20
>> r[x,two]
>>    .... this gives (a high-precision version of) 1.5
>> Let's see if we can get a really good approximation for sqrt(2).
>> And print out the results each iteration
>> Do[x = r[x, two];Print[x], {100}]
>> Oh dear, we get lower and lower accuracy and precision, eventually
>> resulting in a division by zero.  This loop converges to the square root
>> of 2 being Indeterminate.
>> Now this is no surprise to some people, but it may be to some others.
>> Indeed one could say "there is no bug there, Mathematica is doing
>> exactly what it is documented to do."  Nevertheless, some other people
>> might find it surprising. Maybe even distressing.   If you figure out
>> what is going on  (it is not that complicated...)... you see:
>> It is the consequence of not computing x^2-n,  the difference of two
>> nearly equal quantities, accurately.
>> Now you can avoid this problem by setting $MinPrecision to (say) 20.
>> I know that, and maybe you knew it also.  But if you didn't know it
>> before, maybe now you do.
>> If you have sufficient history of this newsgroup available you can see
>> discussion from Jan 30, 2001.
>> I don't know if it is a broken record or a golden oldie;  but it was
>> prompted by JT Sardus' comment.
>> RJF
> Perhaps this will only serve to sow further confusion, but here is the
> flip side, so to speak. It comes directly from a question that appeared
> last night in a different Mathematica-related venue:
> I'll show the parts below that are relevant to this discussion. Well,
> that I think are relevant...
> What we do in this example is a reversal, so to speak, of above. We
> iterate square roots. In the process we actually gain in precision and
> accuracy. Below I will iterate taking square roots 55 times, beginning
> with the value 2 at 20 digits of precision.
> s[x_, n_] := x^n
> two = 2.0`20;
> In[373]:= iters = 55;
> res = NestList[s[#, 1/2] &, two, iters];
> Last[res]
> Last[res]^(2^iters)
> Out[375]= 1.000000000000000019238698982791549888
> Out[376]= 2.0000000000000000000
> That was all Sweetness and Light (well, maybe not quite what Matthew
> Arnold had in mind).
> Now we do this with 2 as a machine double.
> In[377]:= machinetwo = 2.0;
> In[378]:= iters = 55;
> res = NestList[s[#, 1/2] &, machinetwo, iters];
> Last[res]
> Last[res]^(2^iters)
> Out[380]= 1.
> Out[381]= 1.
> Bad outcome, in a sense. It's explained in one of the responses to that
> Stack Overflow question, available from the link above.
> The problematic aspect to the computation, in this case, is fixed
> precision arithmetic. Not because Mathematica's significance arithmetic
> does error tracking but rather simply because it allows for precision to
> go up, when justified. I'll show using bigfloats but in fixed precision.
> I use more iterations to make quite clear that the result is bad. One
> can check that in default Mathematica arithmetic this would still have
> worked just fine.
> In[398]:= iters = 70;
> Block[{$MinPrecision = 20, $MaxPrecision = 20},
>    res = NestList[s[#, 1/2] &, two, iters]];
> Last[res]
> Last[res]^(2^iters)
> Out[400]= 1.0000000000000000000
> Out[401]= 0``-1.6531076715578628
> Daniel Lichtblau
> Wolfram Research

DrMajorBob at
