
MathGroup Archive 2000

Re: RE:Working Precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg23976] Re: [mg23928] RE:Working Precision
  • From: David Withoff <withoff at wolfram.com>
  • Date: Sun, 18 Jun 2000 03:01:00 -0400 (EDT)
  • Sender: owner-wri-mathgroup at wolfram.com

> Richard Fateman showed how arbitrary precision arithmetic can
> produce strange results.  I have some more strange behavior.

These examples are really just different ways of disguising some
basic issues in numerical analysis, most of which are not specific
to Mathematica.  All of these behaviors are entirely reasonable.
Any system of arithmetic can be made to show behaviors that might
be surprising to some people, and machine arithmetic can be a lot
more amusing in this regard than Mathematica arithmetic.  16-digit
machine arithmetic, for example, routinely must make up digits to
fill in the required 16 digits, a fact which can lead to all sorts
of mathematically indefensible effects.
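
That point about machine arithmetic inventing digits is easy to demonstrate; here is a quick illustration (Python is used as a portable stand-in here and below — the same behavior appears in any IEEE double-precision environment):

```python
# IEEE 754 double precision carries about 16 significant decimal digits,
# and the trailing digits it reports are artifacts of the binary
# representation, not mathematically meaningful.

# 0.1 and 0.2 are not exactly representable in binary, so their sum
# differs from 0.3 in the made-up trailing digits.
print(repr(0.1 + 0.2))        # '0.30000000000000004'
print(0.1 + 0.2 == 0.3)       # False

# Printing 0.1 to 20 places exposes digits that were never entered.
print(f"{0.1:.20f}")          # 0.10000000000000000555
```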

The first example, however, is not very elaborate, and since it is
just showing basic propagation of error, this behavior really
shouldn't be unexpected:

> Richard demonstrated that  (x == x+1)  returns True after
> evaluating the Do loop below. Well you can also get f[x] defined
> for lots of values you don't expect!
> 
> In[1]:=
>  x=1.11111111111111111111;
>  Do[x=2*x-x, {100}];
>  Clear[f];
>  f[x]=29;
>  {f[-5.2],f[-2.1],f[4.3],f[8.2]}
> 
> Out[5]=
>  {f[-5.2],29,29,f[8.2]}
> 
> -------------------------------
> 
> The way to fix this problem is to change the value of $MinPrecision
> as I do below. Then my definition for (f) doesn't apply to f[-2.1],
> f[4.3]. This will also ensure  (x==x+1)  returns False. I haven't
> checked but a positive value for $MinPrecision might solve the
> problem Bernd Brandt had with NIntegrate.
> 
> In[6]:=
>  $MinPrecision=1.0;
>  x=1.11111111111111111111;
>  Do[x=2*x-x, {100}];
>  Clear[f];
>  f[x]=29;
>  {f[-5.2],f[-2.1],f[4.3],f[8.2]}
> 
> Out[11]=
>  {f[-5.2],f[-2.1],f[4.3],f[8.2]}

This behavior isn't necessarily a problem, since it is all correct,
and changing $MinPrecision isn't necessarily a reasonable way
to "fix" it.

To understand the behavior of this example it is useful to recall
that inexact numbers correspond to intervals.  With 3-digit decimal
arithmetic, for example, the number 1.23 is the best available
3-digit decimal representation for any number between 1.225 and
1.235, so mathematically this number represents the interval between
those points.  For any inexact number, the numerical "roundoff
error" of the number gives the width of the corresponding interval.
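
The 3-digit example can be checked mechanically; a sketch using Python's decimal module (my illustration, not part of the original post):

```python
from decimal import Decimal, ROUND_HALF_UP

def to_3_digits(x: str) -> Decimal:
    """Round a number near 1 to three significant decimal digits."""
    return Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Every number strictly between 1.225 and 1.235 rounds to the same
# 3-digit representation, so "1.23" really denotes that whole interval.
for s in ["1.2251", "1.2299", "1.23", "1.2349"]:
    assert to_3_digits(s) == Decimal("1.23")
```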

The Do[x=2*x-x, {100}] example generates a value of x that corresponds
to an interval around 1 with a width of approximately 2.  This
interval arises through application of standard propagation of error
in which the error in the result is estimated by accumulating upper
bounds for the error in each step, with all errors assumed to be
uncorrelated.  In this example, which repeatedly adds and subtracts
the same number, all of the errors are perfectly correlated, so this
calculated upper bound to the error is significantly larger than the
actual error.  Most practical calculations aren't like this.  In
most practical calculations, the assumption of uncorrelated errors
is a good assumption, or at least it is better than any realistic
alternative assumption.
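
A toy sketch of this bookkeeping (a deliberate simplification of significance arithmetic, not Mathematica's actual algorithm, and ignoring any floor on precision): each inexact number carries an absolute error bound, bounds add across a sum or difference, and the bound for 2*x - x therefore triples on every pass, even though the value itself never moves:

```python
# Toy propagated-error bookkeeping (a simplification; Mathematica's
# significance arithmetic differs in detail).  Each inexact number is
# (value, abs_error); errors in a difference are assumed uncorrelated,
# so the upper bounds simply add.

def sub(a, b):
    return (a[0] - b[0], a[1] + b[1])    # bounds add

def scale(k, a):                         # k is exact
    return (k * a[0], abs(k) * a[1])

# x entered with ~20 significant digits: error bound ~0.5e-19
x = (1.11111111111111111111, 0.5e-19)
for _ in range(100):
    x = sub(scale(2, x), x)              # 2*x - x: bound triples each pass

print(x[0])   # the value never changes (the errors are perfectly correlated)
print(x[1])   # but the uncorrelated-error bound has grown by 3**100
```

The gap between the huge final bound and the true error (still ~0.5e-19) is exactly the gap between the uncorrelated-error assumption and this perfectly correlated calculation.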

With that value of x, the rule for f[x] will apply to any argument
in the interval associated with the value of x.

By setting $MinPrecision to a larger value you can prevent the
estimated error from accumulating beyond a certain value, but
without analyzing the calculation there is no mathematical
justification for doing such a thing.  It might be justified in
this example, which is contrived so that the numerical errors in
each step do not accumulate, but resetting $MinPrecision is not
justified in general, and is not recommended.
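
The effect of such a floor can be sketched numerically (again a toy model, my reading rather than Mathematica's internals): since precision is roughly -log10 of the relative error, a floor of p digits caps the error bound at |value| * 10**(-p):

```python
# Toy illustration of a precision floor such as $MinPrecision
# (an assumption of this sketch, not Mathematica's actual algorithm).

def clamp_error(value, err, min_precision):
    """Enforce precision >= min_precision by capping the error bound
    at |value| * 10**(-min_precision)."""
    cap = abs(value) * 10.0 ** (-min_precision)
    return min(err, cap)

x = 1.1111111111111112          # value after Do[x = 2*x - x, {100}]
runaway = 2.6e28                # an uncontrolled propagated bound

# A floor of 0 digits caps the bound near |x|, giving an interval of
# width about 2, as described above.
print(clamp_error(x, runaway, 0.0))   # ~1.11

# $MinPrecision = 1.0 demands one surviving digit, shrinking the
# bound tenfold.
print(clamp_error(x, runaway, 1.0))   # ~0.111
```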

Your next example is unrelated to the previous examples.  It also
has a straightforward explanation.

> ---------------------------
> Now consider another example. Below Mathematica thinks (b1) has
> much better precision than (a1). This doesn't make sense, and
> Mathematica doesn't do much better using the default setting
> ($MinPrecision=0.0).
> 
> In[12]:=
>  a1=Exp[800.0]/Exp[800];
>  b1=a1+Pi-1;
>  SetOptions[Precision,Round->False];
>  (* Precision only has an option in Version 4. *);
>  {Precision[a1],Precision[b1]}
> 
> Out[15]=
>  {13.0515,25.2788}

With $MinPrecision set to zero, the accuracy of the zero generated
by a1-1 must be quite large to get a precision bigger than zero.
That large accuracy shows up in the precision of b1, which results
from adding that high-accuracy zero to the exact constant Pi.
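
The arithmetic behind that can be reconstructed from the quoted output, using the standard relation Precision[x] == Accuracy[x] + Log10[Abs[x]] (precision measures relative error, accuracy absolute error); the 25.2788 is taken from Out[15] above:

```python
import math

# b1 = (high-accuracy zero) + Pi inherits the zero's absolute error,
# and Pi is exact, so
#     Precision[b1] = Accuracy[zero] + log10(Pi).
# Back-solving from the quoted Out[15]:
precision_b1 = 25.2788
accuracy_of_zero = precision_b1 - math.log10(math.pi)
print(round(accuracy_of_zero, 4))   # ~24.78 digits to the right of the decimal
```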

Removing this minimum on precision gives the behavior in
your next example:

> The result above comes out much better if $MinPrecision is much less than
> zero. However, I can't understand why (b1) below still has slightly better
> precision than (a1).
> 
> In[16]:=
>  $MinPrecision=-Infinity;
>  a1=Exp[800.0]/Exp[800];
>  b1=a1+Pi-1;
>  {Precision[a1],Precision[b1]}
> 
> Out[19]=
>  {13.0502,13.5473}

and the reason for the precision of b1 being slightly higher than
the precision of a1 should be obvious upon recalling the definition
of precision.  Precision in Mathematica is a measure of relative
error.  If you make the number bigger without adding any error,
which is what happens when adding a positive exact constant (such
as Pi) to this positive number, the relative error goes down, so the
precision goes up.
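
This checks out against the quoted Out[19]: a1 is near 1 and b1 is near Pi, both carrying the same absolute error, so their precisions should differ by log10(Pi):

```python
import math

# Same absolute error at a larger magnitude means a smaller *relative*
# error, hence higher precision: the gap should be log10(Pi) ~ 0.497.
precision_a1 = 13.0502                   # quoted Out[19]
predicted_b1 = precision_a1 + math.log10(math.pi)
print(round(predicted_b1, 4))            # 13.5473, matching Out[19]
```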

Returning then to the first example, but with $MinPrecision=-Infinity:

> ----------------------------------
> Well, ($MinPrecision=-Infinity) allows a better result from the
> last example, but now the definitions for (f) that I considered
> earlier are even more strange.
> 
> In[20]:=
>  x=1.11111111111111111111;
>  Do[x=2*x-x, {100}] ;
>  Clear[f];
>  f[x]=29;
>  {f[-534.2],f[-2.1],f[4.3],f[815.2]}
> 
> Out[25]=
>  {29,29,29,29}

Yes, releasing the lower bound on precision widens the interval
associated with the value of x.
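
Concretely, in the toy error-propagation sketch (a simplification, not Mathematica's internals), removing the floor lets the tripling run unchecked through all 100 passes, and the resulting interval easily covers every argument in the quoted In[20]:

```python
# Toy bookkeeping (a simplification of significance arithmetic):
# with no floor on precision, the uncorrelated-error bound for
# 2*x - x triples on every one of the 100 passes.
err = 0.5e-19                     # ~20 entered digits of x
for _ in range(100):
    err = 2 * err + err           # err(2*x - x) = 2*err + err
x = 1.1111111111111112
print(err)                        # ~2.6e28
# Every argument from the quoted In[20] lies inside [x - err, x + err]:
for arg in (-534.2, -2.1, 4.3, 815.2):
    assert abs(arg - x) <= err
```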

And finally, yes, reading up on this subject is certainly recommended
for anyone who is interested in these details, and yes, the
following discussion is a good place to start:

> ----------------------
> A very good discussion of arbitrary precision arithmetic can
> be found in the Help Browser under: 
>   Getting Started/Demos
>      Demos
>         Numerics Report (near the bottom)

Although Mathematica has perhaps given wider exposure to the
topic of reliable arithmetic, this has been an active area of
research for decades, and there is extensive literature on the
subject.  If you are interested in this subject you might try
exploring the literature in a good technical library.  There are
very few questions about the behavior of this aspect of Mathematica
that can't be answered by a review of The Mathematica Book, study
of the above-mentioned Numerics Report, and a few minutes of
careful thought.

> --------------------
> Regards,
> Ted Ersek

Dave Withoff
Wolfram Research

