MathGroup Archive 2002


Significance Arithmetic

Please allow me to summarize what I've learned in the recent discussion,
and retract my claim that Accuracy, Precision, and SetAccuracy are
useless.


Numbers come in three varieties: machine precision, infinite precision,
and "bignum" or "bigfloat".  Bignums and bigfloats (apparently
synonymous; the Help Browser calls them arbitrary-precision numbers)
are the result of using N[expr, k] or SetAccuracy[expr, k] where k is
bigger than machine precision.  If k <= machine precision, the result
is a machine-precision number, even if you know the expression isn't
that precise.


If, when you use N or SetAccuracy as described above, the expression
contains undefined symbols, you get an expression with all its numeric
parts replaced by bignums of the indicated precision.  When the symbols
are defined later, if ANY of them are machine precision, the expression
is computed with machine arithmetic - with the side effect that
coefficients that were originally infinite precision are now only
machine precision.  That is, x^2 might have become
x^2.0000000000000000000000000000000000 but later became x^2., for
example.
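A short hypothetical session showing that coercion:

    In[1]:= e = SetAccuracy[3 x^2, 25];   (* the 3 and the 2 become 25-digit bignums *)

    In[2]:= Precision[ e /. x -> 1.5 ]    (* one machine number forces machine arithmetic *)
    Out[2]= 16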


If all the symbols have been set to bignum or infinite-precision
values, the computation is done with precision taken into account, and
the result has a Precision or Accuracy that makes sense.  In all other
cases, Precision returns Infinity for entirely infinite-precision
expressions and 16 for everything else.
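For example, again assuming version-4-era behavior:

    In[1]:= Precision[ N[Sqrt[2], 30] + N[Sqrt[3], 30] ]
            (* close to 30 -- all bignums, so precision is tracked *)

    In[2]:= Precision[ Sqrt[2] + Sqrt[3] ]    (* entirely infinite precision *)
    Out[2]= Infinity

    In[3]:= Precision[ Sqrt[2] + 2.5 ]        (* everything else *)
    Out[3]= 16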


When one of the experts says "significance arithmetic", that's what
they mean: using SetAccuracy or N to give things more than 16 digits,
leaving no machine-precision numbers anywhere in the expression, and
using Accuracy or Precision - which ARE meaningful in that case - to
judge the result.  (Meaningful, that is, if all your inputs really do
have more than 16 digits of precision.)
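As a hypothetical illustration of judging a result this way, subtract
from a 30-digit input an exact rational that agrees with it to about 16
places:

    In[1]:= a = N[Sqrt[2], 30];

    In[2]:= Precision[ a - 1414213562373095/10^15 ]
            (* roughly 13 -- the cancellation ate about 16 of the 30 digits *)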


You can't use "significance arithmetic" to determine how much precision
a result has if your inputs have only 16 or 15 or 2 digits of
precision.  In the example we've been looking at, you can give the
inputs MORE accuracy than you really believe they have and still get
back 0 digits from Precision at the end, so there are clearly no
trustworthy digits when you use the original inputs either.  If an
expression were on the razor's edge, having lost only a few digits of
precision, that test wouldn't work so well.
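A hypothetical version of that test: give an input far more accuracy
than you believe it has, and see how many digits survive a
cancellation-prone computation.

    In[1]:= x = SetAccuracy[2., 40];                  (* pretend we know 40 digits *)

    In[2]:= Precision[ Sqrt[x + 10^-30] - Sqrt[x] ]
            (* roughly 10 -- so with honest 16-digit inputs, nothing would survive *)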


Oddly enough, searching for "significance arithmetic" in the Help
Browser doesn't take you to any of that.  Instead, it takes you to
Interval arithmetic, a more sophisticated method, which may give a more
accurate gauge of how much precision you really have, and WILL deal
with machine-precision numbers and numbers with even less precision.
It does a very good job on the example.  However, it isn't very
suitable for complex numbers, matrices, etc., and NSolve and NIntegrate
probably can't handle it either.
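For example, with exact endpoints (machine-number endpoints also work
and are rounded outward):

    In[1]:= Interval[{19/10, 21/10}]^2
    Out[1]= Interval[{361/100, 441/100}]

    In[2]:= Interval[{1.9, 2.1}] + Interval[{-0.1, 0.1}]   (* machine endpoints are fine too *)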


Daniel Lichtblau promises that all this will be clearer in the next
release.


