Re: Significance Arithmetic
- To: mathgroup at smc.vnet.net
- Subject: [mg37011] Re: Significance Arithmetic
- From: Daniel Lichtblau <danl at wolfram.com>
- Date: Sun, 6 Oct 2002 05:32:59 -0400 (EDT)
- References: <000001c26bd9$e486f400$0300a8c0@HolyCow>
- Sender: owner-wri-mathgroup at wolfram.com
>Please allow me to summarize what I've learned in the recent discussion,
>and retract my claim that Accuracy, Precision, and SetAccuracy are useless.
>Numbers come in three varieties
The technical term is "flavors".
>- machine precision, Infinite precision,
>and "bignum" or "bigfloat". Bignums and bigfloats (synonymous?)
Actually bignums can also refer to integers too large to represent as
machine integers. But I tend to use "bignum" when I really mean
"bigfloat", and I suspect this sloppy practice may be common.
>aren't called that in the Help Browser, but they're the result of using
>N[expr,k] or SetAccuracy[expr,k] where k is bigger than machine precision.
>If k <= machine precision, the result is a machine precision number, even
>if you know the expression isn't that precise.
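A short illustration of that cutoff, as the behavior stood in version 4 (later releases may differ):

```mathematica
(* above $MachinePrecision, N produces a bignum *)
x = N[Pi, 30];
Precision[x]       (* 30 *)

(* at or below it, N falls back to a machine number, *)
(* which then claims 16 digits regardless of what you asked for *)
y = N[Pi, 10];
MachineNumberQ[y]  (* True *)
Precision[y]       (* 16 in version 4 *)
```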
>If, when you use N or SetAccuracy as described above, the expression
>contains undefined symbols, you get an expression with all its numerics
>replaced by bignums of the indicated precision. When the symbols are
>defined later, if ANY of them are machine precision, the expression is
>computed with machine arithmetic - with the side-effect that coefficients
>that originally were Infinite-precision are now only machine precision.
>That is, x^2 might have become x^2.0000000000000000000000000000000000
>but later became x^2., for instance.
I think this is correct in cases where all symbolic stuff gets replaced
by numeric values. In general there is a sort of coercion to lowest
precision, with the caveat that machine floats pollute everything.
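A sketch of that pollution, using SetPrecision on an expression containing a symbolic x (version 4 era behavior, as described above):

```mathematica
(* every numeric part, including the exponent 2, becomes a 30-digit bignum *)
expr = SetPrecision[x^2 + 1/3, 30];

(* substituting a bignum keeps significance arithmetic in play *)
Precision[expr /. x -> N[2, 30]]  (* about 30 *)

(* one machine float drags the whole result down to machine arithmetic *)
Precision[expr /. x -> 2.]        (* 16 in version 4 *)
```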
>If all the symbols have been set to "bignum" or Infinite precision values,
>the computation will be done taking precision into account, and the result
>has a Precision or Accuracy that makes sense. In all other cases,
>Precision returns Infinity for entirely Infinite-precision expressions
>and 16 for everything else.
I'm not sure I understand this last sentence. My interpretation:
"Computations that are exact will have infinite precision. Computations
in machine arithmetic will claim a precision of 16". If that is what you
are claiming, then yes, that's what Mathematica is doing (but see my
comments below).
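For example (version 4 behavior; the claimed 16 for machine results is exactly the pitfall in question):

```mathematica
Precision[2 + 2]         (* Infinity: exact arithmetic *)
Precision[2.0 + 2]       (* 16: machine arithmetic *)
Precision[N[2, 30] + 2]  (* about 30: significance arithmetic *)
```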
>When one of the experts says "significance arithmetic" that's what they
>mean - using SetAccuracy or N to give things more than 16 digits, leaving
>no machine precision numbers anywhere in the expression, and using Accuracy
>or Precision, which ARE meaningful in that case, to judge the result.
>(It's meaningful if all your inputs really have more than 16 digits of
>precision, that is.)
I'm as guilty as anyone else in this thread, perhaps more so, of being
too loose with the technical jargon. Also I am not certain what version
4 makes of SetAccuracy/SetPrecision in terms of significance arithmetic.
In the development kernel they will force everything in sight to have
the indicated precision, whether justified or not. This may well
introduce error even with exact input, e.g. in cases where intermediate
computations would require higher precision in order to get an end
result with the requested precision or accuracy. N, on the other hand,
will handle that and, except in pathological circumstances, will give a
result with the correct precision.
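One way to see the difference, assuming the forcing behavior of SetPrecision described above:

```mathematica
(* an exact quantity whose leading two dozen digits cancel *)
expr = Pi - 3141592653589793238462643/10^24;

(* N raises its working precision internally and delivers 20 good digits *)
N[expr, 20]

(* SetPrecision stamps 20 digits on each number first, so the
   subsequent cancellation destroys essentially all of them *)
SetPrecision[expr, 20]
```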
As another minor point, "arbitrary precision numbers" are simply
(tautologically?) numbers that may have arbitrarily large precision
(subject to software limitations). "Significance arithmetic" refers to a
particular model of manipulating such numbers with a mechanism for
tracking precision. There are other models, in particular fixed
precision arithmetic; that we use the former, by default, is an
occasional source of Sturm und Drang in this newsgroup. I'm sure the
distinction between arbitrary precision numbers and significance
arithmetic has at least minor relevance to this thread, and I imagine
I've helped to confuse the issue in some places by using the terms
interchangeably.
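To make the distinction concrete, the default significance model can be contrasted with fixed-precision arithmetic via the documented $MinPrecision/$MaxPrecision settings:

```mathematica
a = N[10^20 + 1/3, 30];  (* 30 digits of precision, so accuracy only ~10 *)
b = N[10^20, 30];

(* significance arithmetic: the cancelled digits are charged to the result *)
Precision[a - b]  (* about 10 *)

(* fixed-precision arithmetic: precision is clamped at 30, justified or not *)
Block[{$MinPrecision = 30, $MaxPrecision = 30}, Precision[a - b]]
```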
>You can't use "significance arithmetic" to determine how much precision a
>result has if your inputs have 16 or 15 or 2 digits of precision.
One can, if the numbers are really bignums (of low precision,
naturally). What one cannot do at present is create such low precision
numbers via N.
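For instance (version 4):

```mathematica
(* N at low requested precision falls back to a machine number... *)
MachineNumberQ[N[Pi, 5]]  (* True *)

(* ...but SetPrecision will create a genuine low precision bignum *)
z = SetPrecision[Pi, 5];
MachineNumberQ[z]         (* False *)
Precision[z]              (* 5 *)
```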
>In the example we've been looking at, you can give the inputs MORE accuracy
>than you really believe they have, and still get back 0 digits from
>Precision at the end, so there are clearly no trustworthy digits when
>you use the original inputs either. If an expression is on the razor's
edge, and has lost only a few digits of precision, that wouldn't work.
>Oddly enough, "significance arithmetic" in the Browser doesn't take you
>to any of that. Instead, it takes you to Interval arithmetic, a more
>sophisticated method, which may give a more accurate gauge of how much
>precision you really have, and WILL deal with machine precision numbers
>and numbers with even less precision. It does a very good job on the
>example. However, it isn't very suitable for Complex numbers, matrices,
>etc. NSolve and NIntegrate probably can't handle it, either.
I have filed a suggestion in-house that the documentation on
significance arithmetic take one to the section on arbitrary precision
numbers (3.1.5), as that would be more appropriate. Note that that
section, while primarily concerned with the significance arithmetic
model, also briefly mentions fixed precision bignum arithmetic.
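For completeness, a small interval example; note that Interval accepts machine-precision (or exact) endpoints:

```mathematica
(* rigorous bounds propagate through arithmetic *)
i = Interval[{1.9, 2.1}];
i^2                     (* roughly Interval[{3.61, 4.41}] *)

(* elementary functions track interval monotonicity *)
Sin[Interval[{0, Pi}]]  (* Interval[{0, 1}] *)
```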
>Daniel Lichtblau promises that all this will be clearer in the next release.
I'm not sure I'd go that far. What I will claim is that the distinction
between machine numbers and bignums will be more transparent to users.
At present if one does, say, N[number, prec], then one will get a
machine number if prec <= $MachinePrecision. We have made a change so that
this will no longer be the case. I am not prepared to go into details at
this time (sorry).
Perhaps more important for everyday use, and certainly more pertinent
to this thread, Precision will distinguish between bignums with 16
digits of precision and machine numbers. Again, I have to defer on details.
At the very least I think the pitfall of believing a claim of 16 digits
precision for machine numbers will be removed.