MathGroup Archive 2011


Re: Numerical accuracy/precision - this is a bug or a feature?

On 7/6/2011 2:39 AM, Oleksandr Rasputinov wrote:
> On Tue, 05 Jul 2011 10:17:49 +0100, slawek <slawek at>  wrote:
>> "Kevin J. McCann" <kjm at>  wrote in the newsgroup
>> message: ius7b6$30t$1 at
>>> The answer to this puzzle is that the N[2.0,20] is 2.0, not
>>> 2.00000000... Try N[2,20] and all is well. I think that when you put 2.0
>>> in you have already limited yourself to machine precision, and N[2.0, 20]
>>> is then just machine accuracy.
>> It is still a-bug-and-a-feature.
>> And this bug makes Mathematica nearly useless in numerical computations ...
>> "MS Windows Calculator" is much more reliable!
>> The number of written digits is NEITHER the precision NOR the accuracy ...
>> Mathematica treats 2.0 as 2.0 +/- 0.1, but this is not the proper way to
>> handle numbers.
> In fact:
> In[1] :=
> Interval[2.0] // InputForm
> Out[1]//InputForm =
> Interval[{1.9999999999999998, 2.0000000000000004}]
> While:
> In[2] :=
> Interval[2.0``1] // InputForm
> Out[2]//InputForm =
> Interval[{1.875`1.2730012720637347, 2.125`1.327358934386329}]
> Precision (which, as defined by Mathematica, means relative uncertainty)
> and accuracy (absolute uncertainty) are expressed as annotations after the
> number. In the special case of a number with a decimal point and no
> annotation, the number is taken to be a machine precision real. I agree
> that these definitions and the notational convention chosen by Mathematica
> are strange. However, there is nothing "improper" about it as a choice of
> formalism--at least, this is no worse a design choice than for Mathematica
> to have standardized on decimal notation for input and presentation of
> numerical values rather than binary as it uses internally.
> The purpose of this approach is as a crude but often adequate
> approximation to interval arithmetic, whereby these (approximations of)
> errors are carried through arithmetic operations using first-order
> algebraic methods. When functions (such as N) that pay attention to
> Precision and Accuracy (by Mathematica's definitions) see them decreasing,
> they increase the working precision so as to avoid numerical instability
> being expressed in the final result. This is by no means intended to be
> rigorous; it is merely a heuristic, but one that comes at little cost and
> works in many cases. Of course, if a user's own code treats this
> approximation as somehow sacrosanct and ignores the precision adjustments
> necessary during the calculation while taking the final answer as correct,
> it is more likely that the approximation will have fallen apart somewhere
> down the line.
> If you don't like significance arithmetic, you have (at least) two other
> options at hand: either work in fixed precision ($MinPrecision =
> $MaxPrecision = prec) or use interval arithmetic. These have their own
> drawbacks, of course (most notably that Mathematica tacitly assumes all
> intervals are uncorrelated), but your hand isn't forced either way and you
> may even use all three methods simultaneously if you wish. Alternatively
> you may program your own, more accurate algebraic or Monte Carlo error
> propagation methods if you prefer.
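
To make the contrast concrete, here is a small Python sketch (my own
illustration, not Mathematica's actual implementation; the function names
are mine) of the two ideas described above: first-order error propagation
of the kind significance arithmetic approximates, versus a Monte Carlo
estimate of the same uncertainty.

```python
import math
import random

# First-order propagation (the style of heuristic significance
# arithmetic approximates): for y = f(x) with absolute uncertainty dx,
# dy ~= |f'(x)| * dx.
def propagate_first_order(x, dx, f, dfdx):
    return f(x), abs(dfdx(x)) * dx

# Monte Carlo propagation: sample the input distribution, push the
# samples through f, and read the uncertainty off the output spread.
def propagate_monte_carlo(x, dx, f, n=100_000, seed=1):
    rng = random.Random(seed)
    ys = [f(rng.gauss(x, dx)) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var)

# Example: y = x^2 at x = 2.0 +/- 0.1 (cf. the 2.0 +/- 0.1 reading above)
y1, dy1 = propagate_first_order(2.0, 0.1, lambda t: t * t, lambda t: 2 * t)
y2, dy2 = propagate_monte_carlo(2.0, 0.1, lambda t: t * t)
print(y1, dy1)   # 4.0 0.4 (first-order estimate)
print(y2, dy2)   # close to 4.01 and 0.40 (Monte Carlo)
```

For this well-behaved function the two agree closely; the first-order
estimate falls apart exactly where the quoted text warns, i.e. when
higher-order terms or correlations between intermediate results matter.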


This is an excellent summary of Mathematica's approach to arithmetic on
numbers.  Unfortunately, many people come to Mathematica with their own
notions of numbers, accuracy, precision, and equality. These words are
redefined in non-standard ways in Mathematica, sometimes leading to
unfortunate situations: "unexplainable" behavior, confusion. Or worse,
erroneous results silently delivered and accepted as true by a user who
"knows" about precision, accuracy, floating-point arithmetic, etc.

WRI argues that this is a winning proposition. Perhaps Wolfram still
believes that someday all the world will use Mathematica for all
programming purposes, that everyone will accept his definitions of terms
like Precision and Accuracy, and that (see the separate thread on how to
write a mathematical paper) it will all be natural and consistent.
Or perhaps that people who want to hold to the standard usage will be
forced to use something like

  SetGlobalPrecision[prec_] := ($MinPrecision = $MaxPrecision = prec)

I believe this is routinely used by people who find Mathematica's
purportedly "user-friendly" amateurish error control to be hazardous.
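
For readers more familiar with other systems: Python's standard-library
decimal module offers a loosely analogous fixed-working-precision model
(my analogy, not anything Mathematica-specific). Every operation rounds
to exactly the requested number of significant digits, and no precision
tracking is attempted, which is the same spirit as pinning $MinPrecision
and $MaxPrecision to one value.

```python
from decimal import Decimal, getcontext

# Fix the working precision at 20 significant digits: all subsequent
# arithmetic rounds to exactly this many digits, with no attempt to
# track how much of the result is "reliable".
getcontext().prec = 20

two = Decimal("2.0")     # read exactly as written; no implied 2.0 +/- 0.1
print(two / Decimal(3))  # prints 0.66666666666666666667
```

The trade-off is the usual one for fixed precision: the user, not the
system, is responsible for judging how many of those 20 digits are
actually meaningful at the end of a long calculation.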


'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it
means just what I choose it to mean -- neither more nor less.'

'The question is,' said Alice, 'whether you can make words mean so many
different things.'

'The question is,' said Humpty Dumpty, 'which is to be master -- that's
all.'