
MathGroup Archive 2011


Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120028] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: "Oleksandr Rasputinov" <oleksandr_rasputinov at hmamail.com>
  • Date: Wed, 6 Jul 2011 05:40:49 -0400 (EDT)
  • References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net>

On Tue, 05 Jul 2011 10:17:49 +0100, slawek <slawek at host.pl> wrote:

> "Kevin J. McCann" <kjm at KevinMcCann.com> wrote in the newsgroup
> message ius7b6$30t$1 at smc.vnet.net:
>> The answer to this puzzle is that the N[2.0,20] is 2.0, not
>> 2.00000000... Try N[2,20] and all is well. I think that when you put 2.0
>> in you have already limited yourself to machine precision, and N[2.0,20]
>> is then just machine accuracy.
>
> It is still a-bug-and-a-feature.
> And this bug makes Mathematica nearly useless in numerical computations.
> "MS Windows Calculator" is much more reliable!
>
> The number of written digits is NEITHER the precision NOR the accuracy.
> Mathematica treats 2.0 as 2.0 +- 0.1, but that is not the proper way to
> handle numbers.
>

In fact:

In[1]:=
Interval[2.0] // InputForm

Out[1]//InputForm =
Interval[{1.9999999999999998, 2.0000000000000004}]

While:

In[2]:=
Interval[2.0``1] // InputForm

Out[2]//InputForm =
Interval[{1.875`1.2730012720637347, 2.125`1.327358934386329}]
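The first result is not specific to Mathematica: wrapping a machine real in Interval pads it by one unit in the last place (ulp) on each side, and the neighbouring IEEE doubles can be inspected directly. A minimal sketch in Python (chosen here because the bounds are a property of binary64 doubles, not of any particular system):

```python
import math

# The doubles immediately below and above 2.0; these are exactly the
# endpoints Mathematica reports for Interval[2.0].
lo = math.nextafter(2.0, -math.inf)  # one ulp below 2.0
hi = math.nextafter(2.0, math.inf)   # one ulp above 2.0

print(lo)  # 1.9999999999999998
print(hi)  # 2.0000000000000004
```

Note the asymmetry: below 2.0 the spacing of doubles is 2^-52, above it 2^-51, since 2.0 sits on a power-of-two boundary.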

Precision (which, as defined by Mathematica, means relative uncertainty) 
and accuracy (absolute uncertainty) are expressed as annotations after the  
number. In the special case of a number with a decimal point and no  
annotation, the number is taken to be a machine precision real. I agree 
that these definitions and the notational convention chosen by Mathematica  
are strange. However, there is nothing "improper" about it as a choice of  
formalism--at least, this is no worse a design choice than for Mathematica  
to have standardized on decimal notation for input and presentation of  
numerical values rather than binary as it uses internally.
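Concretely, Accuracy is -log10 of the absolute uncertainty and Precision is -log10 of the relative uncertainty, so Precision = Accuracy + log10(|x|). A small sketch of that bookkeeping (the function names are illustrative, not Mathematica's):

```python
import math

def accuracy(x, dx):
    # Digits correct to the right of the decimal point:
    # -log10 of the absolute uncertainty dx.
    return -math.log10(dx)

def precision(x, dx):
    # Total correct significant digits:
    # -log10 of the relative uncertainty dx/|x|.
    return -math.log10(dx / abs(x))

# For 2.0 carrying an absolute uncertainty of 0.1 (roughly 2.0``1):
print(accuracy(2.0, 0.1))   # 1.0
print(precision(2.0, 0.1))  # about 1.301 = 1 + log10(2)
```

This is why the Out[2] above shows precision annotations near 1.3 on a number entered with accuracy 1.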

The purpose of this approach is as a crude but often adequate  
approximation to interval arithmetic, whereby these (approximations of) 
errors are carried through arithmetic operations using first-order  
algebraic methods. When functions (such as N) that pay attention to  
Precision and Accuracy (by Mathematica's definitions) see them decreasing,  
they increase the working precision so as to avoid numerical instability
being expressed in the final result. This is by no means intended to be 
rigorous; it is merely a heuristic, but one that comes at little cost and  
works in many cases. Of course, if a user's own code treats this  
approximation as somehow sacrosanct and ignores the precision adjustments  
necessary during the calculation while taking the final answer as correct,  
it is more likely that the approximation will have fallen apart somewhere  
down the line.
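The "first-order algebraic methods" amount to the usual linearized error propagation, df ~ |f'(x)| dx. A toy Python sketch of that bookkeeping (not Mathematica's implementation, just the idea):

```python
# First-order error propagation: carry an uncertainty alongside the
# value and scale it by the derivative at each step.
def propagate(f, dfdx, x, dx):
    """Return f(x) together with the linearized uncertainty |f'(x)| * dx."""
    return f(x), abs(dfdx(x)) * dx

# Squaring 2.0 +- 0.1: the relative error doubles, so 4.0 +- 0.4.
y, dy = propagate(lambda t: t * t, lambda t: 2 * t, 2.0, 0.1)
print(y, dy)  # 4.0 0.4
```

Like significance arithmetic itself, this is only a first-order estimate: it ignores curvature and any correlation between operands, which is exactly where the heuristic "falls apart" in pathological cases.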

If you don't like significance arithmetic, you have (at least) two other
options at hand: either work in fixed precision ($MinPrecision = 
$MaxPrecision = prec) or use interval arithmetic. These have their own 
drawbacks, of course (most notably that Mathematica tacitly assumes all 
intervals are uncorrelated), but your hand isn't forced either way and you  
may even use all three methods simultaneously if you wish. Alternatively
you may program your own, more accurate algebraic or Monte Carlo error  
propagation methods if you prefer.
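The Monte Carlo option is easy to sketch: sample the input distribution, push the samples through the computation, and read the output spread off the results. A minimal Python version (illustrative names, normal errors assumed):

```python
import math
import random

def monte_carlo(f, x, dx, n=10_000, seed=0):
    """Propagate an uncertainty dx through f by sampling: draw n inputs
    from N(x, dx), apply f, and return the sample mean and standard
    deviation of the outputs."""
    rng = random.Random(seed)
    ys = [f(rng.gauss(x, dx)) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var)

mean, sd = monte_carlo(lambda t: t * t, 2.0, 0.01)
print(mean, sd)  # close to 4.0 +- 0.04, the first-order estimate 2*x*dx
```

Unlike significance arithmetic this handles nonlinearity and, with joint sampling, correlated inputs, at the cost of many function evaluations.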

> I know that it is a common mistake to treat 2.0 as "not an integer
> number" and/or an "exact" number, but 2.0 is an integer number AND also
> a rational number AND also a real number AND also a complex number. And
> 2.0 is simply 1 + 1 + 0/10. Therefore, as you see, there is no
> "rounding", "limited precision", "error" or "uncertainty". It is only a
> matter of the notation of decimal fractions. And decimal fractions are
> exact. No "tolerance" is indicated in any way by this notation. Thus it
> is a bug. A nasty, big, fat bug in the core of Mathematica.
>
> Even from a "CS view", 2.0 is translated to an IEEE representation with
> a 53-bit mantissa. Nobody declares float x = 2.0000000000 to inject the
> floating-point two into a code.
>
> slawek
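On the IEEE point, 2.0 is indeed exactly representable in binary64 (53-bit significand, 52 bits stored), which is easy to check directly; a quick Python sketch:

```python
import struct

# 2.0 in IEEE 754 binary64: sign 0, biased exponent 1024 (= 1 + 1023),
# significand bits all zero -- the value is exact, with no rounding.
bits = struct.pack('>d', 2.0)
print(bits.hex())   # 4000000000000000
print((2.0).hex())  # 0x1.0000000000000p+1
```

The disagreement in this thread is therefore not about whether 2.0 is stored exactly (it is), but about what uncertainty, if any, the notation "2.0" should be taken to imply.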

