MathGroup Archive 2011

Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120104] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: "Oleksandr Rasputinov" <oleksandr_rasputinov at hmamail.com>
  • Date: Fri, 8 Jul 2011 04:54:28 -0400 (EDT)
  • References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net>

On Thu, 07 Jul 2011 12:33:49 +0100, Richard Fateman  
<fateman at cs.berkeley.edu> wrote:

> On 7/6/2011 2:39 AM, Oleksandr Rasputinov wrote:
>>
>> Precision (which, as defined by Mathematica, means relative
>> uncertainty) and accuracy (absolute uncertainty) are expressed as
>> annotations after the number. In the special case of a number with a
>> decimal point and no annotation, the number is taken to be a machine
>> precision real. I agree that these definitions and the notational
>> convention chosen by Mathematica are strange. However, there is
>> nothing "improper" about it as a choice of formalism--at least, this
>> is no worse a design choice than for Mathematica to have standardized
>> on decimal notation for input and presentation of numerical values
>> rather than binary as it uses internally.
>>
>> The purpose of this approach is as a crude but often adequate
>> approximation to interval arithmetic, whereby these (approximations
>> of) errors are carried through arithmetic operations using first-order
>> algebraic methods. When functions (such as N) that pay attention to
>> Precision and Accuracy (by Mathematica's definitions) see them
>> decreasing, they increase the working precision so as to avoid
>> numerical instability being expressed in the final result. This is by
>> no means intended to be rigorous; it is merely a heuristic, but one
>> that comes at little cost and works in many cases. Of course, if a
>> user's own code treats this approximation as somehow sacrosanct and
>> ignores the precision adjustments necessary during the calculation
>> while taking the final answer as correct, it is more likely that the
>> approximation will have fallen apart somewhere down the line.
>>
>> If you don't like significance arithmetic, you have (at least) two
>> other options at hand: either work in fixed precision ($MinPrecision =
>> $MaxPrecision = prec) or use interval arithmetic. These have their own
>> drawbacks, of course (most notably that Mathematica tacitly assumes
>> all intervals are uncorrelated), but your hand isn't forced either way
>> and you may even use all three methods simultaneously if you wish.
>> Alternatively, you may program your own, more accurate algebraic or
>> Monte Carlo error propagation methods if you prefer.
>>
>> ....
>>
>
> This is an excellent summary of Mathematica's approach to arithmetic
> on numbers.  Unfortunately many people come to use Mathematica with
> their own notions of numbers, accuracy, precision, and equality. These
> words are redefined in a non-standard way in Mathematica, sometimes
> leading to unfortunate situations: "unexplainable" behavior, confusion,
> or worse, erroneous results silently delivered and accepted as true by
> a user who "knows" about precision, accuracy, floating-point
> arithmetic, etc.

Apart from the curious definitions given to Precision and Accuracy (one  
imagines ApproximateRelativeError and ApproximateAbsoluteError were  
considered too verbose), I do not think Mathematica's way of doing things  
is particularly arbitrary or confusing in the broader context of  
multiprecision arithmetic. Mathematically, finite-precision numbers
represent distributions over finite sets of representable values, and
they therefore possess quantized upper and lower bounds, as well as
quantized expectation values. Strictly, then, any two such distributions
cannot be said to be equal if they represent numbers of different
precision: they are then distributions over two entirely different sets,
irrespective of whether their expectation values may be equal.
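
For instance (a minimal sketch; the displayed forms may vary between
versions):

    In[1]:= Precision[1.5]            (* decimal point, no annotation *)
    Out[1]= MachinePrecision

    In[2]:= x = 1.5`20;               (* `20 marks 20 digits of precision *)

    In[3]:= {Precision[x], Accuracy[x]}
    Out[3]= {20., 19.8239}            (* Accuracy = Precision - Log10[Abs[x]] *)

    In[4]:= Precision[1.5``20]        (* ``20 marks 20 digits of accuracy *)
    Out[4]= 20.1761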

However, this definition is not very useful numerically, and in practice
we are usually satisfied that two finite-precision numbers are equal if
their expectation values agree to within quantization error. Note that the
question of true equality for numbers of different precisions, i.e. the  
means of the distributions being equal, is impossible to resolve in  
general given that the means (which represent the exact values) are not  
available to us. Heuristically, the mean should be close, in a relative  
sense, to the expectation value, hence the tolerance employed by Equal;  
the exact magnitude of this tolerance may perhaps be a matter for debate  
but either way it is set using Internal`$EqualTolerance, which takes a  
machine real value indicating the number of decimal digits' tolerance that  
should be applied, i.e. Log[2]/Log[10] times the number of least  
significant bits one wishes to ignore. This setting has been discussed in  
this forum at least once in the past: see e.g.  
<http://forums.wolfram.com/mathgroup/archive/2009/Dec/msg00013.html>.
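
As a small sketch of the mechanism (the default shown below corresponds
to seven bits, i.e. 7 Log[2]/Log[10], in the versions I have checked):

    In[1]:= Internal`$EqualTolerance
    Out[1]= 2.10721                   (* last 7 bits are ignored by Equal *)

    In[2]:= 1. == 1. + 2^-51          (* 2 ulps apart: within tolerance *)
    Out[2]= True

    In[3]:= Block[{Internal`$EqualTolerance = 0.}, 1. == 1. + 2^-51]
    Out[3]= False                     (* exact comparison once tolerance is 0 *)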

Note that if one wishes to be more rigorous when determining equality,  
SameQ operates in a similar manner to Equal for numeric comparands, except  
that its tolerance is 1 (binary) ulp. This is also adjustable, via  
Internal`$SameQTolerance.
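
For example (again a sketch; the default value corresponds to one bit):

    In[1]:= Internal`$SameQTolerance
    Out[1]= 0.30103                   (* Log[2]/Log[10]: one bit *)

    In[2]:= 1. === 1. + 2^-52         (* 1 ulp apart at machine precision *)
    Out[2]= True

    In[3]:= 1. === 1. + 2^-51         (* 2 ulps apart *)
    Out[3]= False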

In regard to erroneous results: undoubtedly they are a possibility.
However, one would expect an approximate first-order method for dealing
with error propagation to do better, in the majority of cases, than a
zeroth-order method such as working in fixed precision. As stated
previously, if one desires more accurate approximations, one is in any
case free to implement them; although, given the above, it should be
clear that all that is generally possible within the domain of
finite-precision numbers is a reasonable approximation, unless other
information is available from which to make stronger deductions. I will
also note that none of the example "problems" in this thread has
anything directly to do with significance arithmetic; they stem instead
from Mathematica's (admittedly confusing) choice of notation, combined
with an apparent misunderstanding of multiprecision arithmetic in
general.
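
To make the contrast concrete, here is a standard cancellation example
(a sketch, not taken from the examples in this thread):

    In[1]:= a = N[10^20, 25];         (* 25 significant digits *)

    In[2]:= {a + 1 - a, Precision[a + 1 - a]}
    Out[2]= {1.0000, 4.69897}         (* the ~20 cancelled digits are tracked *)

    In[3]:= b = N[10^20];             (* machine precision: no tracking *)

    In[4]:= b + 1 - b
    Out[4]= 0.                        (* the cancellation passes silently *)

Under fixed precision ($MinPrecision = $MaxPrecision = 25), the same
computation returns a result claiming the full 25 digits, with no
indication that cancellation occurred.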

>
> WRI argues that this is a winning proposition. Perhaps Wolfram still
> believes that someday all the world will use Mathematica for all
> programming purposes and everyone will accept his definition of terms
> like Precision and Accuracy, and that (see separate thread on how to
> write a mathematical paper) it will all be natural and consistent.
> (or that people who want to hold to the standard usage will be forced to
> use something like SetGlobalPrecision[prec_] :=
> $MaxPrecision = $MinPrecision = prec.
> I believe this is routinely used by people who find Mathematica's
> purportedly "user-friendly" amateurish error control to be hazardous.
> )
>
> .........
>
> 'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it
> means just what I choose it to mean -- neither more nor less.'
>
> 'The question is,' said Alice, 'whether you can make words mean so many
> different things.'
>
> 'The question is,' said Humpty Dumpty, 'which is to be master -- that's
> all.'

