Re: Numerical accuracy/precision - this is a bug or a feature?

• To: mathgroup at smc.vnet.net
• Subject: [mg120128] Re: Numerical accuracy/precision - this is a bug or a feature?
• From: Richard Fateman <fateman at cs.berkeley.edu>
• Date: Sat, 9 Jul 2011 07:31:54 -0400 (EDT)
• References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net> <iv6h68$s97$1@smc.vnet.net>

On 7/8/2011 2:04 AM, Oleksandr Rasputinov wrote:

.. snip...

>
> Apart from the curious definitions given to Precision and Accuracy (one
> imagines ApproximateRelativeError and ApproximateAbsoluteError were
> considered too verbose), I do not think Mathematica's way of doing things
> is particularly arbitrary or confusing in the broader context of
> multiprecision arithmetic. Mathematically, finite-precision numbers
> represent distributions over finite fields, and they therefore possess
> quantized upper and lower bounds, as well as quantized expectation values.

This is one view, but one that is not especially useful computationally.
A much more useful view is that a finite-precision number is simply a
single value. That's all.   The number 3 has no fuzz around it. The
double-float number 0.1d0 has no fuzz around it either.  It is not
exactly 1/10; it is exactly

7205759403792794 × 2^(-56).

It is a finite-precision number because the fraction part, here
7205759403792794, is limited to a fixed number of bits.  There is, I
repeat, nothing that makes it a distribution.
Another way of writing it is

3602879701896397/36028797018963968

This view has two advantages:
(1) arithmetic is well-defined and executable on a computer;
(2) should you choose to implement some kind of distribution arithmetic
-- intervals, Gaussian bell curves, significance arithmetic,
fuzzy-set-theoretic, whatever -- you can do so knowing that the
underlying implementation of arithmetic supports any model that can be
reduced to ordinary mathematics.
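The "single exact value" view is easy to check directly. This Python sketch (used here only for illustration; machine reals in most systems, Mathematica included, are the same IEEE doubles) recovers the exact rational value of the double written 0.1, matching both forms given above:

```python
# The double nearest 1/10 is a single exact rational value -- no fuzz.
from fractions import Fraction

x = 0.1
exact = Fraction(x)  # convert the double to its exact rational value

print(exact)                       # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))    # False: 0.1 is not exactly 1/10
print(exact == Fraction(7205759403792794, 2**56))  # True: same number
```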

> Strictly, then, any two such distributions cannot be said to be equal if
> they represent numbers of different precision: they are then distributions
> over two entirely different fields, irrespective of whether their
> expectation values may be equal.

This is something you are free to implement. I personally object to an
arithmetic system and a notion of equality that does not support the
fundamental properties of equivalence relations.  I much prefer that if
a==b, then a-b==0.  This, of course, fails in Mathematica.
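The equivalence property being asked for is exactly what plain IEEE doubles provide. A Python sketch (illustrative; any language exposing hardware doubles behaves the same) shows that == is identity of values on finite floats, so a == b does imply a - b == 0:

```python
# For finite IEEE doubles, equality is a true equivalence relation,
# and a == b implies a - b == 0 exactly.
a = 0.1 + 0.2
b = 0.30000000000000004  # the exact double that 0.1 + 0.2 produces

assert a == b
assert a - b == 0.0
print("a == b implies a - b == 0:", a - b)
```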

>
> However, this definition is not very useful numerically and we are usually
> satisfied in practice that two finite-precision numbers are equal if the
> expectation values are equal within quantization error.

No, I disagree.  It used to be that programmers were taught that one
should never (or almost never)  compare floating point numbers for
equality.  You can easily ask if two numbers are relatively or
absolutely close.  That is not the same as being equal.
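The distinction can be made concrete. This Python sketch (math.isclose is the standard-library relative-closeness test) keeps equality and closeness as separate questions, which is what programmers were traditionally taught to do:

```python
# Equality and closeness are different questions; ask the one you mean.
import math

x = 0.1
y = 0.099999999

print(x == y)                            # False: not the same value
print(math.isclose(x, y, rel_tol=1e-6))  # True: relatively close
print(abs(x - y) < 1e-6)                 # True: absolutely close
```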

> Note that the
> question of true equality for numbers of different precisions, i.e. the
> means of the distributions being equal, is impossible to resolve in
> general given that the means (which represent the exact values) are not
> available to us.

The fault here is that you view numbers as distributions.  I have no
problem telling whether the number 0.1d0 is equal to
3602879701896397/36028797018963968.

It is. That is its value.  It is equal to all other objects with the
same exact value. That's what equal means.

Is it equal to 0.099999999?  No, but it is relatively close by some measure.

> Heuristically, the mean should be close, in a relative
> sense, to the expectation value, hence the tolerance employed by Equal;

You may choose to believe this, but it is mathematical nonsense.

> the exact magnitude of this tolerance may perhaps be a matter for debate
> but either way it is set using Internal`$EqualTolerance, which takes a
> machine real value indicating the number of decimal digits' tolerance that
> should be applied, i.e. Log[2]/Log[10] times the number of least
> significant bits one wishes to ignore. This setting has been discussed in
> this forum at least once in the past: see e.g.
> <http://forums.wolfram.com/mathgroup/archive/2009/Dec/msg00013.html>.
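The quoted Log[2]/Log[10] conversion is simple enough to compute. A Python sketch of the rule (the helper name is mine, and the 7-bit default shown is my recollection of Mathematica's setting, not verified here):

```python
# Convert "low-order bits to ignore" into the decimal-digit tolerance
# value described above (digits = bits * Log[2]/Log[10]).
import math

def digits_for_bits(bits):
    """Decimal digits of tolerance corresponding to ignoring `bits` low bits."""
    return bits * math.log10(2)

print(digits_for_bits(1))  # ~0.301: one bit expressed in decimal digits
print(digits_for_bits(7))  # ~2.107
```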

I suspect there are more insightful discussions in the archives.

>
> Note that if one wishes to be more rigorous when determining equality,
> SameQ operates in a similar manner to Equal for numeric comparands, except
> that its tolerance is 1 (binary) ulp. This is also adjustable, via
> Internal`$SameQTolerance.
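A one-ulp comparison of the kind described can be sketched in Python (math.ulp, available since Python 3.9, gives the spacing of doubles near a value; the helper name is mine, and this is an analogy to the described SameQ behavior, not its implementation):

```python
# Compare two doubles with a tolerance of one unit in the last place.
import math

def same_to_one_ulp(a, b):
    """True if a and b differ by at most one ulp at their magnitude."""
    return abs(a - b) <= math.ulp(max(abs(a), abs(b)))

x = 0.1
print(same_to_one_ulp(x, x + math.ulp(x)))      # True: exactly 1 ulp apart
print(same_to_one_ulp(x, x + 3 * math.ulp(x)))  # False: 3 ulps apart
```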

It is nice that one can try to scrape out all the garbage of the
arithmetic by setting internal flags, but the way it is set up in
Mathematica, the ordinary user with default system settings is exposed
to a really defective arithmetic system.

>
> In regard to erroneous results: undoubtedly it is a possibility. However,
> one would expect that an approximate first order method for dealing with
> error propagation should at least be better in the majority of cases than
> a zeroth-order method such as working in fixed precision.
You might think that, but the choice is not so clear.
If you set $MinPrecision = $MaxPrecision, then

(i = 1.100000000000000000; Do[(i = 2*i - i; Print[i]), {4}])

gives Overflow[] 4 times.

In the default setting, $MinPrecision is 0, and

(i = 1.100000000000000000; Do[(i = 2*i - i; Print[i]), {50}])

sets i to decreasingly precise values, ending in 0., 0., 0.×10^1, ..., 0.×10^5.

(Try it; it is only one line!)

What is the naive user to do?

If you think this example gives a good result, imagine what happens to a
naive user (most are) whose more elaborate program (most are)
internally produces an answer "0." that is entirely bogus.
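The decay in the default setting can be mimicked with a toy first-order error model. This Python sketch (my own model of how significance arithmetic propagates error on subtraction, not Mathematica's actual implementation) shows why the precision of i = 2*i - i shrinks even though the value never changes:

```python
# Toy first-order error propagation: model err(2*i - i) as
# err(2*i) + err(i) = 3*err, so the error grows by 3x per step
# while the value stays fixed at 1.1.
import math

value = 1.1
err = 0.5e-18          # start with roughly 18 significant decimal digits

for step in range(50):
    err = 3 * err      # each 2*i - i step triples the modeled error

# Remaining significant digits: about log10(3) ~ 0.48 digits are lost
# per step, so after ~38 steps the error swamps the value entirely.
digits = max(0.0, math.log10(value / err))
print(digits)  # 0.0
```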

> As stated
> previously, if one desires more accurate approximations then one is in any
> case free to implement them, although given the above it should be clear
> that all that is generally possible within the domain of finite-precision
> numbers is a reasonable approximation unless other information is
> available from which to make stronger deductions.

Quite the opposite.  All numbers in a computer calculation should be
considered exact unless other information is available from which to
make WEAKER deductions.  Then the kind of deductions (based, perhaps, on
knowledge of physical measurements or uncertainties) can be incorporated
in the calculation.

> I will also note that
> none of the example "problems" in this present topic are anything directly
> to do with significance arithmetic; they instead represent a combination
> of confusion due to Mathematica's (admittedly confusing) choice of
> notation,

I think that is right.

> combined with an apparent misunderstanding of concepts related
> to multiprecision arithmetic in general.

I think that the concepts of multiprecision arithmetic as implemented
in Mathematica are different from other implementations in general.
Bringing such outside knowledge to bear on an utterance in Mathematica
is hazardous.

>
>>
>> WRI argues that this is a winning proposition. Perhaps Wolfram still
>> believes that someday all the world will use Mathematica for all
>> programming purposes and everyone will accept his definition of terms
>> like Precision and Accuracy, and that (see separate thread on how to
>> write a mathematical paper) it will all be natural and consistent.
>> (or that people who want to hold to the standard usage will be forced to
>> use something like SetGlobalPrecision[prec_] :=
>> ($MaxPrecision = $MinPrecision = prec).
>> I believe this is routinely used by people who find Mathematica's
>> purportedly "user-friendly" amateurish error control to be hazardous.
>> )
>>
>> .........
>>
>> 'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it
>> means just what I choose it to mean -- neither more nor less.'
>>
>> 'The question is,' said Alice, 'whether you can make words mean so many
>> different things.'
>>
>> 'The question is,' said Humpty Dumpty, 'which is to be master -- that's
>> all.'
>

