MathGroup Archive 2011

Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120281] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Sat, 16 Jul 2011 05:42:43 -0400 (EDT)

On 07/08/2011 03:50 AM, Richard Fateman wrote:
> On 7/7/2011 4:35 AM, Daniel Lichtblau wrote:
>> On Jul 5, 4:15 am, Richard Fateman<fate... at cs.berkeley.edu>   wrote:
>>> In brief:  yes to both.
>>>
>>> On 7/4/2011 4:18 AM, Kevin J. McCann wrote:
>>> ...
>>>
>>>
>>>
>>>>> 1. N[2.0,20] should give 2 with accuracy/precision/whatsever about 20
>>>>> decimal digits, i.e. 2.00000000000000000000
>>>
>>> WRI will defend this as a feature.
>>>
>>> You thought that the semantics of N[] were the same as SetPrecision
>>> e.g.
>>>
>>> N[SetPrecision[2.0, 20]*N[Sqrt[2], 20], 20]
>>>
>>> works as you expected.
>>>
>>> So from your perspective, and from the perspective of anyone else who
>>> thinks along the same lines, it is a bug.  I would prefer to call it
>>> a design error.
>>>
>>> 2.0  (indeed, any floating point number) is a perfectly respectable way
>>> of denoting a number that can be expressed in higher precision by adding
>>> binary zeros. WRI doesn't agree.
>>>
>>> RJF
>>
>> I don't think this holds up for all floats in the sense you seem to
>> indicate. Take 2.1 for example.
>
> My contention is that 2.1, written that way, is a perfectly respectable
> way of denoting a number that can be expressed in higher precision by
> adding binary zeros.  2.1 is a double-float in most systems today.
> What is that number?  In decimal, to higher precision, it looks like
> 2.10000000000000008881784197001....
 >
> A program that decodes a float into its component fraction, exponent,
> and sign gives this information for the fraction:
> 4728779608739021  or in binary
> 10000110011001100110011001100110011001100110011001101
>
> That is the number to which we add binary zeros.
>
> Let us call this number P.

This is what SetPrecision will do. N will not do that.
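
For concreteness, one can recover those numbers directly in Mathematica. 
This is only a sketch from a hypothetical fresh session (the In/Out 
numbering is illustrative and exact output formatting may vary by 
version), but I would expect roughly:

In[1]:= SetPrecision[2.1, Infinity]
Out[1]= 4728779608739021/2251799813685248

In[2]:= Denominator[%] == 2^51
Out[2]= True

In[3]:= SetPrecision[2.1, 30]
Out[3]= 2.10000000000000008881784197001

The numerator is exactly the integer you quote above, and padding it 
with binary zeros -- SetPrecision's job, not N's -- gives the longer 
decimal expansion you quote.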

If I recall correctly (unlikely, by now), the original point of 
contention was that N[] did not also do this. That is by design. N[] 
will not invent new digits, binary or otherwise. For that, one requires 
the more arcane SetPrecision.
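
To make the distinction explicit (again only a sketch; this is the 
behavior I would expect):

In[4]:= N[2.0, 20]
Out[4]= 2.

In[5]:= Precision[%]
Out[5]= MachinePrecision

In[6]:= SetPrecision[2.0, 20]
Out[6]= 2.0000000000000000000

In[7]:= Precision[%]
Out[7]= 20.

N leaves the machine number at machine precision; SetPrecision is the 
function that manufactures the extra (binary zero) digits.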


>> We can (and will) use SetPrecision to
>> pad with binary zeroes, just as you use it above on 2.0.
>>
>> In[606]:= N[SetPrecision[2.1, 20]*N[Sqrt[2], 20], 20]
>>
>> Out[606]= 2.9698484809834997281
>>
>> But this is not an accurate representation of 21/10 * Sqrt[2] to 20
>> places.
>
>
> Just because you are doing arithmetic on numbers represented to 20
> digits does not mean that the result of the arithmetic will be right to
> 20 digits. However, in the single operation of multiplication, you
> should not lose any digits.  What's going on here?
>
> You simply computed two different quantities.  Let S= sqrt(2) to 20
> decimal places.  In line 606 you multiplied P*S and rounded it to 20
> digits  [ actually you did something to the binary reps, but it doesn't
> matter here].

I was (and remain) fairly sure that your notion, applied to 2.1 rather 
than 2.0, would be to do exactly this. I never claimed it was a good idea...
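
One way to see that two different quantities really are involved is to 
convert 2.1 to the exact rational it denotes before multiplying. A 
sketch (outputs are what I would expect; the last digit or two could 
conceivably differ):

In[8]:= p = SetPrecision[2.1, Infinity]
Out[8]= 4728779608739021/2251799813685248

In[9]:= N[p*Sqrt[2], 20]
Out[9]= 2.9698484809834997281

In[10]:= N[(21/10)*Sqrt[2], 20]
Out[10]= 2.9698484809834996025

The first product reproduces Out[606] above; the second is the Out[607] 
quoted below.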


> In line 607, below, you multiplied 21/10 by S.  Since P is not equal
> to 21/10, why should the results be the same?  Oh, Mathematica thinks
> that P == 21/10, but that is merely the terribly broken notion of
> equality that Mathematica uses. We used to call that "close enough for
> government work".  But it is terribly broken because == fails to satisfy
> the axioms of "equivalence relations", such as transitivity: a==b and
> b==c implies a==c.

It satisfies trichotomy. That is to say, it plays nice with inequalities.

In any implementation of equality and inequality predicates that 
involves approximate numbers, something from exact arithmetic must be 
jettisoned. The choice we took might not be the one you would have 
preferred. But it has its justifications.

I think we've discussed this before. (Recursively, I probably said this 
the last time around.)
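
The flavor of the trade-off can be seen in two lines (again a sketch; 
this is the behavior I would expect):

In[11]:= 2.1 == 21/10
Out[11]= True

In[12]:= SetPrecision[2.1, Infinity] == 21/10
Out[12]= False

Equal treats approximate numbers as agreeing when they match up to 
roughly their last couple of digits, so the machine number 2.1 tests 
equal to 21/10 even though its exact binary value is a different 
rational. That is the "something" that gets jettisoned.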


>> In[607]:= N[21/10*Sqrt[2], 20]
>> Out[607]= 2.9698484809834996025
>>
>> They are off by around a machine epsilon. No surprise here, I think.
>>
>> In[608]:= % - %%
>> Out[608]= -1.256*10^-16
>>
>> I also think that this sort of numeric discrepancy is inevitable* in
>> any program that uses decimal input and binary representation.
>
> Absent any interior calculations, I would tend to agree.  But there is
> something else going on. I think that Mathematica has embedded in it
> several design decisions that elevate the (typical)
> one-half-unit-in-last-place error in binary-to-decimal and
> decimal-to-binary conversion that happens in input or output,  into a
> global slush bucket that infects and propagates into all interior
> computations with "software floats".

For input, I don't see that this slush bucket is any different from the 
underpinnings of other approximate floating-point arithmetic. For output 
purposes it seems irrelevant. Maybe I'm misunderstanding the claim you 
are making.
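
As one illustration of what happens at input (a sketch; both comparisons 
are between exact numbers, so I would expect both to come out False):

In[13]:= SetPrecision[2.1`20, Infinity] == 21/10
Out[13]= False

In[14]:= SetPrecision[2.1`20, Infinity] == SetPrecision[2.1, Infinity]
Out[14]= False

The 20-digit input 2.1`20 is converted to a binary value that is neither 
exactly 21/10 nor the machine double; the conversion error is simply 
pushed below the stated precision, which is what any binary 
floating-point representation of decimal input does.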

Daniel Lichtblau
Wolfram Research

