MathGroup Archive 2001

Re: Zero does not equal Zero

  • To: mathgroup at smc.vnet.net
  • Subject: [mg31503] Re: Zero does not equal Zero
  • From: "Ersek, Ted R" <ErsekTR at navair.navy.mil>
  • Date: Thu, 8 Nov 2001 04:55:11 -0500 (EST)
  • Sender: owner-wri-mathgroup at wolfram.com

I have a copy of the out-of-print book, Elementary Numerical Computing With
Mathematica, by Robert D. Skeel and Jerry B. Keiper.  As many of you know,
Jerry Keiper led the team that developed Mathematica's numerics until his
untimely death in 1995.  It's curious that page 46 of the above book says
the following about comparing approximate numbers.

"Programming languages should have relational operations that are executed
exactly without any tolerances. In particular, an equality between two
floating point values should succeed only if they are exactly equal. If
equality and other relational operations are fuzzy, it makes them more
complicated to define, it frustrates and inconveniences knowledgeable
programmers, and it destroys the familiar properties of these operators." 

I happen to agree with the opinion above, but as demonstrated at the start
of this thread, comparison of approximate numbers in Mathematica is fuzzy.
For those who missed the beginning of the thread, the lines below (similar
to Leszek Sczanieki's example) demonstrate this fuzziness.

In[1]:=
   a=1.0;
   b=1.0 + 64 * $MachineEpsilon;
   c=1.0 + 128 * $MachineEpsilon;
   {a==b, b==c, a==c}

Out[1]=
   {True, True, False}


In[2]:=
   InputForm[{a, b, c}]

Out[2]=
   {1., 1.0000000000000142, 1.0000000000000284}

----------------------------------  
I suspect this was a topic of debate between Jerry Keiper and Stephen
Wolfram.  The above book says this fuzziness makes these features more
complicated to define.  Well, in Mathematica it's worse than complicated.  I
have only been able to get a partial explanation from Wolfram Research for
when two approximate numbers are or are not equal.  If my memory serves me
correctly, (a==b) is guaranteed to be False if (a) and (b) differ in the
eighth-to-last bit of the binary mantissa/significand or in one or more of
the higher bits.  This is a sufficient but not necessary condition for
(a==b) to return False; sometimes (a==b) returns False when (a) and (b)
differ only in bits below the eighth-to-last.  I tried to get a complete
explanation of what (a==b) does and was told that would require an
understanding of Mathematica's internal representation of approximate
numbers.  Does this internal representation involve design details Wolfram
Research is not willing to share, or do they consider it too complicated for
mere users to grasp?
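
For concreteness, here is a small experiment (my own sketch, not anything
supplied by Wolfram Research) that probes where machine-number equality
stops being fuzzy.  Based on the partial explanation above and the In[1]
output, I would expect the result to switch from True to False between
k=6 (64 * $MachineEpsilon) and k=7 (128 * $MachineEpsilon), at least on a
machine with the default settings.

   (* Compare 1.0 against 1.0 shifted by 2^k machine epsilons. *)
   Table[{2^k, 1.0 == 1.0 + 2^k * $MachineEpsilon}, {k, 0, 10}]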

In any case, I found a way to get around this fuzziness in many cases. 
If you want an exact comparison 
   use (a-b==0)  instead of  (a==b),  
   use Negative[a-b]  instead of  (a<b), 
   use NonPositive[a-b] instead of (a<=b), 
   and make similar changes for (a!=b), (a>b), (a>=b).  
When (a) and (b) are machine numbers, the alternate forms above will always
give the same result as an exact comparison.
If (a) or (b) is an arbitrary-precision number, the above trick will reduce
(but not eliminate) the fuzziness.
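
As a quick sanity check (my own sketch, using the numbers from In[1] above),
the two styles of comparison can be put side by side.  If the workaround
behaves as claimed, the fuzzy forms treat (a) and (b) as equal while the
subtraction forms see the true difference.

   a = 1.0;
   b = 1.0 + 64 * $MachineEpsilon;
   (* fuzzy Equal vs. the exact subtraction form *)
   {a == b, a - b == 0}
   (* fuzzy Less vs. the exact Negative form *)
   {a < b, Negative[a - b]}
   (* expected, per the claims above: {True, False} and {False, True} *)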

If you are interested in Mathematica's arbitrary precision arithmetic you
can read about lots of the details in a report by Mark Sofroniou.  The
report can be found in the Help Browser under:
    Getting Started/Demos  (Button)
           Demos  (in left column)
                Numerics Report  (second column)

You can find more information on the subject at 
http://support.wolfram.com/mathematica/mathematics/numerics/numericalerror.html

---------------- 
Regards,
Ted Ersek
   See Mathematica Tips, Tricks at 
   http://www.verbeia.com/mathematica/tips/Tricks.html


