MathGroup Archive 2001


Re: Zero does not equal zero et al.

  • To: mathgroup at
  • Subject: [mg31597] Re: [mg31586] Zero does not equal zero et al.
  • From: David Withoff <withoff at>
  • Date: Thu, 15 Nov 2001 05:52:31 -0500 (EST)
  • Sender: owner-wri-mathgroup at

> Hello,
> I would like to comment on some of the posts in the thread "zero does not
> equal zero".  I agree completely with Richard Fateman and am disappointed
> that Dan Lichtblau "passed" on addressing the question of whether
> Mathematica's fuzzy handling of floating point makes such programming
> mainstays as x==y and x===y untenable.  This is not a minor point but a
> fundamental design issue.

The issues that have been raised in this thread have been addressed
many times, in this group and elsewhere.  The basic principles are
also discussed in the formidable open literature on this topic.  This
thread has expanded to include so much of numerical analysis that
it probably isn't realistic to respond to everything.  If you don't
see the responses to all of your questions in this particular thread,
the answers are available elsewhere.

> I continue to think that 1.0 == 1 returning True
> is an outright error, because it trashes the pattern matcher

The pattern matcher does not use Equal (==), so the design of Equal
has no effect on the pattern matcher one way or the other.
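For example, the pattern matcher compares expressions structurally, not
numerically, so the two questions are independent (this session is just
an illustration, not from the original discussion):

    In[1]:= 1.0 == 1
    Out[1]= True

    In[2]:= MatchQ[1.0, 1]       (* the exact integer 1, used as a
                                    pattern, does not match the real 1.0 *)
    Out[2]= False

    In[3]:= MatchQ[1.0, _Integer]
    Out[3]= False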

> and offers no
> defense against propagation of errors -- how fuzzy does something have to
> get before it becomes intolerable?

The notion of "defense against propagation of errors" in numerical
analysis generally means designing algorithms to reduce accumulated
error.  It isn't clear what equality testing might offer that is
relevant to this. 

> At the very least, why not introduce a new symbol, such as =N=, to test
> for floating point "equality".

You can use SameQ[Order[e1, e2], 0] if you want that operation.
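For instance (an illustrative session; note that SameQ by itself treats
approximate numbers that differ only in their last binary digits as the
same, while the Order-based test distinguishes any two expressions that
are not identical):

    In[1]:= SameQ[Order[1, 1.0], 0]
    Out[1]= False

    In[2]:= SameQ[Order[1.0, 1.0], 0]
    Out[2]= True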

> The meaning
> of == is sacrosanct and should not be sullied by contact with the dirty
> linen of floating point arithmetic, vagaries of hardware representations,
> etc.

Equality testing certainly must deal with those "vagaries" at some level.
If anything, functions like Equal and SameQ are less sensitive to those
vagaries than corresponding functions in other systems.

> I also think Mathematica's adoption of a "least common denominator"
> approach to comparing floats with other floats or even with what are
> supposed to be exact numbers (like 1 as opposed to 1.0) is objectionable.
> Thinking probabilistically, if 1.0 is the result of an experimental
> measurement, there is zero chance that 1.0 can exactly equal 1, so why
> confound them?

You can use SameQ if you don't want a comparison of 1 and 1.0 to
return True.  It does not seem bad to provide several functions
that reflect the different common ways of handling this comparison.
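That is (illustrative session):

    In[1]:= 1 == 1.0     (* Equal compares numerical values *)
    Out[1]= True

    In[2]:= 1 === 1.0    (* SameQ distinguishes exact from approximate *)
    Out[2]= False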

> Concerning Fateman's comment (in Re Re Zero does not equal Zero):
> > Another choice is to test, and reset the
> > accuracy of numbers at critical points.
> > Realize, for example, that certain convergent
> > iterations will not produce more accurate
> > answers (as is normal), but will produce
> > vaguer answers at each step because of Mathematica's
> > arithmetic. They will terminate not when answers
> > are close, but when they are essentially unknown.
> > 
> This is a major hazard, but at least a way to defeat this behavior-- by
> setting $MinPrecision=16 (say)-- is discussed in the Mathematica Book.  This
> device obviates the need to reset accuracy at various critical points (how
> tedious!) and is crucial, as otherwise with root-finding algorithms (e.g.)
> Mathematica will return a result with zero precision.  This is because those
> algorithms rely on the fact that although approximate roots of F[x] = 0 are
> accurate only to the square root of the working precision, when you feed the
> approximation back to F in the iterative loop you regain the lost precision;
> there is no way for Mathematica to know this, so it assumes the worst-case
> scenario of progressive degradation of accuracy.  I have found Mathematica's
> high-precision arithmetic, fuzzy or not, to be useful for empirical error
> analysis.

This (to me anyway) is the only somewhat interesting concern about
Mathematica arithmetic.  This too has been addressed many times,
but at least it is interesting.

While it is true that numerical algorithms often "regain lost
precision", this is certainly not always the case, or even usually
the case.  In most practical calculations numerical errors are
uncorrelated, and treating them as if they somehow cancel out is
just plain wrong.  The default behavior of variable-precision
arithmetic in Mathematica reflects this reality, and assumes by
default that numerical errors are uncorrelated.
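A standard illustration of why this default is sensible is
cancellation between nearby quantities, where the relative error
of the result really is much larger than that of the inputs (the
numbers below are rough, and the session is only a sketch):

    a = N[1/3, 20];
    b = N[1/3 + 10^-15, 20];

    Precision[a]      (* 20. *)
    Precision[b - a]  (* roughly 5: the subtraction cancels about
                         15 of the 20 significant digits *)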

A simple counter-example is a program that adds and subtracts the
same number many times.  In such a program, the error introduced
in each addition exactly cancels with the error introduced in each
subtraction, so the errors do not accumulate.  Treating the errors
as uncorrelated, as is done by default for variable-precision
arithmetic in Mathematica, leads to an over-estimate of the error
in the result.
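A sketch of such a program (illustrative; the variable names are
made up here):

    x = N[1/3, 20];
    one = N[1, 20];

    (* add and subtract the same approximate number 100 times;
       the true errors cancel exactly at every step *)
    y = Nest[(# + one) - one &, x, 100];

    Precision[y]  (* lower than 20, because each operation is
                     assumed, pessimistically, to add independent
                     error *)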

The problem with that counter-example is that most numerical
calculations involve a lot more than just adding and subtracting
the same number many times.  In the overwhelming majority of
practical numerical calculations the algorithms have not been
meticulously contrived so that the errors cancel out, and the
only mathematically justifiable assumption is that all of the
errors are uncorrelated.  

Programs that have been so contrived can be run in Mathematica
either by using machine arithmetic, or by suitable use of
SetPrecision and SetAccuracy.
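For example, in the add-and-subtract program above, one can reassert
the precision at each step once a separate analysis has shown that
the errors really do cancel (again only a sketch):

    x = N[1/3, 20];
    one = N[1, 20];

    (* reset the precision after each cancelling pair of operations *)
    y = Nest[SetPrecision[(# + one) - one, 20] &, x, 100];

    Precision[y]  (* stays at 20 *)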

Since machine arithmetic (where all errors are effectively assumed
to be perfectly correlated) is fully available in Mathematica,
concerns about variable-precision arithmetic in a sense amount to
arguing that less is more: that providing an alternative to machine
arithmetic is somehow worse than providing no alternative.

And for those examples where high-precision is needed and the
algorithms have been designed so that errors tend to be correlated,
inserting SetPrecision and SetAccuracy in appropriate places is
hardly much of a hardship, especially in comparison to the vastly
more difficult task of doing the basic numerical analysis to arrange
for these functions to be appropriate in the first place.

In any case, the assumption that precision is fixed throughout
a calculation, as is done in machine arithmetic, or by corresponding
use of SetPrecision, SetAccuracy, $MinPrecision, and $MaxPrecision,
amounts to telling the computer to make up extra digits to pad out
all results to a fixed number of digits.   The absurdity of doing
numerical analysis that way should be obvious.  Machine arithmetic is
popular because it is fast and compact, not because there is anything
fundamentally good about it.
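SetPrecision makes the made-up digits visible directly (illustrative;
the exact digits depend on the binary representation of the machine
number):

    SetPrecision[0.1, 25]
    (* something like 0.1000000000000000055511151 -- the trailing
       digits come from padding out the binary encoding of 0.1,
       not from any actual data *)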

> But even leaving aside the question of whether Mathematica's treatment of
> arbitrary precision floats is sound numerically, and serious doubts have
> been raised, it's far from obvious (to me) that floating point arithmetic as
> currently implemented in Mathematica is compatible with Mathematica's
> essential purpose as a symbolics package.

All of the doubts that I know about have been addressed many times.
If anyone has any remaining questions about these issues, my best
suggestion would be to review the archives of this newsgroup, to
look at some old tutorials on Mathematica arithmetic (available on
MathSource), to prepare specific examples to illustrate your concerns,
or to review the basic issues in the open literature.  The book that
I happen to have on my desk right now, "Numerical Analysis" by
Burden and Faires, for example, has in the first few pages a decent
introduction to arithmetic.

In closing I could mention that the "zero does not equal
zero" example that introduced this thread was largely just an
illustration of the distinction between the way that numbers are
stored and the way that numbers are displayed.  The precision of
a number can be any real number, but the display can obviously
only use a discrete number of digits.  A number with a precision
of 3.4, for example, might display with three digits, a number
with a precision of 3.7 might display with four digits, and so
forth, and a number with precision less than 1 might display as
zero. Although one might disagree with the way that fractional
precision is rounded to a discrete number of digits for the
purpose of display, this has nothing to do with basic arithmetic.

Dave Withoff
Wolfram Research
