MathGroup Archive 2001


Re: Zero does not equal zero et al.

  • To: mathgroup at smc.vnet.net
  • Subject: [mg31596] Re: [mg31586] Zero does not equal zero et al.
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Thu, 15 Nov 2001 05:52:29 -0500 (EST)
  • References: <200111140842.DAA03423@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

Alan Mason wrote:
> 
> Hello,
> I would like to comment on some of the posts in the thread "zero does not
> equal zero".  I agree completely with Richard Fateman and am disappointed
> that Dan Lichtblau "passed" on addressing the question of whether
> Mathematica's fuzzy handling of floating point makes such programming
> mainstays as x==y and x===y untenable. 

I passed on discussing Equal and SameQ because I did not wish to be
repetitive. If you want to see my opinion on all that, it is given in
some detail in the various threads to which I posted URLs. Here it is in
short form. They are quite tenable as currently implemented, and indeed
quite useful. I will remark that I am not sure anyone actually brought
tenability into question; some clearly do not like the semantics of
those functions, but that is not the same thing.
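For concreteness, here is the documented distinction between the two, as
a quick session illustrates:

    In[1]:= 1.0 == 1
    Out[1]= True

    In[2]:= 1.0 === 1
    Out[2]= False

Equal tests mathematical equality (to the precision of the operands),
while SameQ tests structural identity, so an approximate number is never
SameQ to an exact one.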


> This is not a minor point but a
> fundamental design issue.  I continue to think that 1.0 == 1 returning True
> is an outright error, because it trashes the pattern matcher and offers no
> defense against propagation of errors -- how fuzzy does something have to
> get before it becomes intolerable?

You do not understand the purpose of testing for equality (==). It is
not some syntactic sort of thing wherein exact and approximate worlds
cannot mix (we leave that for SameQ). Nor is it deeply metaphysical. To
explain what I mean by that, I'll point out that your later remark
regarding "experimental error", if taken seriously, would preclude 1.0
from being declared equal to itself.

Equality testing is based on math, it is fuzzy when the inputs are
fuzzy, and the specifics of the fuzziness are documented. I make no
apology for any of this. Quite the contrary, I view it as utterly
necessary for the business of computational math. If you ever try to
code an approximate Groebner basis algorithm, or anything with a
comparable mix of symbolic and numeric functionality, you will either
come to agree with me or else abandon the project.

I am hard pressed to see how the outcome of 1.0==1 in any way affects
(let alone "trashes") pattern matching. The latter is in some sense
related to SameQ but not Equal.
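One can check this directly: the pattern matcher uses structural
(SameQ-like) identity, so the behavior of Equal on mixed exact and
approximate numbers never enters into it.

    In[1]:= MatchQ[1.0, 1]
    Out[1]= False

    In[2]:= Cases[{1, 1.0, 2}, 1]
    Out[2]= {1}

The approximate 1.0 does not match the exact pattern 1, even though
1.0 == 1 evaluates to True.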


> At the very least, why not introduce a
> new symbol, such as =N=, to test for floating point "equality".
> The meaning of == is sacrosanct and should not be sullied by
> contact with the dirty linen of floating point arithmetic, vagaries
> of hardware representations, etc.

We agree that the meaning is sacrosanct but have very different ideas of
what exactly that meaning is. While I speak only for myself, I strongly
suspect that your opinion is not widely shared. Certainly it does not
express the defined semantics of the Equal function in Mathematica. Nor
does it carry over to FindRoot or NSolve, where Equal is specified in
the inputs but only satisfied approximately by results.
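For example, the Equal in the input below is a specification of the
equation to solve, and the returned root satisfies it only to within the
working precision:

    In[1]:= FindRoot[x^2 == 2, {x, 1}]
    Out[1]= {x -> 1.41421}

Under the proposed "sacrosanct" reading of ==, this machine-precision
result would not count as a solution at all.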


> I also think Mathematica's adoption of a "least common denominator"
> approach to comparing floats with other floats or even with what are
> supposed to be exact numbers (like 1 as opposed to 1.0) is objectionable.
> Thinking probabilistically, if 1.0 is the result of an experimental
> measurement, there is zero chance that 1.0 can exactly equal 1, so why
> confound them?

Indeed, why would 1.0 == 1.0 under this interpretation?

The corruption model we employ (in brief: compare at the lowest
precision of the operands) is useful, fairly easy to explain, and
involves no appeal to philosophy, all of which argues in its favor.
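A short illustration of that model: two approximations of Pi carrying
different precision compare equal, because the comparison is made at the
lower of the two precisions, while SameQ still distinguishes them.

    In[1]:= N[Pi, 10] == N[Pi, 20]
    Out[1]= True

    In[2]:= N[Pi, 10] === N[Pi, 20]
    Out[2]= False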


>[...]

> This
> device obviates the need to reset accuracy at various critical points (how
> tedious!) and is crucial, as otherwise with root-finding algorithms (e.g.)
> Mathematica will return a result with zero precision.  This is because those
> algorithms rely on the fact that although approximate roots of F[x] = 0 are
> accurate only to the square root of the working precision, when you feed the
> approximation back to F in the iterative loop you regain the lost precision;

I confess I do not understand this. Why do you claim that approximate
roots found in root finding are accurate to the square root of the
working precision? Theory and practice agree that accuracy (or, more
correctly, the work required to attain a specified level of accuracy)
will be related to the conditioning of the particular problem.
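One can see conditioning at work in a simple sketch (the exact digits in
the outputs will vary by version and settings, so I omit them). A simple
root is found to essentially full working precision, whereas a double
root is ill conditioned and one expects to lose roughly half the working
digits, with Mathematica typically issuing a precision warning:

    In[1]:= FindRoot[x^2 - 2 == 0, {x, 1}]
    Out[1]= {x -> 1.41421}

    In[2]:= FindRoot[(x - 1)^2 == 0, {x, 2}]
    (* double root: expect only about half the working precision *)

The square-root-of-precision behavior is thus a property of multiple
(or clustered) roots, not of root finding in general.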


> [...]
> 
> But even leaving aside the question of whether Mathematica's treatment of
> arbitrary precision floats is sound numerically, and serious doubts have
> been raised, it's far from obvious (to me) that floating point arithmetic as
> currently implemented in Mathematica is compatible with Mathematica's
> essential purpose as a symbolics package.
> 
> Alan

The essential purpose of Mathematica is to be a powerful, flexible,
general purpose program for technical computation. Strong symbolics
functionality is an important component of this, one that is under
constant development (in no small part by myself). It is by no means the
only one and hardly qualifies as the "essential purpose" when one views
a broad spectrum of users.


Daniel Lichtblau
Wolfram Research
Opinions herein are mine and not necessarily shared by my employer.

