Zero does not equal zero et al.
- To: mathgroup at smc.vnet.net
- Subject: [mg31586] Zero does not equal zero et al.
- From: "Alan Mason" <swt at austin.rr.com>
- Date: Wed, 14 Nov 2001 03:42:08 -0500 (EST)
- Organization: Road Runner - Texas
- Sender: owner-wri-mathgroup at wolfram.com
Hello, I would like to comment on some of the posts in the thread "zero does not equal zero". I agree completely with Richard Fateman and am disappointed that Dan Lichtblau "passed" on addressing the question of whether Mathematica's fuzzy handling of floating point makes such programming mainstays as x == y and x === y untenable. This is not a minor point but a fundamental design issue.

I continue to think that 1.0 == 1 returning True is an outright error, because it trashes the pattern matcher and offers no defense against the propagation of errors -- how fuzzy does something have to get before it becomes intolerable? At the very least, why not introduce a new symbol, such as =N=, to test for floating-point "equality"? The meaning of == is sacrosanct and should not be sullied by contact with the dirty linen of floating-point arithmetic, the vagaries of hardware representations, and so on. I also find Mathematica's "least common denominator" approach to comparing floats with other floats, or even with supposedly exact numbers (1 as opposed to 1.0), objectionable. Thinking probabilistically, if 1.0 is the result of an experimental measurement, there is zero chance that it exactly equals 1, so why confound the two?

Concerning Fateman's comment (in "Re: Re: Zero does not equal zero"):

> Another choice is to test, and reset the
> accuracy of numbers at critical points.
> Realize, for example, that certain convergent
> iterations will not produce more accurate
> answers (as is normal), but will produce
> vaguer answers at each step because of Mathematica's
> arithmetic. They will terminate not when answers
> are close, but when they are essentially unknown.
> This is a major hazard.

But at least a way to defeat this behavior -- setting $MinPrecision = 16, say -- is discussed in the Mathematica Book. This device obviates the need to reset the accuracy at various critical points (how tedious!) and is crucial, because otherwise, with root-finding algorithms for example, Mathematica will return a result with zero precision. Those algorithms rely on the fact that although an approximate root of F[x] = 0 is accurate only to the square root of the working precision, feeding the approximation back into F in the iterative loop regains the lost precision; Mathematica has no way of knowing this, so it assumes the worst case of progressive degradation of accuracy.

I have found Mathematica's high-precision arithmetic, fuzzy or not, to be useful for empirical error analysis. But even leaving aside the question of whether Mathematica's treatment of arbitrary-precision floats is numerically sound (and serious doubts have been raised), it is far from obvious (to me) that floating-point arithmetic as currently implemented in Mathematica is compatible with Mathematica's essential purpose as a symbolics package.

Alan
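
P.S. For concreteness, here is the comparison behavior I am objecting to, as a fresh-session transcript (outputs as I understand them; details may vary by version), together with a hypothetical user-level stand-in for the =N= test I suggested. approxEqual is not built in; it is only an illustration of what an explicit-tolerance comparison could look like.

    In[1]:= 1.0 == 1
    Out[1]= True

    In[2]:= 1.0 === 1
    Out[2]= False

    (* hypothetical stand-in for =N=: equality only up to an
       explicit, caller-chosen tolerance *)
    In[3]:= approxEqual[x_?NumericQ, y_?NumericQ, tol_: 10^-10] :=
              Abs[x - y] <= tol

    In[4]:= approxEqual[1.0, 1]
    Out[4]= True

    In[5]:= approxEqual[1.0, 1 + 10^-6]
    Out[5]= False

The point is that the tolerance is then the programmer's explicit choice, instead of something baked into ==.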
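
P.P.S. To make the $MinPrecision device concrete, here is a minimal sketch (the Newton iteration for Sqrt[2], the starting precision, and the tolerance are only illustrative) of pinning the working precision inside an iterative loop:

    (* Newton iteration for Sqrt[2] with a floor on precision.  With
       $MinPrecision = 16 in effect, arbitrary-precision intermediate
       results are not allowed to fall below 16 digits, so the loop
       terminates because successive iterates agree, not because their
       precision has been eaten away. *)
    Block[{$MinPrecision = 16},
      FixedPoint[(# + 2/#)/2 &,
        SetPrecision[1.5, 16],
        SameTest -> (Abs[#1 - #2] < 10^-12 &)]
    ]

In a toy loop like this the loss would be mild anyway; the point is that in a long-running iteration, without the Block, significance arithmetic's worst-case bookkeeping produces exactly the degradation Fateman describes.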
- Follow-Ups:
- Re: Zero does not equal zero et al.
- From: Daniel Lichtblau <danl@wolfram.com>
- Re: Zero does not equal zero et al.