Re: Why Mathematica does not issue a warning when the calculations

  • To: mathgroup at smc.vnet.net
  • Subject: [mg117661] Re: Why Mathematica does not issue a warning when the calculations
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Tue, 29 Mar 2011 06:55:42 -0500 (EST)

Richard Fateman wrote:

(RJF)
>>> So far as I can tell, it has nothing to do with finite precision
>>> arithmetic unless you
>>> have some notion that "equality" means something like "close enough
>>> for government work".

(DL)
>> Numeric conditioning combined with truncation or cancellation error 
>> can have the effect of messing up the proposed equivalence. You 
>> actually show that quite clearly below, in analyzing the Mathematica 
>> behavior of the original example at machine precision.

(RJF)
> This is simply not my understanding of any language (an exception: 
> Iverson's APL had "approximately equal", I think).
> 
> If two numbers x, y in Fortran are equal, then any arithmetic function f 
> of x will be equal to an arithmetic function f of y.
> (There are exceptions to this if the function f is not arithmetical or 
> x,y are perhaps not numbers ...   examples:  if you can ask "what is the 
> address in memory of x" and "what is the address in memory of y" they 
> might be equal or not.  But this is not an arithmetic function.  A 
> second exception is that in IEEE 754 standard binary floats,  +0 and -0 
> are equal.  Yet you can distinguish them with a function that looks at 
> their sign.  However, if you do ordinary non-trapping arithmetic on +0 
> it should result in the same exact bit configuration as -0. Also 
> not-a-numbers (NaNs) are supposed to compare not-equal EVEN IF THEY HAVE 
> THE SAME BITS.
> 
> If you have two numbers that are equal and you subtract them, then 
> cancellation cancels all the bits and you get zero.  You don't have any 
> cancellation error.   If you have two numbers that are equal and you 
> truncate them both to the same number of bits, they are still equal.  If 
> you have two numbers that are equal and you truncate them to different 
> numbers of bits, then they may be different or the same.  If you have two 
> numbers that have different numbers of bits and you compare them, you 
> have to decide whether to discard the extra bits on the longer number, 
> or extend the shorter one (presumably with zeros).   If x and y have 
> the same initial n bits, but x has an additional string of m non-zero bits,
> then the two numbers are not the same identical number, yet 
> Mathematica says they are.  Not just numerically equal (==) but 
> identical (===).  (A sketch after this quote illustrates the behavior.)
> 
> Languages that support both single and double floats (or additional 
> float formats) as well as various lengths of integers  have to address 
> this problem, say if x and y were of different types. They could simply 
> give a type-mismatch error and  object that one cannot compare for 
> numerical equality integers of differing lengths 8, 16, 32, 64 ...  Or 
> in comparing 8 and 16 bit quantities they could compare only the first 8 
> and say they were equal.  Or extend the 8-bit quantity with zeros, or 
> extend it with the random 8 bits following it in memory.  And similarly 
> for doing arithmetic.
> But if two quantities have different bit configurations, I doubt that it 
> makes sense to say they are IDENTICAL.
> 
> There is another set of objects,  somewhat related here, that of Intervals.
> x=Interval[{-1,1}]
> y=Interval[{-1,1}]
> 
> There are several questions that can be asked.  Which of the following 
> makes sense mathematically and does it agree with what Mathematica returns?
> 
> x==x ?
> x===x?
> x==y?
> x===y?
> x-y==0?
> x-x==0?
> 
> The answers to these depend on whether Mathematica recognizes dependent 
> intervals or not.  It apparently does not (a sketch after this quote 
> checks this).
> There is a substantial literature on "Reliable Computation"; 
> Mathematica's overloading of "==" for Interval comparison is probably 
> risky.
> 
> Oh,  Infinity==Infinity gives True, but ...
> And perhaps I found a bug: Max[Indeterminate,Infinity] --> 
> Indeterminate.
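
A minimal sketch of the "identical despite extra bits" behavior described 
in the quote, assuming SameQ compares at the lower of the two precisions 
(the variable names are mine, for illustration):

x = N[Sqrt[2], 10];   (* ten digits *)
y = N[Sqrt[2], 20];   (* twenty digits: same leading bits, extra non-zero bits *)
x == y                (* True: numerically equal under Equal's tolerance *)
x === y               (* also True, which is exactly the complaint above *)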
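
Likewise for the Interval questions: Mathematica's interval arithmetic 
treats each occurrence of an interval as independent (it does not track 
dependencies), so an interval minus itself is not zero:

x = Interval[{-1, 1}];
x - x     (* Interval[{-2, 2}]: the two occurrences are treated as independent *)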

I think this rather got away from the point at hand. You stated that 
a==b should force f(a)==f(b) for numeric functions f. I had stated that 
this does not in general have to work out, and that it fails in fixed 
precision, and that you had in effect shown that type of failure in the
original problem (the one from the post that initiated this thread).

Specifically, if you take the symbolic input, evaluate exactly, and then 
numericize, you get the situation I described. In detail:

e1 = FractionalPart[(6 + Sqrt[2])^20];

Now evaluate in fixed precision. In Mathematica this can be done (to 
close approximation) as below.

In[28]:= a1 = Block[{$MinPrecision=10,$MaxPrecision=10},
   e1 /. n_Integer:>SetPrecision[n,10]]
Out[28]= 0.7149336997

In[29]:= a2 = Block[{$MinPrecision=20,$MaxPrecision=20},
   e1 /. n_Integer:>SetPrecision[n,20]]
Out[29]= 0.68777819382990998756

In[30]:= a1==a2
Out[30]= False

Recall that all we are doing, in effect, is numericizing the numbers and 
evaluating in fixed precision. If this is not deemed to qualify as the 
type of situation you describe, then the more direct variant below will 
certainly suffice.

In[36]:= b1 = With[{prec=10}, Block[{$MinPrecision=prec,$MaxPrecision=prec},
   SetPrecision[-251942729309018542,prec] + (6 + Sqrt[2])^20]]
Out[36]= 0.6619679453

In[37]:= b2 = With[{prec=20}, Block[{$MinPrecision=prec,$MaxPrecision=prec},
   SetPrecision[-251942729309018542,prec] + (6 + Sqrt[2])^20]]
Out[37]= 0.68777820337396297032

In[38]:= b1 == b2
Out[38]= False

This is truncation at work, behind the scenes.
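
A useful cross-check, though not part of the computations above, is a 
higher-precision reference value:

ref = N[FractionalPart[(6 + Sqrt[2])^20], 20]

Comparing a1, a2, b1, and b2 against ref shows how few of the displayed 
digits actually survive the massive cancellation at each working 
precision.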

(DL)
>> Significance arithmetic might exacerbate this in different cases, but 
>> the issue is quite present with fixed precision.

(RJF)
> Maybe an example here of another language or system where this issue 
> occurs would help?

The above behavior will manifest in any program that uses fixed 
precision and supports bigfloats.
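
Even machine precision exhibits it, with the very example from this 
thread: a double near 2.5*10^17 has no bits below the binary point, so 
(assuming ordinary IEEE doubles) the fractional part is gone before any 
comparison can happen.

FractionalPart[N[(6 + Sqrt[2])^20]]   (* 0.; the 53-bit significand cannot reach below the binary point *)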


(DL)
>> SameQ for numeric inputs has semantics as follows: if both are 
>> approximate, compare to last bit (or possibly next-to-last, I do not 
>> recall that specific detail). You may not care for those semantics, but
>> RJFDislike =!= SemanticsAreWrongOrEvenNecessarilyBad
>> (Just evaluate that in Mathematica if you don't believe me...)

(RJF)
> It says except for the last bit.  Why?  Why not require them to be 
> identical, same type, same precision, same accuracy?

That one is not my call. I believe I once heard a good explanation for 
this but for the life of me I cannot remember what it was, let alone how 
compelling it might have been.


(RJF)
> It seems to me that Mathematica owes us some OTHER test for really 
> really identical.  In Michael Trott's Mathematica Guidebook for Numerics 
> he suggests that Experimental`$SameQTolerance would affect this, but 
> apparently not in my version of Mathematica, where it is missing. Could 
> it be that other people thought that this last-bit "semantics" was maybe 
> not something to be set in concrete, at least at some previous time?

I also wish that "good to the last bit" capability was still available. 
I do not know why it was removed.


>> (RJF)
>>> 2.000000000000000001000 == 2  is False, but
>>> 2.00000000000000000100  == 2  is True.
>>>
>>> Such rules of the road lead to situations in which a==b  but a-b is
>>> not zero,
>> True. As you are aware from numerics literature, there will always be 
>> anomalies of that sort (and I also am not thrilled with that one).
> Perhaps you could point to a situation in a major programming language 
> outside of Mathematica  in which a and  b are legitimate finite-value 
> representable floating-point scalar numbers that are equal and yet a-b 
> is not zero?

I cannot. As I stated, this I also find to be problematic.
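
Using the numbers quoted earlier makes the anomaly concrete (my 
restatement, same arithmetic as in RJF's example):

a = 2.00000000000000000100;
a == 2    (* True, as noted above *)
a - 2     (* about 1.*10^-18: small, but certainly not zero *)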

I'm not sure if this next example is entirely in keeping with the 
question at hand, but here is a behavior I rather like.

In[9]:= 2.00000000000000000 + .000000000000000001`20
Out[9]= 2.0000000000000000

We ignore the second addend because it is outside the precision of the 
first. Does your notion of fixed precision, with padding of zero bits to 
raise precision when applicable, also have this behavior? If not, then 
it should (and for reasons you well understand). If so, then it comes 
perilously close to violating some of the approximate arithmetic notions 
I believe you wish to preserve.


>>> (RJF) For
>>> disparagement, one need not look any further than the wikipedia
>>> article, at least last time I looked.

(DL)
>> Well, there you have it. Right in print, on the Internet.

(RJF)
> My apologies. You can find disparagement in other places (with authors) 
> too. The current article seems substantially less critical than some
> past versions. I wonder why :)

Not my doing. No idea who was responsible for changes, and more 
specifically, whether WRI as a company, or employees as individuals, 
were involved.


(DL)
>> On the subject of monster polynomial systems, lemme tellyaz a story. 
>> Once upon a time I was on a conference program committee. A post-doc 
>> submitted an article about some very practical tools developed to find 
>> all real solutions (thousands) for a class of polynomial systems of 
>> modest dimension (10 or so) that arose in a very applied field in which 
>> said post-doc was working. This problem was known to be beyond the 
>> scope of all existing polynomial system solver technologies.
>>
>> The submission was not passed my way, and was eventually rejected. 
>> This was primarily due to the really negative (read: outright caustic) 
>> report of an Esteemed Member of the Computer Algebra Community (one 
>> with Specific Authority in all things Groebner-based). Best I can 
>> tell, this young researcher, whose work was quite nice, has been 
>> absent from computer algebra venues and literature ever since.
>>
>> My point, which I readily admit is distant from the remarks that led 
>> me into this segue, is this. When someone comes along with good ideas 
>> for practical problems of the sort you may have encountered, chances 
>> are that person will not get that work into any symbolic computation 
>> literature. Me, I think we took a big loss in that particular case. As 
>> I've heard similar complaints from others (and had similar experiences 
>> though perhaps of less consequence to the field at large) I think this 
>> may be a common occurrence.


(RJF)
> This last thought -- that the computer algebra community especially 
> tramples newcomers, or applications -- is hard to calibrate.  The US 
> National Science Foundation has, at least in the not-too-distant past, 
> pointed out that computer science reviewers are generally much harsher 
> on proposals than reviewers in other areas. This is a problem when 
> someone at a higher level in NSF claims that a much higher percentage of 
> proposals in (say) physics have high ratings.

Physicists don't eat their young...


(RJF)
> [...]
>    I can imagine the review though in DL's story: the reviewer simply 
> says "This could be solved more easily by using <some system known to 
> reviewer>".  In some instances this is a legitimate reason to reject a 
> paper.  Sometimes not.  I personally have had papers rejected that way, 
> and I sometimes agree.

Two reviewers I believe were softly against the paper, claiming it was 
out of scope. The third reviewer was quite harsh, and if I recall 
correctly, stated that the author never should have been born. Okay, I'm 
making that up. But the assessment amounted to saying that the author 
had no business submitting work in this field. Me, I thought the best 
reason to submit that work to a different field is that reviewers 
elsewhere might actually notice that the work had serious value. And, if 
need be, manage to state with some level of courtesy why it was outside 
the scope of their field.


(RJF)
>  I think that harsh criticisms are not reserved for newcomers. As for 
> why, I can only conjecture. Perhaps there is some alternative world view 
> that comes into play when one has committed opinions into articles and 
> even more so, into programs, and yet further, commercial programs.
>  That is, if you / your students / and especially your company have put 
> X person-years into doing something in a certain way, then someone who 
> proposes a different way will be subjected to additional scrutiny (if 
> not outright scorn :) :) .

I rather doubt that. It hinges far more on the personal integrity of the 
reviewer than on who is paying the bills. One reason is that the bill 
payer rarely if ever knows who is reviewing what.


(RJF)
> Someone who writes a paper for a peer reviewed journal (not a magazine 
> dedicated to a particular computer algebra system), can expect any 
> "system" oriented paper to be attacked by advocates of any system not 
> explicitly compared.

Again, I'm not convinced this is so common, though it happens and 
sometimes needs to happen. Provided the critique has some real content, 
I see no issue. That is to say, it is not adequate to claim "You forgot 
to compare against <program xxx>". But it is fine to state "Here is what 
<program xxx> does, and one observes that it is much better than what is 
shown in this paper".


(RJF)
>  Similarly, but not as extreme, for conferences.  
> This is one, though not the only, reason that the Journal of Symbolic 
> Computation cannot publish systems-y papers. (note for other readers, if 
> there are any still there... that DL and RJF have both been on the 
> editorial board of JSC at various times, and I think we both would push 
> for more systems papers.)
> 
> RJF

Certainly I've no general objection to such papers.


Daniel Lichtblau
Wolfram Research


