MathGroup Archive 2011


Re: Why Mathematica does not issue a warning when the calculations

  • To: mathgroup at smc.vnet.net
  • Subject: [mg117660] Re: Why Mathematica does not issue a warning when the calculations
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Tue, 29 Mar 2011 06:55:31 -0500 (EST)

Richard Fateman wrote:

(DL)
>> [...] If this is not deemed to qualify as 
>> the type of situation you describe [...], then the more direct 
>> variant below will certainly suffice.
>>
>> In[36]:= b1 = With[{prec=10}, 
>> Block[{$MinPrecision=prec,$MaxPrecision=prec},
>>   SetPrecision[-251942729309018542,prec] + (6 + Sqrt[2])^20]]
>> Out[36]= 0.6619679453
>>
>> In[37]:= b2 = With[{prec=20}, 
>> Block[{$MinPrecision=prec,$MaxPrecision=prec},
>>   SetPrecision[-251942729309018542,prec] + (6 + Sqrt[2])^20]]
>> Out[37]= 0.68777820337396297032
>>
>> In[38]:= b1 == b2
>> Out[38]= False
>>
>> This is truncation at work, behind the scenes.

(RJF)
> Again, these are two different functions, and there is no reason for 
> them to result in the same answer.

Heh. This is quite not the case (and I don't mean "not quite"...)

The function being applied is

f(x) = x + (6 + Sqrt[2])^20

The inputs are both equal to -251942729309018542:

In[2]:= SetPrecision[-251942729309018542,10] ==
   SetPrecision[-251942729309018542,20]
Out[2]= True

This is exactly the situation you had requested. I mention this in 
careful detail in case anyone is (a) still reading this and (b) failing 
to see exactly how the example fits the bill.
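
For anyone who wants it in one self-contained evaluation, the same example 
can be packaged so the function application is explicit. This is only a 
rearrangement of In[36]-In[38]; the name f mirrors the f(x) written above.

f[x_] := x + (6 + Sqrt[2])^20

b1 = With[{prec = 10},
  Block[{$MinPrecision = prec, $MaxPrecision = prec},
    f[SetPrecision[-251942729309018542, prec]]]]
(* 0.6619679453, as in Out[36] *)

b2 = With[{prec = 20},
  Block[{$MinPrecision = prec, $MaxPrecision = prec},
    f[SetPrecision[-251942729309018542, prec]]]]
(* 0.68777820337396297032, as in Out[37] *)

b1 == b2
(* False: one function, two arguments that Equal calls equal, two results *)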

To be more specific, here is what you had earlier stated.

"the fact that 2.0000==2.000000000 is True (etc) means that
a==b and both a and b are numbers, does not mean that f[a]==f[b] [in 
Mathematica]. I view this as problematical, even if most people don't 
ever even notice it."

You added shortly afterward:

"If you have a true function f, that is a procedure whose results
depend only on its argument(s), then when A is equal to B,  f(A)
should be equal to f(B).  This is fundamental to programming and to
mathematics, but is violated by Mathematica."

What I showed was an example in which it would be violated by any 
program, working in fixed precision arithmetic, that also claims 
2.0000==2.000000000. I suspect this applies to at least a few of the 
programs that implement bigfloats. You might recognize one such below.

  type(2.0000=2.000000000, 'equation' );
                                      true

Summary statement: What you wrote to indicate why my example does not 
qualify strikes me as elaborate gyration.


> Fortran 77 has two (maybe more?) sine functions for different 
> precisions. Starting with an exact integer 2, you presumably get 
> different results for  sin(1.0 * 2)   and sin(1.0d0 * 2). The intrinsic 
> functions for different precisions are different. Perhaps this is what 
> you are getting at.  If you call two different functions "the same" then 
> it would seem that you might get different answers on inputs that are 
> the same value (in Fortran or any other system).
> 
> though the situation in Mathematica is kind of the reverse.  That is, 
> you have a function, say s[x_,y_]:=x-y.  It is not really a single 
> function of two arguments. Really it is a function of 4 arguments,
> 
> s(x, y, Precision(x), Precision(y))
> 
>  and the result is computed by something like
> 
>  SetPrecision(RawSubtract(x,y),Min(prec1,prec2))
> 
> Calling a function like this requires you to distinguish x and y if they 
> have different values or different precisions.
> Mathematica hides the precision of numbers from the user during ordinary 
> arithmetic.

I might buy this argument if it were significance arithmetic under 
discussion. But then you would not be using the argument, because it 
would go in the direction of your point, to wit, that
a==b but f(a)!=f(b) is bad semantics.

Since it is just fixed precision arithmetic being used, and evaluations 
are arithmetically the same, I don't buy it at all.
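
To make the distinction concrete, here is a small sketch of the two regimes 
(the particular inputs are mine, and the printed precisions are approximate; 
this is how I read the documented semantics of $MinPrecision and 
$MaxPrecision, not a claim about exact output):

x = N[Pi, 20];   (* 20-digit input *)
y = N[E, 6];     (* 6-digit input *)

(* Significance arithmetic: the result's precision is set by the error 
   propagated from the inputs, dominated here by the less precise y. *)
Precision[x - y]
(* roughly 5 for these values *)

(* Fixed precision: the working precision is pinned, so the individual 
   input precisions no longer show up in the result. *)
Block[{$MinPrecision = 10, $MaxPrecision = 10}, Precision[x - y]]
(* 10 *)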


>> (DL)
>>>> Significance arithmetic might exacerbate this in different cases, 
>>>> but the issue is quite present with fixed precision.
>>
>> (RJF)
>>> Maybe an example here of another language or system where this issue 
>>> occurs would help?
>>
>> The above behavior will manifest in any program that uses fixed 
>> precision and supports bigfloats.
> If you change the precision the function is different.

Okay...
Let me turn around the question. In the present statement of the 
problem, when is Mathematica claiming
a==b, f(a)!=f(b), AND precision is the same for both?
I am sure such examples exist. Not so sure significance arithmetic will 
play a fundamental role, though.


>> I'm not sure if this next example is entirely in keeping with the 
>> question at hand, but here is a behavior I rather like.
>>
>> In[9]:= 2.00000000000000000 + .000000000000000001`20
>> Out[9]= 2.0000000000000000
>>
>> We ignore the second addend because it is outside the precision of the 
>> first. Does your notion of fixed point, with padding of zero bits to 
>> raise precision when applicable, also have this behavior? If not, then 
>> it should (and for reasons you well understand). If so, then it comes 
>> perilously close to violating some of the approximate arithmetic 
>> notions I believe you wish to preserve.
> If the numbers are of fixed precision, e.g. 20 decimals or some 
> corresponding number of bits, then the best that you can do and maintain 
> the fixed precision is to (in principle) add the two numbers exactly in 
> an infinite-precision register, and then round to the nearest number of 
> 20 decimals (etc).  In reality you do not need an infinite-precision 
> register, but one that is only slightly larger than 20 decimals [or bits] 
> (2 more.)
> 
> I am uncomfortable with the notational convention in Mathematica that 
> 2.0000000  is different from 2.0, so I'd rather see something like    
> v=With[{Precision}, Float[ 2.12345 + 10^(-20)]]   or whatever.
> Note that in this context, 2.12345 would be 2+12345/100000, and not 
> some version that has a likely error.

I think what you want is a different language...
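
That said, the add-exactly-then-round recipe itself is easy enough to sketch 
inside Mathematica. The helper addFixed and the choice of two guard digits 
are mine, purely to illustrate the idea, not a claim about how any existing 
system does it:

prec = 20;   (* the fixed working precision, in decimal digits *)

(* Add in a slightly wider register, then round back to prec digits. *)
addFixed[x_, y_] :=
  SetPrecision[SetPrecision[x, prec + 2] + SetPrecision[y, prec + 2], prec]

addFixed[SetPrecision[2, 18], 10^-18]
(* should print as 2.0000000000000000010: the small addend is retained, 
   unlike in Out[9] above *)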


>> (RJF)
>>> Someone who writes a paper for a peer reviewed journal (not a 
>>> magazine dedicated to a particular computer algebra system), can 
>>> expect any "system" oriented paper to be attacked by advocates of any 
>>> system not explicitly compared.

(DL)
>> Again, I'm not convinced this is so common, though it happens and 
>> sometimes needs to happen. Provided the critique has some real 
>> content, I see no issue. That is to say, it is not adequate to claim 
>> "You forgot to compare against <program xxx>". But it is fine to state 
>> "Here is what <program xxx> does, and one observes that it is much 
>> better than what is shown in this paper".

(RJF)
> Is it, though, a disqualification?  E.g. "Here is a technique that makes 
> Integrate in Mathematica run 10X faster on these problems" vs. "Here is 
> a different program that does (some of?) the same problems 20X faster 
> than Mathematica".

Here I am not sure what you are asking. If a referee points out that 
some system does a much better job than that presented in the paper, it 
might be grounds for rejection. Not necessarily, though, if the method 
presented is novel and perhaps in some respects useful.

In this case I doubt it matters in general whether the referee is using 
the same program or a different one from that used in the submission. In 
some specific cases it might be pertinent, either because the system in 
question is too slow for the task at hand (if the referee shows examples in 
another program), or because the authors did not use it well (if the referee 
shows examples in the same program). But in either scenario there might be 
mitigating circumstances to blunt such critique.


> I think I would not mind seeing how to make Mathematica 10X faster even 
> if it were still not as fast as some other system on some (probably 
> subset) task.

Same here, if that subset is not entirely specialized. What I mean: if 
it arises in some real applications, that's good. If it arises in the 
setting of an author scouring for a weak spot that is not terribly 
important, and improving on it, then such work might be publishable but 
probably does not rise to the level of JSC.

Daniel Lichtblau
Wolfram Research


