Re: Why Mathematica does not issue a warning when the calculations

  • To: mathgroup at smc.vnet.net
  • Subject: [mg117743] Re: Why Mathematica does not issue a warning when the calculations
  • From: Richard Fateman <fateman at eecs.berkeley.edu>
  • Date: Wed, 30 Mar 2011 04:17:52 -0500 (EST)

On 3/29/2011 3:58 PM, Daniel Lichtblau wrote:
> .. snip..

>
> (DL) Here is an example I trust you will recognize.
>
> f := x -> x + (evalf[40](6) + sqrt(evalf[40](2)))^20:
> Digits := 40:
>
> > val10 := f(evalf[10](-251942729309018542));
> val10 := 9018542.6877782033820121476772
>
> > val20 := f(evalf[20](-251942729309018542));
> val20 := .6877782033820121476772
>
> You will notice that the constants are all evaluated numerically in 
> advance, and all to 40 digits.
(RJF) Considering this other system (sorry, Steve C, will you bear with us?):

Let z := -251942729309018542,
    a := evalf[10](z),
    b := evalf[20](z).

For val10 you computed f(a), and for val20 you computed f(b), and
certainly the results are different.

But contrary to your contention, this particular system does NOT
believe that a and b are equal.  In particular, a - b is
0.9018542 x 10^7 (that is, 9018542), and the test

evalb(a = b)   returns false.

Your test

type(a=b,equation)   returns true.

But this does not test for equality. It is a kind of pattern match that 
confirms the type, namely an equation.
In particular, type(3=4,equation)  also returns true, because the TYPE 
is "equation".

Translated into Mathematica, this is approximately
MatchQ[foo == bar, HoldPattern[Equal[__]]].
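For instance (a quick check; note that Unevaluated is needed in the
second test, because 3 == 4 would otherwise evaluate to False before
the pattern is applied):

In[1]:= MatchQ[foo == bar, HoldPattern[Equal[__]]]
Out[1]= True

In[2]:= MatchQ[Unevaluated[3 == 4], HoldPattern[Equal[__]]]
Out[2]= True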


Some of the rest of this particular message, which draws further
inferences from this erroneous example, has simply been removed.

>
>
>> Aha, I finally see your point here.  You think that by using  With 
>> [{prec=10 }....]  you are doing what non-Mathematica programs do, 
>> silently.  That is, you believe they scan all the numbers in the 
>> expression and find the precision with which to evaluate the 
>> Sqrt[2].  Whether to use the lowest or highest precision, who knows.  
>> In this case there is only one precision and so it doesn't matter.  
>> What is essential here, and which I now see I missed, ( because it is 
>> inconsistent with how some other systems actually work), is your 
>> assumption that the precision of the input would be determined in 
>> this way.
>
> The example I show above indicates that one can set the precision of 
> that square root quite high, and still get a "bad" result. This is 
> truncation error (of the argument to f) followed by cancellation 
> error. How high precision is used for the other stuff does not really 
> matter.
No, actually the truncation did not happen as the function f was called,
but much earlier, when the two different values a and b were produced.
>
> In an example of this sort a program COULD, in principle, raise the 
> precision of the input argument. That will only work if that input is 
> exactly representable in the underlying arithmetic.
Here is a fundamental disagreement.  When a human types 0.1 as a number
to be represented as a hardware (binary) float, it is converted to
binary and has a certain bit representation, B.  That is now the number
that the computer has in its electronic memory.  B is close to, but not
exactly, 0.1 decimal.  It is closer to 0.10000000000000000555, though
rounded back to "machine precision" it prints as 0.1.
People of a certain sophistication will realize that not all decimal
numbers have exact binary representations, just as the fraction 1/3 does
not have an exact decimal representation.  We must live with some
version of this radix stuff and hope we can explain it when necessary.

According to Mathematica, B, in binary, can be written out as something like
0.0001100110011001100110011001100110011001100110011001101... base 2.
If we add binary zeros to the right of B, we do not change the value.  B 
is not muddied by adding zeros.  The number in the computer is just as 
accurate, exactly as accurate, as B, as an approximation to 0.1, even 
with more zeros.  Now you are entitled to ask for a different 
approximation to 1/10, in binary, in the computer, say to 100 bits, and 
it will be a different number B' that does not correspond to adding 
zeros to B. That's because it is a different number.  In a reasonable 
system you could even compute the difference between B and B'.  While it 
is possible to do bigfloat decimal arithmetic, the gain is mainly that
you don't have to explain radix arithmetic to the user; the loss is
mostly in speed.  (Usually such systems use not individual decimal
digits but some power-of-ten radix.)
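To make B concrete, the same idiom Daniel uses below for 1.3 works for
0.1 (a quick check, assuming IEEE double machine precision):

In[1]:= FromDigits[RealDigits[0.1, 2], 2]
Out[1]= 3602879701896397/36028797018963968

In[2]:= N[%, 20]
Out[2]= 0.10000000000000000555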

> For decimals, in programs that use base 10 behind the scenes, this can 
> be done. If the input is decimal and the representation is binary and 
> the decimal is not exactly representable (e.g. 1.3 in binary) then 
> you'd be in trouble. 
Only with people who don't understand that 1.3 is not exactly
representable in binary; if they really want that value exactly, they
can use 13/10.
....
> Though it is necessary if we want Mathematica to emulate fixed 
> precision arithmetic in any sensible way. I do not regard manufacture 
> of digits or bits, that is, raising precision artificially, as 
> "sensible", unless one has a priori a justification for doing so.
I, on the other hand, see no reason to assume that some number in the
computer is not a perfectly valid number, exactly what the user wants
to compute with, accurate to all its digits, and extendable by zeros if
need be.  Your view suggests that the system should operate this
way... (Please excuse the anthropomorphism.)

<computer speaks>  I see that you produced a number B in binary in my
memory, perhaps by typing in something or computing something, that (if
we convert it to a rational fraction) is exactly Q =
1801439850948199/18014398509481989.  I refuse to believe that you mean
that number.  In fact it is one of an infinite set of numbers whose use
I deny you.  Why do I deny you the full use of this number?  On the
grounds that it differs from 1/10 by a small epsilon =
1/180143985094819890, and so you must mean a fuzzy 1/10.  I therefore
will continue the computation only by inserting an error term into all
computations having to do with Q; call it Q +- epsilon.  And in this way
I will make your life easier.  In particular, if you subtract Q from
1/10, I will tell you the result is zero.
<end of computer speaks>


Executive summary of the paragraph above:
  You couldn't possibly mean THAT number, which is so close to 1/10.  So 
I will refuse to allow you to do arithmetic with it.
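In Mathematica one can watch both behaviors side by side (a sketch; the
exact epsilon for an IEEE double 0.1 differs slightly from the
illustrative Q above):

In[1]:= 0.1 - 1/10
Out[1]= 0.

In[2]:= SetPrecision[0.1, Infinity] - 1/10
Out[2]= 1/180143985094819840

The first result is the zero the computer-speaks paragraph complains
about; the second recovers the exact difference between B and 1/10.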

.. snip..
> In practice, and not just in Mathematica, a coercion model that forces 
> arguments to the lowest precision in the expression is a sensible way 
> to do computations (there are exceptions...). 
Can you cite a reference for this?  This is the essence of significance
arithmetic, but it is really not prevalent.
> This is because it means we do not pollute the arithmetic with 
> manufactured digits (which might arise if we do differently and raise 
> precisions).
Most other arithmetic systems attempt to retain as much information as
possible.  You claim that extra precision is "manufactured digits",
though you (perhaps) allow extra digits that happen to correspond to
decimal digits.  E.g. 0.25 decimal is 0.01 base 2, exactly, so
0.010000000000000 base 2 is exactly 0.25 decimal.  Are those digits
better?
I claim that these binary zero digits can be useful.  Sometimes they
allow numbers to cancel WHEN THEY SHOULD, and leaving them out causes
more problems.
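A small illustration of the two cases (a sketch; SetPrecision pads the
underlying binary representation with zero bits):

In[1]:= SetPrecision[0.25, 30]
Out[1]= 0.250000000000000000000000000000

In[2]:= SetPrecision[0.1, 30]
Out[2]= 0.100000000000000005551115123126

For 0.25 the padded binary zeros are decimal zeros too; for 0.1 they
expose the underlying binary value.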

>
>
>> For example is there  a global "precision" (this is common)  that 
>> governs the calculations?  Or is the precision determined by the 
>> target location as for an assignment  z=f[x],  look at the precision 
>> of the "container" for z.   This is the method used by MPFR, which I 
>> think Mathematica uses internally.
>
> Mathematica uses GMP for integer arithmetic. MPIR is a fork of GMP, 
> not used in Mathematica. Neither does Mathematica use MPFR for 
> bigfloat arithmetic.
My error.  I read MPFR when I should have read MPIR.
>
> Any or all of this could change in the future. But to date neither 
> MPIR nor MPFR have been used under the hood in Mathematica.
Nevertheless, a reasonable design for a system is one in which the
precision of each operation is judged by the precision of the target.
Presumably the system has to do some work to determine how much
precision is needed of the operands -- extended if necessary by zeros --
to achieve the desired target precision.
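One could sketch such target-driven semantics in Mathematica itself with
a little helper (targetN is a hypothetical name of mine, not a
built-in): treat every stored number as exact, padding with binary
zeros, then evaluate to the precision of the target:

In[1]:= SetAttributes[targetN, HoldFirst];
        targetN[expr_, prec_] :=
          N[SetPrecision[Unevaluated[expr], Infinity], prec]

In[2]:= targetN[0.1, 25]
Out[2]= 0.1000000000000000055511151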
... snip...

Without dwelling further on the next example, let me just note that

evalb(evalf[10](21/10) = evalf[20](21000000000001/10000000000001))   
returns false.

This system apparently coerces to the highest precision to do the
comparison (which is what I expected), contrary to what you thought it
was doing.
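For comparison, Mathematica's Equal -- as I understand its semantics, and
as you describe below -- compares at the lower of the two precisions, so
the analogous test goes the other way:

In[1]:= N[21/10, 10] == N[21000000000001/10000000000001, 20]
Out[1]= True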
...
> To spell out my thought in more detail, it will do bad things if the 
> input was decimal and not exactly representable in binary, e.g. 1.3. 
> Likewise if the input was, say, pi and had been numerically 
> approximated. This would (often) be bad because extending with zeroes 
> would not recover the value in question.

Here's an approximation to Pi:  3.1 decimal.
There is absolutely nothing wrong with taking that number (31/10) and
writing it out, in decimal or binary, to any number of places.
It does not become more accurate as an approximation for Pi, just more
precise as an approximation for 31/10.
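A quick check of that distinction:

In[1]:= N[31/10, 30]
Out[1]= 3.10000000000000000000000000000

In[2]:= N[Pi, 30]
Out[2]= 3.14159265358979323846264338328

More digits of 31/10 are exact, but they bring it no closer to Pi.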

It may be possible to make a case that because a number was typed in
using decimal notation, or is declared to be decimal, it has some
special properties.  That would be helpful, as I said previously, for
bankers.  It is probably not such a great idea as a default for
scientific computing, where such operations as multiplying by 2 or 1/2
can be done fast in binary, and rounding is simpler.

>
> Let me show what I mean in regard to 1.3. If we take the bit pattern 
> and recover the "exact" value, we get:
>
> In[10]:= approx13 = FromDigits[RealDigits[1.3, 2], 2]
> Out[10]= 5854679515581645/4503599627370496

OK, this, above, is the number that the computer knows about.
>
> In[11]:= N[approx13, 22]
> Out[11]= 1.300000000000000044409
>
> This is what would be the result, were we to add zero bits to the 
> (truncated) binary representation of 1.3.
Sure.  That's the closest binary number to the one you typed in.
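Equivalently (a quick check), SetPrecision pads the truncated binary
representation with zero bits and lands on the same value:

In[1]:= SetPrecision[1.3, 22]
Out[1]= 1.300000000000000044409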

>
> The only way to avoid this would be to work entirely in binary, 
> including input/output. That way you only have exactly representable 
> values. But again, numerical approximations to things like pi would 
> still not behave very well.
Disagree.
>
>
>>>   In effect you'd be putting garbage right where
>>> you are trying to avoid it.
>
>> Nope.  Consider representing the integer 10 as a float.  Do you mind 
>> adding decimal zeros or binary zeros to get 10.0000000000000000    ?  
>> Are you adding garbage?  I think not.
>
> And I agree, for this type of example. But not all numbers will have 
> exact representations in the underlying base. For those, putting zeros 
> at the end to raise precision would just (as in example above) be 
> putting junk into the number beyond its original precision.
Nope; adding zeros at the end does not change the value that is already
in the machine.
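In Mathematica terms (a quick check): 10. is exactly representable, so
padding it with zero bits changes nothing:

In[1]:= SetPrecision[10., 30] == 10
Out[1]= True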
>
>
>> Say that you have the single-float 10.0.  Do you think that it 
>> represents the same number as the integer 10?  Or do you think it 
>> represents 10.000000xyzw ..  where xyzw  could be any digits at all?  
>> Now if that 10.0 were, say, a measurement that you knew was 
>> approximate, ok.   But a computationally more useful model (in the 
>> opinion of most people) is that the single-float 10.0 is a compact 
>> representation of exactly the real number 10,  that happens to have 
>> an exact representation in decimal or binary floating-point format.
>> This model allows you to do careful analysis of errors etc.  You can 
>> even simulate the other model with xyzw fuzz, using this.
>
> Actually I take this approach for computing numeric Groebner bases. 
> But the rationale is quite different. The pretense, as it were, is 
> that I am solving a nearby problem. For most purposes, where such 
> bases are used, that is fine.
>
> I'd be a bit surprised if your model is really the preferred one in 
> either the numerical analysis or scientific computation communities. 
> But I've not done any sort of poll.
I think the dearth of papers on significance arithmetic is some
indicator of its popularity.
>
>
>> It is much harder to simulate the exact representation idea if the 
>> arithmetic you are presented with has the xyzw fuzz already, and you 
>> have to do gyrations to "get the fuzz out".
>
> Yes, maybe, I suppose...but the fuzz (or badness) is already there if 
> your input was not exactly representable. This is true for your model 
> as well as any other.
>
>
>>> I'm not sure that adding binary (as opposed to decimal) zeros
>>> would be any better an idea, in this setting.
>
>> It is the same for integers.  For fractions it is also about the 
>> same. Unless you are wedded to the necessity of exact representation 
>> of certain decimal quantities such as 0.01.  This has an advantage 
>> for bankers, and especially in inflated currencies or long-running 
>> mortgage payment calculations.
>
> I'm not. I simply point out that adding bits to 0.01 will only make it 
> look more precise. It won't be (in the standard nomenclature) more 
> accurate.
>
>
>>>> now define the function f[x_, newprec_] := N[x + (6 + Sqrt[2])^20, newprec]
>>>>
>>>> f[b,n] == f[exact,n] FOR ALL n, because b==exact in this system.
>
>>> This will in general not be achievable, if n is larger than the
>>> precision of b. At least not with semantics that does comparison
>>> by raising rather than lowering precision.
>
>> In this case  b and exact represent the same integer, b in a 
>> floating-point format,  exact  is in an integer format.   This should 
>> assure that f computes the same value.
>
> Seems workable, if b is in some sense known exactly (e.g. is a numeric 
> approximation of an integer). This sort of input is seen in SOME real 
> problems. It is by no means what we have in ALL real problems.
>
>
>>   If someone raises a question about computer algebra systems in the 
>> context of Mathematica, and there seems to be a possibility that I 
>> can contribute to educating the poster and others reading the posting 
>> about that issue, then I sometimes chime in.   Sometimes the 
>> questions are "why does Mathematica do X"  and the answers from 
>> others look like "Because it does X"   or "It has to do X"  or 
>> "Everyone knows that X"  or "you should ask Y".   But my reaction 
>> might be more along the lines of "Mathematica was designed/programmed 
>> to do X, though it is not inevitable that a computer system should do 
>> this. I can understand why you might expect, from mathematical 
>> knowledge and intuition, that Mathematica should do something else, 
>> and it would be nice if it fulfilled your expectations, and here's 
>> how it might be changed to do that."
>> Or something along those lines.
>
> Ah. Though I wonder if anyone else is still awake?
>
My fans?  Your fans?  People who use Google Groups to search for error
analysis, significance arithmetic, your name, my name...
>
>>> I'd be more charitably disposed, had you simply recognized from the
>>> beginning that my example met your request.
>
>> Ah, but perhaps you now understand my point -- you thought that your 
>> programs, using With [ {prec..}..]  correctly modeled the behavior of 
>> other programs.  I, on the other hand assumed quite the opposite, 
>> that you were inserting two different constants in a program, and 
>> then you contended that they were the same!   I probably had as much 
>> concern for your sanity as you for mine.  How could two textually 
>> different programs with different numbers in them be the same?   I do 
>> see (now) what you were driving at.
>
> I leave open the issue of how a variant of my fixed precision code, 
> properly ported to an arbitrary different program, might or might not 
> conform to your requested counterexample. Clearly at least one such 
> program behaves as I expected.
Um, not really.
> I rather expect a program with the semantics you have in mind would 
> fail to provide a counterexample by flunking the a==b criterion. Well, 
> it depends on how equality is tested, and (slightly independently) on 
> whether precision is raised or lowered.
>
> The mention of bitwise "sameness", as opposed to some program's idea 
> of "equal", jarred my memory. Maybe you know this, but for the sake of 
> the remaining third reader (if still breathing) I will state it. One 
> can have:
>
> (1) Identical machine double input (down to the last bit)
> (2) Identical hardware
> (3) An identical arithmetic computation
> yet still obtain
> (4) Different results.
>
> What's more, this is all quite within the realm of IEEE compliance.
I would require not only identical hardware and computation, but
identical status settings for traps etc., and also no hardware errors
(such as might be caused by cosmic rays, radioactive decay of elements
in the computer chips, mechanical breakage, etc.).
I am not aware of a circumstance in which different results occur.
Sounds like one of those riddles.
>
> I don't know whether or how this fits into the numerics you would like 
> to see. Just wanted to point out that consistency might be deceptively 
> hard to attain.
Judiciously choosing among conflicting requirements is sometimes an
important element of design.

RJF



