MathGroup Archive 2011


Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120343] Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]
  • From: Richard Fateman <fateman at eecs.berkeley.edu>
  • Date: Tue, 19 Jul 2011 06:58:35 -0400 (EDT)

Ok
> What I had in mind was that it is possible, in the context of 
> significance arithmetic, to encapsulate the full error of fuzzy x 
> times fuzzy y. Consider it as (x+dx) * (y+dy) where the dx and dy 
> denote the fuzz. A first order approximation would catch the x*dy+y*dx 
> part of the error term, but miss the dx*dy part.
Hm, in doing the addition x*y + x*dy + y*dx one incurs two rounding errors,
which are likely to be many orders of magnitude larger than dx*dy.
E.g. assume that we have true values X and Y, but are doing arithmetic on
(X*(1+eps1)) * (Y*(1+eps2)), where eps1 and eps2 are like 2^(-52).
The answer is X*Y*(1+eps1+eps2+eps1*eps2)*(1+eps3)...  The eps1*eps2 term is
like 2^(-104), but the rounding in adding 1+eps1+eps2 is like 2^(-53).
For multiplication, eps3 might very well be 0.
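
To put rough numbers on that (eps1 = eps2 = 2^(-52) is chosen purely for
illustration; this is just arithmetic, not a claim about how any particular
system rounds):

eps1 = 2^-52; eps2 = 2^-52;            (* illustrative relative fuzz on x and y *)
firstOrder   = eps1 + eps2;            (* ~ 2^-51, the x*dy + y*dx part *)
secondOrder  = eps1*eps2;              (* ~ 2^-104, the dx*dy part *)
roundingUnit = 2^-53;                  (* rounding incurred in forming 1 + eps1 + eps2 *)
N[secondOrder/roundingUnit]            (* about 4.4*10^-16: dx*dy is far below one rounding *)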

>
> I went further to state that Mathematica in effect does just this. I 
> may well have been incorrect in that assessment of the code. I have 
> requested clarification in house.
>
>
..snip...
>> So the real contribution to interval arithmetic (or significance
>> arithmetic) of Mathematica would be if it were able to reformulate some
>> subset of expressions so that they were automatically computed faster
>> and more accurately.
>
> I agree this would be a useful contribution to interval arithmetic. I 
> do not think it would be so useful in significance arithmetic.
OK, how much time do you allow to rearrange expressions? Might you have
your compiler find single-use expressions?
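
To make the single-use-expression point concrete, here is a tiny Interval
example; the rearrangement is done by hand, and that hand work is exactly
what one would want a compiler to find:

X = Interval[{0, 1}];
X (1 - X)            (* X appears twice: Interval[{0, 1}], wider than needed *)
1/4 - (X - 1/2)^2    (* same function, X used once: Interval[{0, 1/4}], the true range *)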
>
> My reasoning is that it is within the charter of interval arithmetic, 
> at least as it lives in Mathematica, to be arbitrarily slow. And it 
> must fully bound error, but can do whatever it likes to minimize that 
> bound.
>
> Significance arithmetic, in contrast, needs to be reasonably fast. So 
> no rearranging of expressions should be encouraged beyond what fixed 
> precision arithmetic might do.
>
> These are just my opinions, of course.
>
>
>> (One could argue that boosting the software precision will get the
>> answer accurately, especially if one ignores exponentially longer
>> run-times.)
>
> Not necessarily exponentially longer, since the basic arithmetic 
> operations scale polynomially (as in superlinearly) with input size. 
> Put differently, if things get exponentially slower then you may be 
> doing your computations all wrong, or asking the wrong question, or 
> one that is just too hard with current technology, or...
I'll go with superlinear, just to be agreeable :)
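
A rough and machine-dependent way to see that scaling (the digit counts
below are arbitrary): time some bignum multiplications at increasing
precision and watch the growth stay polynomial rather than exponential.

Table[
  With[{a = N[Pi, d], b = N[E, d]},
    {d, First[AbsoluteTiming[Do[a b, {1000}]]]}],   (* seconds for 1000 products *)
  {d, {1000, 10000, 100000}}]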
>
>> For some discussion in the context of a (different) programming language
>> that is not much like Mathematica, see
>>
>> http://developers.sun.com/sunstudio/products/archive/whitepapers/tech-interval-final.pdf 
>>
>>
>>
>> There are other, more expensive techniques. E.g. if p(x) is a
>> polynomial, what is p[Interval[{a,b}]] ?
>> Instead of just interval-alizing the + and * and ^, find the extrema of
>> p between a and b.
>> What is Sin[Interval[...]], etc.
>
> These are, as you know and stated, not cheap in terms of speed. Which 
> is why I regard them as outside the operating sphere of significance 
> arithmetic in Mathematica. Which is not to say a user cannot rearrange 
> terms, compute extrema, etc. Just that I'd not expect such things to 
> be done internally by default.
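
For what it is worth, both of the approaches quoted above are easy enough to
try by hand. The polynomial and interval below are just ones I picked, and
Minimize/Maximize stand in for "find the extrema of p between a and b":

p[x_] := x^3 - x;
a = -1; b = 1;

p[Interval[{a, b}]]                        (* naive interval-ized * and ^: Interval[{-2, 2}] *)

Interval[{First[Minimize[{p[x], a <= x <= b}, x]],
          First[Maximize[{p[x], a <= x <= b}, x]]}]
                                           (* range via extrema: about [-0.385, 0.385] *)

Sin[Interval[{0, 4}]]                      (* Sin applied to an exact Interval does locate the
                                              extrema: Interval[{Sin[4], 1}] *)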

This is an interesting issue, I think. How fast does a computer algebra 
system have to be in order to be useful for numerical computation too?

Some people would say this is already a non-issue: if speed is an 
important criterion, don't use Mathematica for numerics. [Frankly, I 
don't know if this is reasonable. After all, a CAS can call the world's 
fastest library for whatever. I've never done comparison timings.]

But if we say the CAS is allowed to be slower because it is doing 
something clever, what clever thing might that be? Arbitrary precision? 
Naw, that's around other places too. Intervals? Naw.

The efforts that the reliable computing people go through to beat down 
the loss in precision... or to write papers about how to beat down the 
loss in precision... are considerable. I don't know if Mathematica 
includes much of this stuff; I have only version 7 at my disposal, anyway.

For example, expand expressions in Taylor series.
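
A sketch of the sort of thing meant, done by hand with Interval objects.
The function and interval are made up, and this is only an illustration of
the first-order Taylor ("centered", or mean value) form f(c) + f'(X)*(X - c),
not a description of anything Mathematica does internally:

f[x_] := x/(1 + x^2);                  (* x appears twice: dependency problem *)
X = Interval[{9/10, 11/10}]; c = 1;    (* c is the midpoint of X *)

f[X]                       (* naive interval evaluation: roughly [0.41, 0.61] *)
f[c] + f'[X] (X - c)       (* centered form: noticeably tighter;
                              the true range is about [0.497, 0.500] *)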

RJF



