Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]

*To*: mathgroup at smc.vnet.net
*Subject*: [mg120344] Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]
*From*: Daniel Lichtblau <danl at wolfram.com>
*Date*: Tue, 19 Jul 2011 06:58:46 -0400 (EDT)

On 07/18/2011 05:56 PM, Richard Fateman wrote:
> Ok
>> What I had in mind was that it is possible, in the context of
>> significance arithmetic, to encapsulate the full error of fuzzy x
>> times fuzzy y. Consider it as (x+dx) * (y+dy) where the dx and dy
>> denote the fuzz. A first order approximation would catch the x*dy+y*dx
>> part of the error term, but miss the dx*dy part.
> Hm, doing the addition of x*y+x*dy+y*dx one incurs 2 rounding errors which
> are likely to be many magnitudes larger than dx*dy.
> e.g. assume that we have true values X and Y, but are doing arithmetic on
> (X*(1+eps1)) * (Y*(1+eps2)) where eps1,2 are like 2^(-52).
> The answer is X*Y*(1+eps1+eps2+eps1*eps2)*(1+eps3)... the eps1*eps2 term is
> like 2^(-104), but the rounding in adding 1+eps1+eps2 is like 2^(-53).
> For multiplication eps3 might very well be 0.

Some of the "purists" might prefer that the error be fully bounded. This desire could arise, say, in a mathematical proof that uses numerical validation. A bit of a rare need, I think, though not nonexistent. In any case, such situations are better handled with full-fledged interval arithmetic.

> ..snip...
>>> So the real contribution to interval arithmetic (or significance
>>> arithmetic) of Mathematica would be if it were able to reformulate some
>>> subset of expressions so that they were automatically computed faster
>>> and more accurately.
>> I agree this would be a useful contribution to interval arithmetic. I
>> do not think it would be so useful in significance arithmetic.
> OK, how much time do you allow to rearrange expressions? Might you have
> your compiler find single-use expressions?

I believe that, at present and for the foreseeable future, Mathematica will handle such expressions via its usual interpreted evaluator loop. Exceptions would be in dedicated C code, e.g. for computing Groebner bases (where rearrangement probably does not enter the picture).
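[Editorial aside: the magnitudes in the exchange above, the first-order error x*dy + y*dx versus the missed dx*dy term, can be checked with a short sketch in Python using exact rational arithmetic. This is only an illustration of the arithmetic under discussion, not of Mathematica's significance-arithmetic implementation; the sample values of x, y, and the fuzz are arbitrary.]

```python
from fractions import Fraction

# Exact (rational) arithmetic makes the error terms visible without
# introducing any machine rounding of its own.
x, y = Fraction(3), Fraction(7)
dx = Fraction(1, 2**52)   # fuzz on x, roughly one ulp at unit scale
dy = Fraction(1, 2**52)   # fuzz on y

full = (x + dx) * (y + dy) - x * y   # exact error of the fuzzy product
first_order = x * dy + y * dx        # first-order approximation
missed = full - first_order          # what first order leaves out

# The part missed by the first-order approximation is exactly dx*dy,
# of size ~2^(-104), far below the ~2^(-52)-scale first-order terms
# and below the ~2^(-53) rounding incurred in summing them in floats.
assert missed == dx * dy
print(float(first_order), float(missed))
```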
I would surmise that the typical situation you have in mind involves substituting a bignum for an argument into a symbolic expression, or maybe evaluating a DownValue or pure function with numeric arguments passed in. Again, the standard evaluation semantics will be in force. Indeed, I think that much is absolutely necessary just to maintain sanity in the language semantics (try not to hurt yourself laughing). For anything more clever one would invoke Together, Simplify, Experimental`OptimizeExpression, Collect, HornerForm, or the like on the symbolic expression prior to numeric substitution.

> This is an interesting issue, I think. How fast should a computer
> algebra system have to be in order to be useful for numerical
> computation too?
>
> Some people would say this is already a non-issue. If speed is an
> important criterion, don't use Mathematica for numerics. [frankly I
> don't know if this is reasonable. After all a CAS can call the world's
> fastest library for whatever. I've never done comparison timings.]
>
> But if we say the CAS is allowed to be slower because it is doing
> something clever, what clever thing might that be? Arbitrary precision,
> naw, that's around other places too. Intervals? naw.

That they exist elsewhere does not make them a bad thing to have in the Mathematica arsenal. In many respects I regard significance arithmetic as "clever" because of (or despite, if you prefer) its being brute force. This allows it both to be fast (reasonably so, for software arithmetic) and reliable within its documented scope of operation.

Daniel Lichtblau
Wolfram Research
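[Editorial aside: the kind of rearrangement mentioned above, e.g. what HornerForm does, can be sketched generically in Python. This is not Mathematica's implementation, just the standard Horner scheme: the same polynomial value from n fused multiply-adds instead of separately computed powers. The coefficient list and evaluation point are arbitrary examples.]

```python
# Coefficients of (x - 1)^6, highest degree first.
coeffs = [1, -6, 15, -20, 15, -6, 1]

def eval_naive(c, x):
    # Each term computes its own power of x: O(n^2) multiplications
    # in the naive power loop.
    n = len(c) - 1
    return sum(ci * x**(n - i) for i, ci in enumerate(c))

def eval_horner(c, x):
    # Nested multiply-add: exactly n multiplications and n additions.
    acc = 0
    for ci in c:
        acc = acc * x + ci
    return acc

# Both forms agree; with exact integers, (2 - 1)^6 == 1.
print(eval_naive(coeffs, 2), eval_horner(coeffs, 2))
```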