Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]

*To*: mathgroup at smc.vnet.net
*Subject*: [mg120342] Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]
*From*: Daniel Lichtblau <danl at wolfram.com>
*Date*: Tue, 19 Jul 2011 06:58:25 -0400 (EDT)

On 07/18/2011 04:33 PM, Richard Fateman wrote:

> On 7/18/2011 11:54 AM, Daniel Lichtblau wrote:
> ... snip ... a bunch of things I think we agree upon.
>
>> To expand on "properly rounded floats" I will mention that Mathematica's significance arithmetic, in certain settings, is as conservative as interval arithmetic. Specifically, this works out when the operations used are addition, multiplication, integer powering, or division with both operands approximate. Also required is that they have more significant digits than can be represented by the error recorder (that is to say, more than 16 or so digits when error is recorded as a 53-bit-mantissa float). The gist is that precision is altered downward in a way that will even account for second-order error (and the higher-order error terms that exist in computing x/y where x and y are both approximate bignums).
>
> I am unclear on this zeroth, first, second order error categorization.

What I had in mind was that it is possible, in the context of significance arithmetic, to encapsulate the full error of fuzzy x times fuzzy y. Consider it as (x+dx)*(y+dy), where dx and dy denote the fuzz. A first-order approximation would catch the x*dy + y*dx part of the error term but miss the dx*dy part. I went further to state that Mathematica in effect does just this. I may well have been incorrect in that assessment of the code; I have requested clarification in house.

> It seems to me that the critical problem in interval arithmetic, and one that is not addressed by Mathematica, is the dependence of operands. Consider the two expressions
>
>     f[x_] := (x - 1)*(x + 1)
>     g[x_] := x^2 - 1
>
> which are analytically the same function. Given the value v = Interval[{-1/2, 1/2}], f[v] returns an interval with width 2, while g[v] returns an interval with width 1/4. The function g is obviously a lot better than f.
>
> The same thing happens for v = 0.1`2: f[v] is -0.99`2.69... but g[v] is -0.99`3.69...
> So, according to Mathematica, g evaluates to a full decimal digit higher accuracy.
>
> A true devotee of interval arithmetic (or of significance arithmetic) would take the function body for f and rearrange it as g, and would also require for comparisons such concepts as possibly-equal, certainly-greater-than, etc. And would need to resolve meanings for empty intervals and other such things. Some of these may in fact be in Mathematica's interval arithmetic suite. They seem to be absent in the significance arithmetic world, though a transition from one to the other is possible.
>
> Note that Interval[0.1`2] returns the fairly nonsensical Interval[{0.1, 0.10}], but InputForm[%] shows Interval[{0.098046875`1.9914337561691884, 0.101953125`2.008400542026431}], which is maybe overkill. Interval[{0.098, 0.102}] might be OK.
>
> There are ways of reducing, semi-automatically, the number of repeat occurrences of operands in such expressions, thereby improving the interval widths. This (that is, computing "single use expressions") is the kind of thing that would, I think, be useful, and for which computer algebra systems are especially appropriate. That, in combination with some other tricks of interval "reliable" arithmetic, would be helpful (and I have written some programs for this, though not in Mathematica). So the real contribution of Mathematica to interval arithmetic (or significance arithmetic) would be if it were able to reformulate some subset of expressions so that they were automatically computed faster and more accurately.

I agree this would be a useful contribution to interval arithmetic. I do not think it would be so useful in significance arithmetic. My reasoning is that it is within the charter of interval arithmetic, at least as it lives in Mathematica, to be arbitrarily slow. And it must fully bound error, but can do whatever it likes to minimize that bound.
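The dependency problem quoted above is easy to reproduce outside Mathematica. The following is a minimal Python sketch (the `Interval` class and its `sq` method are my own illustrative names, not Mathematica's `Interval`): evaluating the algebraically identical f and g over the same interval gives widths 2 and 1/4, because f uses its argument in two places that naive interval arithmetic treats as independent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        other = _lift(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        other = _lift(other)
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # Naive product: the two operands are assumed independent.
        other = _lift(other)
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def sq(self):
        # x^2 knows both factors are the same variable, so the result
        # is never negative -- unlike x*x, which loses that dependence.
        if self.lo <= 0.0 <= self.hi:
            return Interval(0.0, max(self.lo**2, self.hi**2))
        lo, hi = sorted((self.lo**2, self.hi**2))
        return Interval(lo, hi)

    @property
    def width(self):
        return self.hi - self.lo

def _lift(x):
    return x if isinstance(x, Interval) else Interval(x, x)

v = Interval(-0.5, 0.5)
f = (v - 1) * (v + 1)   # two dependent occurrences of v
g = v.sq() - 1          # single occurrence of v

print(f, f.width)   # Interval(lo=-2.25, hi=-0.25) 2.0
print(g, g.width)   # Interval(lo=-1.0, hi=-0.75) 0.25
```

Rewriting f into the "single use expression" g is exactly the kind of reformulation discussed above: the tighter result comes purely from removing the repeated occurrence of the operand.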
Significance arithmetic, in contrast, needs to be reasonably fast. So no rearranging of expressions should be encouraged beyond what fixed-precision arithmetic might do. These are just my opinions, of course.

> (One could argue that boosting the software precision will get the answer accurately, especially if one ignores exponentially longer run-times.)

Not necessarily exponentially longer, since the basic arithmetic operations scale polynomially (as in superlinearly) with input size. Put differently, if things get exponentially slower then you may be doing your computations all wrong, or asking the wrong question, or one that is just too hard with current technology, or...

> For some discussion in the context of a (different) programming language that is not much like Mathematica, see
>
> http://developers.sun.com/sunstudio/products/archive/whitepapers/tech-interval-final.pdf
>
> There are other, more expensive techniques. E.g., if p(x) is a polynomial, what is p[Interval[{a,b}]]? Instead of just interval-izing the +, *, and ^, find the extrema of p between a and b. What is Sin[Interval[...]], etc.

These are, as you know and stated, not cheap in terms of speed. Which is why I regard them as outside the operating sphere of significance arithmetic in Mathematica. Which is not to say a user cannot rearrange terms, compute extrema, etc. Just that I'd not expect such things to be done internally by default.

>> It would not be hard to extend this to the taking of reciprocals and hence division of integer by float, e.g. x --> 1/x. But as best I can tell from looking at the code, Mathematica does not do this, so error tracking in that case is purely first order.
>
> Maybe I'm not following here, but computing an interval for 1/x does not seem difficult.

I was a bit unclear.
What I meant was that here again, as with fuzzy multiplication, one can use significance arithmetic methods relatively easily to account for the total error, and not just the first-order contribution that is typically used. (Long sentence, that. Maybe I'm still being unclear.)

Daniel Lichtblau
Wolfram Research
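[A numeric illustration of the first-order vs. total-error distinction discussed in this thread. This is a Python sketch of my own, not WRI code; the bound formulas just follow the (x+dx)*(y+dy) expansion and the exact reciprocal difference, and the function names are made up for the example.]

```python
def product_error(x, dx, y, dy):
    """Absolute error bounds for (x+dx)*(y+dy) with |fuzz| <= dx, dy."""
    first = abs(x) * dy + abs(y) * dx   # first-order bound: x*dy + y*dx
    full = first + dx * dy              # total bound adds the dx*dy term
    return first, full

def reciprocal_error(x, dx):
    """Absolute error bounds for 1/x when x carries fuzz dx, 0 < dx < x."""
    assert x > dx > 0
    # Exact worst case: 1/(x-dx) - 1/x = dx / (x*(x-dx)).
    # Expanding it gives dx/x^2 * (1 + dx/x + (dx/x)^2 + ...),
    # so the first-order estimate dx/x^2 misses all higher-order terms.
    first = dx / x**2
    full = dx / (x * (x - dx))
    return first, full

# With generous fuzz the higher-order part of the error is visible:
print(product_error(2.0, 0.1, 3.0, 0.2))   # first ~ 0.7,   full ~ 0.72
print(reciprocal_error(2.0, 0.5))          # first = 0.125, full ~ 0.167
```

The point of the thread, restated: tracking `full` instead of `first` is cheap for multiplication and reciprocals, which is why significance arithmetic can afford it there, while more aggressive error reduction (rearrangement, extrema-finding) is left to interval arithmetic.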