MathGroup Archive 2011


Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]

  • To: mathgroup at
  • Subject: [mg120369] Re: remarks on significance arithmetic implementation [Was: Re: Numerical accuracy/precision - this is a bug or a feature?]
  • From: DrMajorBob <btreat1 at>
  • Date: Wed, 20 Jul 2011 06:33:07 -0400 (EDT)

As before, I think much of this discussion is useless. But this example  
DOES give me pause:

> Consider the two expressions
> f[x_]:= (x-1)*(x+1)  and g[x_]:=(x^2-1)
> which are analytically the same function.
> given the value v=Interval[{-1/2,1/2}]    f[v] returns an interval with
> width 2.  g[v] returns an interval with width 1/4.

I don't see why we'd want significance tracking to act this way.


On Tue, 19 Jul 2011 05:58:03 -0500, Richard Fateman  
<fateman at> wrote:

> On 7/18/2011 11:54 AM, Daniel Lichtblau wrote:
>> [Caution: This may increase the entropy of the thread, and almost
>> certainly will increase the ennui.]
> Yep
> ... snip ... a bunch of things I think we agree upon.
>> To expand on "properly rounded floats" I will mention that
>> Mathematica's significance arithmetic, in certain settings, is as
>> conservative as interval arithmetic. Specifically this works out when
>> the operations used are addition, multiplication, integer powering, or
>> division with both operands approximate. Also required is that they
>> have more significant digits than can be represented by the error
>> recorder (that is to say, more than 16 or so digits when error is
>> recorded as a 53-bit-mantissa float). The gist is that precision is
>> altered downward in a way that will even account for second order
>> error (and the higher order error terms that exist in computing x/y
>> where x and y are both approximate bignums).
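The "second order error" Daniel mentions can be seen concretely in multiplication. For approximate x ± dx and y ± dy, the exact product interval has radius |x|·dy + |y|·dx + dx·dy; the cross term dx·dy is the second-order part. The sketch below (my illustration, not Mathematica's internals) shows that a purely first-order bound can fail to contain the true interval product, while the bound with the cross term matches it.

```python
# Sketch: radius of the product interval for (x ± dx) * (y ± dy).
# The dx*dy cross term is the "second order" error; dropping it gives a
# purely first-order bound that can fail to cover the true interval.

def product_radius(x, dx, y, dy, second_order=True):
    r = abs(x) * dy + abs(y) * dx
    return r + dx * dy if second_order else r

x, dx = 1.0, 0.5
y, dy = 1.0, 0.5

# True interval product of [0.5, 1.5] * [0.5, 1.5]:
lo, hi = 0.5 * 0.5, 1.5 * 1.5                   # [0.25, 2.25]

center = x * y
r2 = product_radius(x, dx, y, dy)               # 1.25: covers up to 2.25
r1 = product_radius(x, dx, y, dy, False)        # 1.00: covers only up to 2.0

print(center + r2 >= hi)  # True: second-order bound contains the product
print(center + r1 >= hi)  # False: first-order bound misses the endpoint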
> I am unclear on this zeroth, first, second  order error categorization.
> It seems to me that the critical problem, and one that is not addressed
> by Mathematica, in interval arithmetic, is addressing dependence of
> operands.  Consider the two expressions
> f[x_]:= (x-1)*(x+1)  and g[x_]:=(x^2-1)
> which are analytically the same function.
> given the value v=Interval[{-1/2,1/2}]    f[v] returns an interval with
> width 2.  g[v] returns an interval with width 1/4.
> The function g is obviously a lot better than f.
> Same thing happens for v=0.1`2.  f[v] is -0.99`2.69..  but g[v] is
> -0.99`3.69...  So according to Mathematica, g evaluates
> to a full decimal digit higher accuracy.
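Those two Precision values can be reproduced with a toy first-order error model (an assumption about how significance tracking behaves here, not Mathematica's actual code): carry a value together with an absolute error, propagate the error through each operation, and define Precision as -log10 of the relative error.

```python
import math

# Toy model: Precision ~ -log10(err / |value|).  For 0.1`2 the absolute
# error is 0.1 * 10^-2 = 0.001 (two significant decimal digits).

def precision(value, err):
    return -math.log10(err / abs(value))

x, dx = 0.1, 0.1 * 10**-2

# f[x] = (x-1)(x+1): the two factors' errors combine as if independent
err_f = abs(x + 1) * dx + abs(x - 1) * dx      # = 2*dx = 0.002
# g[x] = x^2 - 1: the single occurrence of x contributes only 2|x|dx
err_g = 2 * abs(x) * dx                        # = 0.0002

val = x * x - 1                                # -0.99 either way
print(round(precision(val, err_f), 2))   # 2.69 -> matches -0.99`2.69
print(round(precision(val, err_g), 2))   # 3.69 -> matches -0.99`3.69
```

The one-digit gap between the two results falls out directly from the factor-of-ten difference in propagated error.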
> A true devotee of interval arithmetic (or of significance arithmetic)
> would take the function body for f and rearrange it as g,
> and also would require for comparisons such concepts as possibly-equal,
> certainly-greater-than  etc.  And would need
> to resolve meanings for empty intervals and other such things.  Some of
> these may in fact be in Mathematica's interval
> arithmetic suite.  They seem to be absent in the significance arithmetic
> world, though a transition from one to the other is possible.
> Note that Interval[0.1`2] returns the fairly nonsensical
> Interval[{0.1,0.10}] but InputForm[%] shows
> Interval[{0.098046875`1.9914337561691884,
>    0.101953125`2.008400542026431}], maybe overkill.
> Interval[{0.098,0.102}]  might be OK.
> There are ways of reducing, semi-automatically, the number of repeated
> occurrences of operands in such expressions, thereby improving the
> interval widths.  This (that is, computing "single use expressions") is
> the kind of thing that would, I think, be useful and for which computer
> algebra systems are especially appropriate.  That, in combination with
> some other tricks of interval "reliable" arithmetic would be helpful
> (and for which I have written some programs, not in Mathematica
> though).  So the real contribution to interval arithmetic (or
> significance arithmetic) of Mathematica would be if it were able to
> reformulate some subset of expressions so that they were automatically
> computed faster and more accurately.
> (One could argue that boosting the software precision will get the
> answer accurately, especially if one ignores exponentially longer
> run-times.)
> For some discussion in the context of a (different) programming language
> that is not much like Mathematica, see
> There are other, more expensive techniques. E.g.  if p(x) is a
> polynomial, what is p[Interval[{a,b}]] ?
> Instead of just interval-alizing the + and * and ^,  find the extrema of
> p between a and b.
> What is Sin[Interval[...]], etc.
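One way to realize the "find the extrema of p between a and b" idea (a sketch of the standard calculus approach, not Mathematica's algorithm) is to evaluate the polynomial only at the endpoints and at the critical points of p inside [a, b], rather than interval-evaluating each operation. The bisection-on-a-grid root finder below is a simple stand-in for a proper polynomial root solver.

```python
# Range of a polynomial on [a, b] via endpoints plus critical points
# (roots of p'), instead of interval-evaluating + and * and ^.

def poly(coeffs, x):
    """Evaluate polynomial with coeffs [c0, c1, ...] = c0 + c1*x + ..."""
    return sum(c * x**i for i, c in enumerate(coeffs))

def deriv(coeffs):
    return [i * c for i, c in enumerate(coeffs)][1:]

def range_on(coeffs, a, b, grid=1000):
    """Min/max of the polynomial on [a, b]: check the endpoints and every
    critical point, the latter located by bisecting sign changes of p'."""
    d = deriv(coeffs)
    candidates = [a, b]
    xs = [a + (b - a) * i / grid for i in range(grid + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if poly(d, lo) * poly(d, hi) <= 0:       # p' changes sign: bisect
            for _ in range(60):
                mid = (lo + hi) / 2
                if poly(d, lo) * poly(d, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            candidates.append((lo + hi) / 2)
    vals = [poly(coeffs, x) for x in candidates]
    return min(vals), max(vals)

# p(x) = x^2 - 1 on [-1/2, 1/2]: true range is [-1, -3/4], width 1/4 --
# the tight answer, with no dependency blow-up from either f or g form.
print(range_on([-1, 0, 1], -0.5, 0.5))
```

This is more expensive than plain interval evaluation, which is exactly the trade-off the quoted paragraph is pointing at.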
>> It would not be hard to extend this to the taking of reciprocals and
>> hence division of integer by float e.g. x --> 1/x. But as best I can
>> tell from looking at the code, Mathematica does not do this, so error
>> tracking in that case is purely first order.
> Maybe I'm not following here, but computing an interval for 1/x does not
> seem difficult.
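Indeed, the interval reciprocal is a one-liner as long as the interval excludes zero (this is the standard textbook rule, not a claim about WRI's code): for 0 < a <= b or a <= b < 0, 1/[a, b] = [1/b, 1/a]; if zero lies inside, the image is unbounded and a single closed interval cannot represent it.

```python
# Reciprocal of an interval [a, b], standard rule:
# monotone decreasing on each sign-definite piece, so just swap endpoints.

def interval_reciprocal(a, b):
    if a <= 0 <= b:
        raise ValueError("0 in interval: 1/x is unbounded on [a, b]")
    return (1 / b, 1 / a)

print(interval_reciprocal(2.0, 4.0))    # (0.25, 0.5)
print(interval_reciprocal(-4.0, -2.0))  # (-0.5, -0.25)
```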
>>> [...]
>>> My point remains:
>>>    WRI could have a much more direct notion of number, and equality,
>>> that
>>> would make it easy to implement my choice of arithmetic or theirs or
>>> yours.
>>> They didn't.  The default is not completely reliable. People write to
>>> this newsgroup, periodically, saying Huh? what's going on? I found a
>>> bug!
>> This is, I am afraid, nonsense. Nothing would make it "easy" to
>> implement significance arithmetic across the spectrum of Mathematica's
>> numerical engine. To do so efficiently, reliably, maintainably, etc.
>> is, well, a significant task. Jerry Keiper could do it. Mark Sofroniou
>> could do it. I do not think I could do it, at least not very well.
>> And, as the saying goes, "I have me doubts about thee".
> OK, I did not mean it would be easy to reproduce what Jerry Keiper did,
> but after all, he did do it using ordinary arithmetic at its base.
> What I did mean is that programs to do arithmetic "the machine way"
> would run at machine speed (= easy); the programs to do software
> bigfloats without significance tracking would run faster (=easier) than
> including significance tracking.   And that if one wished to patch up
> such code with built-in tracking after the fact, by re-stating the
> precision after each operation, that would be slower (= not as easy)
> than avoiding it in the first place.
> So, easy= easy for the computer.
> I do think that people write to the newsgroup reporting what they think
> of as bugs, but are actually the deliberate consequences of the design
> of arithmetic.
>>> As for the experimental and computational notions of (im)precision, I
>>> think there is an issue of using the same words with different  
>>> meanings.
>>> Similar but unfortunately different. Precision of a floating point
>>> number F is simply the number of bits in the fraction.  If you impute
>>> some uncertainty to F, you can store that in another piece of data D in
>>> the computer. If you want to assert that F and D together represent a
>>> distribution of a certain kind  you can also compute with that.
>>> RJF
>>> [...]
>> Related to the above (I think), you stated earlier in this thread that
>> the differences in terminology usage can be troublesome. I agree that
>> the meaning of Mathematica's Precision and, to a lesser extent,
>> Accuracy, might cause initial confusion to one unfamiliar with the
>> software but well versed in Numerical Analysis. I would expect such
>> people to catch on after being bitten once or twice, and either get
>> used to the distinctions, or use different software.
> yes, but there are not too many well versed in numerical analysis.
>> This strikes me as similar to the situation wherein Append and Prepend
>> are O(n) rather than O(1) complexity operations, since they rewrite
>> their List arguments in full. A CS person new to Mathematica might
>> well assume List is akin to the CS notion of a (singly or doubly)
>> linked list. After learning otherwise they'd probably not be too
>> flustered, other than perhaps to mutter that the naming of "List" was
>> problematic. Even that would perhaps abate once they get accustomed to
>> the very general Mathematica notion of a List. After all, a linked
>> list is a modestly arcane structure to those outside CS, and most
>> general software users are not from inside CS, and most CS people will
>> understand that.
> Similar, yes.  Dissimilar in the sense that the List operations return
> expected results, just slower. But arithmetic returns (on occasion)
> unexpected results.
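The Append/linked-list contrast above can be made concrete (in Python, as a stand-in for both data structures): appending to a contiguous array copies all n elements, as Append does to a Mathematica List, whereas a linked-list cell prepends in O(1) by sharing its tail.

```python
# Array append (O(n)) vs linked-list prepend (O(1)), illustrated.

def append_array(arr, x):
    """O(n): builds a brand-new array, like Append on a Mathematica List."""
    return arr + (x,)          # tuple concatenation copies every element

def cons(x, tail):
    """O(1): a linked-list cell; the tail is shared, nothing is copied."""
    return (x, tail)

a = (1, 2, 3)
b = append_array(a, 4)         # new 4-element array; a is untouched
print(b)                       # (1, 2, 3, 4)

lst = cons(1, cons(2, cons(3, None)))
lst2 = cons(0, lst)            # constant time, shares lst as its tail
print(lst2[0], lst2[1] is lst) # 0 True
```

Either structure eventually returns the expected list, which is the "slower but correct" half of the comparison; the arithmetic half has no such guarantee.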

DrMajorBob at
