MathGroup Archive 2010

Re: Why?

  • To: mathgroup at
  • Subject: [mg110606] Re: Why?
  • From: danl at
  • Date: Mon, 28 Jun 2010 02:27:47 -0400 (EDT)

> [...] The IEEE-754
> binary standard embodies a good deal of the well-tested wisdom of
> numerical analysis from the beginning of serious digital computing
> through the next 40 years or so.  There were debates about a few items
> that are peripheral to these decisions, such as dealing with
> over/underflow, error handling, traps, signed zeros.

To what extent is this standard really in conflict with significance
arithmetic? As best I can tell, for many purposes significance arithmetic
sits on top of an underlying "basic" arithmetic (one that does not track
precision). That part comes reasonably close to IEEE, as best I understand
(more on this below).
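
As a rough illustration of that layering (a conceptual sketch in Python, not Mathematica's actual implementation), significance arithmetic can be modeled as ordinary floating-point values carrying an error radius that the operations propagate:

```python
import math

class SigNum:
    """Toy significance-arithmetic number: a value plus an error radius.

    A conceptual sketch only; Mathematica's internal scheme differs in
    detail, but the layering is the point: ordinary "basic" arithmetic
    on the values, with error propagation bolted on top.
    """
    def __init__(self, value, error):
        self.value = value
        self.error = abs(error)

    def precision(self):
        # Decimal digits believed correct, roughly -log10(relative error).
        if self.error == 0:
            return math.inf
        return -math.log10(self.error / abs(self.value))

    def __add__(self, other):
        # First-order propagation: error radii add.
        return SigNum(self.value + other.value, self.error + other.error)

    def __mul__(self, other):
        # d(xy) ~ |y| dx + |x| dy
        return SigNum(self.value * other.value,
                      abs(other.value) * self.error
                      + abs(self.value) * other.error)

x = SigNum(1.0, 1e-10)          # roughly 10 digits of precision
y = SigNum(3.0, 1e-10)
z = (x + y) * y                 # value 12.0, still about 10 digits
```

Strip out the error field and what remains is the underlying fixed-precision arithmetic.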

> Computing can model mathematics, mathematics can model reality. At
> least that is the commonly accepted reason we still run computer
> programs in applications.  Good numerical computing tools allow one to
> build specific applications.  For example, one would hope that the
> computing tools would allow an efficient implementation of (say)
> interval arithmetic. This is fairly easy with IEEE-754 arithmetic,
> but much much harder on earlier hardware designs.

I'm not convinced interval arithmetic is ever easy. But I can see that
having IEEE to handle directed rounding is certainly helpful.
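
On the directed-rounding point: even without access to IEEE rounding modes one can build (looser) outward-rounded intervals by nudging endpoints one ulp, as in this Python sketch, where math.nextafter stands in for true round-toward-minus/plus-infinity:

```python
import math

def interval_add(a, b):
    # Outward-rounded sum of intervals a = (lo, hi), b = (lo, hi).
    # Python exposes no IEEE rounding-mode control, so we widen each
    # endpoint by one ulp with nextafter; hardware directed rounding
    # would be both tighter and cheaper.
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

def interval_mul(a, b):
    # Min/max over all endpoint products, then round outward.
    products = [a[i] * b[j] for i in (0, 1) for j in (0, 1)]
    return (math.nextafter(min(products), -math.inf),
            math.nextafter(max(products), math.inf))

x = (1.0, 2.0)
y = (3.0, 4.0)
s = interval_add(x, y)   # encloses [4, 6]
p = interval_mul(x, y)   # encloses [3, 8]
```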

>   The basic tool that Mathematica (and many other systems) provides that
> might be considered a major extension in a particular direction, is the
> arbitrary precision software.
>    Mathematica has a different take on this though, trying to maintain
> an indication of precision.  None of the other libraries or systems that
> do arbitrary precision arithmetic have adopted this, so if it is such a
> good idea, oddly no one else has taken the effort to mimic it.  And it
> is not hard to mimic, so if anyone were to care, it could be done
> easily. Apparently people do not want some heuristically determined
> "fuzz" to be mysteriously added to their arithmetic.

I'm not sure it is all that easy to code the underlying precision
tracking. But yes, it is certainly not a huge multi-person, multi-year
undertaking. I'd guess most commercial vendors and freeware implementors
of extended precision arithmetic do not see it as worth the investment. My
take is it makes a considerable amount of hybrid symbolic-numeric
technology more accessible to the implementor. But I cannot say how
important this might be in the grand scheme of things.

> I do not know how much of the code for internal arithmetic for
> evaluation of functions in Mathematica is devoted to bypassing these
> arithmetic features, but based on some examples provided in this
> newsgroup, I suspect this extra code, which is an attempt to mimic
> arithmetic without the fuzz inserted by Mathematica, becomes
> a substantial computational burden, and an intellectual burden on the
> programmer to undo the significance arithmetic fuzz.

In our internal code it is relatively straightforward to bypass.
Effectively there are "off" and "on" switches of one line each. I do not
know how much they get used but imagine it is frequent in some parts of
the numerics code. For disabling in Mathematica code we have the oft-cited
$MinPrecision and $MaxPrecision settings. I believe this is slated to
become simpler in a future release (not sure if it is the next release).
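
For comparison, pinning $MinPrecision and $MaxPrecision to a common value yields plain fixed-precision bignum arithmetic, which is what most extended-precision libraries give you by default. Python's decimal module makes a convenient stand-in for the fixed-precision side of the comparison (the analogy is mine, not an exact correspondence):

```python
from decimal import Decimal, getcontext

# A fixed-precision context: every result carries exactly 50 significant
# digits, with no tracking of how many of them are actually correct.
# Loosely analogous to computing in Mathematica with
# $MinPrecision == $MaxPrecision == 50.
getcontext().prec = 50

x = Decimal(1) / Decimal(3)
y = x * 3      # 0.999...9 to 50 digits; nothing warns you it is not 1
```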

>> Where I fault Mathematica's design here is that "Real" wraps two
>> rather different kinds of objects: fixed-precision machine numbers,
>> and Mathematica's approximate reals. Both are useful, but
>> understanding and controlling which kind you're using is a bit subtle.
>> "Complex" is even more troublesome.
> I know of 4 systems which provide arbitrary precision numbers that mimic
> the IEEE-754 arithmetic but with longer fraction and exponent fields.
> Perhaps that would provide the unity of design concept that you would
> prefer. One just increases by a factor of 4 (quad double); the others
> are arbitrary precision.

Mathematica with precision tracking disabled is fairly close to IEEE. I
think what is missing is directed rounding modes. There may be other
modest digressions, such as rounding the number of bits upward to the
nearest word size (that is to say, mantissas come in chunks of 32 or 64
bits).
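
The word-size chunking is easy to make concrete: if mantissas are stored in whole machine words (as GMP-style libraries do), a requested precision rounds up in bits. A small sketch of the accounting, where the 64-bit word size is an assumption:

```python
import math

def allocated_bits(requested_digits, word_bits=64):
    # Bits of mantissa actually carried when storage is in whole
    # machine words (GMP-style layout; word_bits=64 is an assumption).
    requested_bits = math.ceil(requested_digits * math.log2(10))
    words = -(-requested_bits // word_bits)   # ceiling division
    return words * word_bits

# 20 requested digits need 67 bits, so two 64-bit words get allocated.
```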

For many purposes, the fixed precision arithmetic you advocate is just
fine. Most numerical libraries, even in extended precision, will do things
similar to what one gets from Mathematica, with (usually) well studied
algorithms under the hood that produce results guaranteed up to, or at
least almost to, the last bit. This is good for quadrature/summation,
numerical optimization and root-finding, most numerical linear algebra,
and numerical evaluation of elementary and special functions.

Where it falls short is in a symbolic programming setting, where one
might really need the precision tracking. My take is it is easier to
have that tracking, and disable it when necessary, than to not have it
when you need it. Where people with only fixed extended precision at
their disposal try to emulate significance arithmetic in the literature,
I generally see only smallish examples and no indication that the
emulation is at all effective on the sort of problems one would really
encounter in practice.
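
One such emulation seen in that literature is simply re-running a computation at increasing fixed precision until the leading digits stabilize. A rough Python/decimal sketch of the idea (the convergence test here is ad hoc, which is rather the point):

```python
from decimal import Decimal, getcontext

def stabilized(compute, start_prec=20, max_prec=320):
    # Re-run `compute` at doubling precision until two consecutive
    # results agree to roughly the lower precision; a crude stand-in
    # for automatic precision tracking. Illustrative only.
    prec = start_prec
    getcontext().prec = prec
    prev = compute()
    while prec < max_prec:
        prec *= 2
        getcontext().prec = prec
        cur = compute()
        if abs(cur - prev) <= abs(cur) * Decimal(10) ** (2 - prec // 2):
            return cur
        prev = cur
    return prev

r = stabilized(lambda: Decimal(2).sqrt())   # agrees with sqrt(2) to ~20 digits
```

This works tolerably on well-behaved expressions and silently wastes or misjudges precision on ill-conditioned ones, with no per-operation tracking to say which.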

Daniel Lichtblau
Wolfram Research
