MathGroup Archive 2001


Re: Re: Zero does not Equal Zero is a feature

  • To: mathgroup at smc.vnet.net
  • Subject: [mg31540] Re: Re: Zero does not Equal Zero is a feature
  • From: Richard Fateman <fateman at cs.berkeley.edu>
  • Date: Fri, 9 Nov 2001 06:13:55 -0500 (EST)
  • Approved: Steven M. Christensen <steve@smc.vnet.net>, Moderator
  • Organization: University of California, Berkeley
  • References: <9rdg39$7ob$1@smc.vnet.net> <200111071029.FAA26256@smc.vnet.net> <3BE9BECE.C10B9BE2@wolfram.com>
  • Sender: owner-wri-mathgroup at wolfram.com


Daniel Lichtblau wrote:

> >  www.cs.berkeley.edu/~fateman
> 
> I imagine that would be 1992. 
Yep. J. Symbolic Computation, vol. 13, no. 5, May 1992.  Also online.

> 
> Issues regarding the models of numerical computation used in Mathematica
> have been raised several times over the years, in particular in the news
> group sci.math.symbolic. In virtually all cases we have spelled out the
> hows and whys in responses. Rather than repeat all that I will provide
> URLs I located from a google search.
Thanks. 
<snip>

I think that the conversations over the years have indeed elucidated
the rationale behind the features. In my response to this (new)
thread, my intention was to point out that the results, which
some people seemed to think were bizarre, were the consequences
of features, and would not likely be remedied.

> 
> I will address a handful of specific issues brought into question in the
> note above. First is perhaps a matter of opinion and also primarily of
> historic rather than technical interest.
> 
> "These difficulties have been in Mathematica since its first design, and
> although there have been a number of redesigns of the arithmetic...."
> 
> I am not aware of anything that might be called substantial redesign of
> Mathematica's numerical computation in the ten years I have been
> employed by Wolfram Research.

Well, I am not a historian of Mathematica, but I recall that
there were substantial changes to Interval and/or RealInterval which
I assumed required substantial redesign.  The meaning of
N[expression, d] changed from "do the arithmetic using precision d"
to "give me the answer accurate to d digits".  That is a major
change, in my view.
The meanings of Accuracy and Precision seem to have been altered
so that these became floating-point numbers.  Since actual
numerical analysis texts do not use these words, but speak instead
of relative and absolute error, the terminology is, in my view,
confusing to someone trained in numerical analysis.
I think $MaxPrecision and $MinPrecision were not in early versions
of Mathematica; I also suspect that SetAccuracy and SetPrecision
have changed, but I don't recall how.
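For what it is worth, the two vocabularies do line up: Mathematica's
Accuracy is roughly -log10 of the absolute error, its Precision roughly
-log10 of the relative error, so the two differ by log10|x|.  A minimal
Python sketch of that correspondence (the helper names are mine, not
Mathematica's):

```python
import math

def accuracy(abs_err):
    """Mathematica-style Accuracy: roughly -log10 of the absolute
    error, i.e. digits known to the right of the decimal point."""
    return -math.log10(abs_err)

def precision(x, abs_err):
    """Mathematica-style Precision: roughly -log10 of the relative
    error, i.e. total significant digits known."""
    return -math.log10(abs_err / abs(x))

x, err = 1234.5, 1e-6
print(accuracy(err))        # about 6: six digits after the decimal point
print(precision(x, err))    # about 9.09: roughly nine significant digits

# The two notions differ by log10|x|: Precision = Accuracy + log10|x|
assert abs(precision(x, err) - (accuracy(err) + math.log10(abs(x)))) < 1e-9
```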

I believe that the code for arbitrary-precision numerical
evaluation of nearly anything in Mathematica has grown
throughout this period.
> 
> The second paragraph above makes a few claims. "Careful numerical
> analysis is hindered by this design, which was intended to be used by
> non-careful sloppy people."
> 
> Numerical evaluation may be used by careless and careful people alike.
> Significance arithmetic (the arithmetic model in question) is useful,
> and used, and serves a purpose. It can be misused, as can fixed
> precision arithmetic or virtually anything else. But it works well for
> many types of computation. As mentioned in some of the above-noted
> threads, I use it to find approximate Groebner bases, used in turn by
> NSolve. I quite assure you that usage is not "sloppy".

I think the intention of a floating-point design should be
(a) minimum number of surprises
(b) ease of modeling / understanding
(c) clear definition
(d) high accuracy, wide range
(e) speed
(f) control (e.g. of exceptions)
(g) access to libraries
... at least, off the top of my head.

Mathematica's approach leads to
(a) a difficult-to-explain transition from machine floats to
software floats
(b) a difficult-to-explain model of error transmission/control
(c) difficult implications for the programming language (e.g.
 what do == and === mean?)
(d) duplication of interval computation as float computation
(e) slow speed

The major advantage is that a programmer may occasionally
find that a naively formulated, erroneous computation is
caught, in that the answer comes back with "no significant
digits".
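That catch can be seen in a toy model of significance arithmetic: carry
an absolute-error bound alongside each value and propagate it to first
order.  The class below is my own illustrative sketch, not Mathematica's
actual mechanism; cancellation in a subtraction drives the remaining
precision to zero:

```python
import math

class Sig:
    """Toy significance arithmetic: a value plus a first-order
    absolute-error bound, propagated through + and -."""
    def __init__(self, value, abs_err):
        self.value = value
        self.abs_err = abs_err

    def __add__(self, other):
        return Sig(self.value + other.value, self.abs_err + other.abs_err)

    def __sub__(self, other):
        return Sig(self.value - other.value, self.abs_err + other.abs_err)

    def precision(self):
        """Significant digits remaining: -log10 of the relative error."""
        if self.value == 0 or self.abs_err >= abs(self.value):
            return 0.0                       # "no significant digits"
        return -math.log10(self.abs_err / abs(self.value))

a = Sig(1.0000001, 1e-7)     # known to about 7 significant digits
b = Sig(1.0000000, 1e-7)
print((a + b).precision())   # about 7: addition keeps the digits
print((a - b).precision())   # 0.0: cancellation leaves none
```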


> 
> There is then a claim that, I think, indicates this model will be
> problematic for "people who know what they are doing". Certainly it may
> be unfamiliar to some, especially those who do not make much use of
> Mathematica. But I find it implausible that experienced numerical
> analysts will have grave difficulties understanding the ideas behind
> significance arithmetic.

Certainly a clever person can figure out what is going
on in Mathematica, if necessary.  Someone who merely translates
an existing program from Fortran into Mathematica will
occasionally be puzzled. Or worse.

> 
> >One choice of course is not to use Mathematica for numerical calculation.
> >There are other arbitrary precision packages.
> 
> I honestly do not know how many support approximate arithmetic, or how
> heavily they are used in that realm. I would be curious to hear more
> about any that may be in widespread use for numerical computation.

I too would be interested in hearing about usage...
My guess is that no arbitrary precision packages are in widespread
use, though I think there are ones in every computer algebra system
(Maple, Macsyma, Reduce, Axiom, MuPAD, Pari, ...) as well as
ones in Lisp.  There are packages MPFUN, MP, ACRITH, and others
which can be used from Fortran, C, Pascal, Lisp, ...

Arbitrary precision costs time.  People doing scientific
computing often think they are in a hurry.  
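A rough illustration of that cost, using Python's decimal module as a
stand-in for any software arbitrary-precision library (the 1000-digit
setting and the workload are arbitrary choices of mine):

```python
import time
from decimal import Decimal, getcontext

getcontext().prec = 1000          # 1000-digit software floats

def bench(x, n=5000):
    """Time n repeated multiplications of an accumulator by x."""
    t0 = time.perf_counter()
    acc = x
    for _ in range(n):
        acc = acc * x
    return time.perf_counter() - t0

t_float = bench(1 / 3)                     # hardware double
t_dec = bench(Decimal(1) / Decimal(3))     # ~1000-digit operands each step
print(f"float: {t_float:.5f}s  1000-digit Decimal: {t_dec:.5f}s")
```

On any ordinary machine the Decimal run is slower by a wide margin,
which is the point: the flexibility of software precision is paid for
on every single operation.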

> 
> >Another choice is to test, and reset the accuracy of numbers at
> >critical points.
<snip> I think there is no disagreement here; it can be done.
> 
> Several times in the past you have raised issues regarding particulars
> of numerical computation in Mathematica. I would like to pose some
> questions.
> 
> Do you, in your work, encounter specific needs in numerical
> computation that you cannot meet with Mathematica?

I know of no one at Berkeley doing serious numerical computation in
Mathematica or any other computer algebra system. Serious computation
is done in conventional programming languages on conventional
computers and occasionally on parallel computers. Research on
numerical methods done by computer scientists here is largely
on how to make floating-point computations run in parallel, fast.
Computer algebra systems have nothing to contribute here as
far as running numerical computations goes.  They do, in principle,
offer the possibility of generating programs (in C, Fortran,
or some other language) which can then be compiled, etc., to
run fast.  I am quite interested in this prospect for using a CAS.

> 
> Getting back to mention of arbitrary precision packages, are there some
> you use for computations that Mathematica cannot handle?

You are asking whether Mathematica is insufficiently general
for something I compute.  I very rarely need arbitrary
precision floats (in spite of having written Macsyma's
package for this, and having interfaced Lisp to MPFUN).
I expect that Mathematica has sufficient generality for me,
if I can recall how to fiddle with $MaxPrecision and $MinPrecision.

> 
> We are of course in the business of extending useful capabilities of
> Mathematica. Specific examples in response to the above would be of
> interest for such work.

Undoubtedly. They might also present a competitive advantage
to WRI, compared to (say) Maple.
I would hope that in such circumstances your company would
be happy to pay for consultation on such matters. :)
> 
Regards,
Richard
> Daniel Lichtblau
> Wolfram Research

