MathGroup Archive 2000


Re: RE:Working Precision

  • To: mathgroup at
  • Subject: [mg24095] Re: [mg23928] RE:Working Precision
  • From: Richard Fateman <fateman at>
  • Date: Tue, 27 Jun 2000 00:51:52 -0400 (EDT)
  • Organization: University of California, Berkeley
  • References: <8ihv28$> <8in7e4$> <8is6im$> <8iv1ea$>
  • Sender: owner-wri-mathgroup at

Allan Hayes wrote:
> I wonder if Richard Fateman wrote this tongue in cheek, witness
> "When you must store the result into a finite sized memory".
> Replacing inexact reals by rationals before computing often results in
> running out of memory.
> And, of course, there is the question of which fractions to use
> initially -any variation here has consequences for the answer.
> --

No tongue in cheek.  All floating point numbers in a computer ARE
numbers.  After all, they are either  someinteger * 2^n  or
someinteger / 2^n.  While it is possible to use other kinds of storage
for numbers, all I am pointing out is that each and every floating point
number is also a rational number, and that pretty much tells you what
the correct sum, product, etc. of such numbers is.  If you can't store
that exactly because there are not enough bits in that "someinteger",
then you use the closest one.
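That observation is easy to check in any language that exposes its
floats.  A Python sketch of my own (not from the original thread):
every IEEE 754 double is exactly an integer times a power of two, and
storing a value rounds to the nearest representable number, ties to
even.

```python
from fractions import Fraction

# Every IEEE 754 double is exactly someinteger * 2^n (n may be negative).
# For 0.1 the stored value is a rational with a power-of-two denominator:
m, d = (0.1).as_integer_ratio()
assert d == 2**55 and m == 3602879701896397

# That rational is close to, but not equal to, 1/10:
assert Fraction(0.1) == Fraction(3602879701896397, 2**55)
assert Fraction(0.1) != Fraction(1, 10)

# Storing an exact result rounds to the nearest representable double;
# a tie goes to the neighbor whose last significand bit is even:
assert float(2**53 + 1) == float(2**53)      # tie: rounds down to even
assert float(2**53 + 3) == float(2**53 + 4)  # tie: rounds up to even
```

The last two lines are the "round to even" tie-breaking rule in
action: 2^53 + 1 sits exactly halfway between two representable
doubles, and the even neighbor wins.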

> "Philip C Mendelsohn" <mend0070 at> wrote in message
> news:8is6im$gtb at
> > Richard Fateman (fateman at wrote:
> >
> > : Here's an easily defended rule:  Do all arithmetic exactly. When
> > : you must store the result into a finite sized memory location,
> > : round it to the nearest number exactly representable in that
> > : memory location. In case of a tie, round to even.

> >
> > Numeric math is not my expertise, but is your error not limited by
> > the intervals between exactly representable numbers?  And, if performing
> > calculations on numbers that have been stored, is not the propagation
> > (and accumulation) of error still present?

There are several sources of error and several ways that error
propagates.  It is unfortunate that Mathematica uses nonstandard
words in this area.  Any numerical analysis textbook should, in its
first 10 or 20 pages, define relative error and absolute error.  It
might also point out that errors caused by using only an approximation
to a physical situation (e.g. a truncated Taylor series) are different
from errors caused by inaccuracies in measurement.  Mathematica
conflates these, and occasionally adds another source of error (bugs!).

The Mathematica terms Accuracy and Precision as defined by Wolfram
are roughly negative logarithms of absolute and relative error, but
not exactly.
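The rough correspondence can be written down directly.  This is my own
sketch of the relationship, not Wolfram's actual definitions: take
accuracy as -log10 of the absolute error and precision as -log10 of
the relative error, and the two differ by exactly the magnitude of the
number.

```python
from math import log10

def accuracy(x, abs_err):
    # roughly: correct digits to the right of the decimal point
    return -log10(abs_err)

def precision(x, abs_err):
    # roughly: total correct significant digits
    return -log10(abs_err / abs(x))

x, err = 12345.678, 0.0005
# precision - accuracy == log10(|x|): the two notions differ
# only by where the decimal point sits.
gap = precision(x, err) - accuracy(x, err)
assert abs(gap - log10(abs(x))) < 1e-9
```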

> >
> > Pardon my ignorance, but I fail to see how this rule improves anything.

It is simple enough that you can, for example, compute the amount that
has been left off by simple arithmetic.  Assume x>y>0.  Now compute
z=x+y.  By how much does z differ from the true answer of x+y?
Simple: compute z-x-y.  Except, of course, this won't work directly in
Mathematica.  Try it with x=11111111111111111111.0 and
y=0.22222222222222222222.  (x+y)-x gives something that looks like 0.
Apply SetPrecision[%,20] and you'll see that it is rather close to y.
Subtract y from it and you'll find the error.  Mathematica's
significance arithmetic obscures this.
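In plain IEEE double arithmetic the same experiment does work.  A
Python sketch (mine, with the same numbers as above) using the classic
two-term recovery trick, valid whenever |x| >= |y|: the rounding error
of z = x + y is recovered exactly as y - (z - x).

```python
from fractions import Fraction

x = 11111111111111111111.0
y = 0.22222222222222222222

z = x + y          # rounded sum; y is far below half an ulp of x
err = y - (z - x)  # exact error of the rounded sum (requires |x| >= |y|)

# Here z comes out equal to x, so the "lost" part is all of y:
assert z == x and err == y

# Check against exact rational arithmetic: z + err is exactly x + y.
assert Fraction(x) + Fraction(y) == Fraction(z) + Fraction(err)
```

Since every double is a rational, the `Fraction` check at the end
verifies the recovered error against fully exact arithmetic.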

> > The best thing would be to do exact arithmetic all along, and only convert
> > to the required precision at the very end, but I doubt it would be
> > the fastest or most economical approach.

It is of course not possible to compute exactly with square roots,
sines, and the like.

Whether it is better or not to do exact computation is usually not
even the question, since exact computation is often so slow as to be
infeasible!
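Hayes's point about running out of memory is easy to reproduce.  A
Python sketch of mine (the logistic map here is just a stand-in for
any iterated computation): with exact rationals, each step squares the
denominator, so its size in bits doubles every iteration.

```python
from fractions import Fraction

# Iterate x -> 4x(1-x) with exact rational arithmetic.  The
# denominator is squared at every step, so exact results blow
# up exponentially in storage.
x = Fraction(1, 3)
bits = []
for _ in range(15):
    x = 4 * x * (1 - x)
    bits.append(x.denominator.bit_length())

assert bits[-1] > 50_000                # ~50,000 bits after only 15 steps
assert all(b2 >= 2 * b1 - 2 for b1, b2 in zip(bits, bits[1:]))
```

Fifteen exact steps already need tens of thousands of bits per value;
fixed-size floats trade that blowup for the rounding discussed above.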
