
MathGroup Archive 2011


Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120271] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: Richard Fateman <fateman at eecs.berkeley.edu>
  • Date: Sat, 16 Jul 2011 05:40:55 -0400 (EDT)

On 7/14/2011 11:55 PM, Andrzej Kozlowski wrote:

Gee, Andrzej, all I can think of is the childhood playground chant  (I
don't know where you might have been a child, so this may not
bring back memories...)

"I'm rubber, you're glue; everything You say sticks to YOU!"
> You are clearly confusing significant digit arithmetic, which is not what Mathematica uses, with significance arithmetic, which is a first order approximation to interval arithmetic or a distribution based approach.

I view significant digit arithmetic, if by digit you allow binary digit
or "bit", to be a crude version of interval arithmetic where the
endpoints of the interval are not numbers of (essentially) arbitrary
precision, but where the endpoints must be represented by the central
point plus or minus some power of two.   It is an approximation to
interval arithmetic.  I think it is a reasonable idea to restrict the
number of digits carried in the value of the interval endpoints (making
them "cruder" than you might realize) in the interests of efficiency
in time and space. Thus using properly rounded machine floats is
appropriate, at least when the range is suitable and the difference
between upper and lower bounds is large.  Using just one bit makes the
width of the interval grow faster than using more bits...
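The contrast can be sketched in code. Below is a toy illustration (Python; all names are mine, and the round-the-half-width-up-to-a-power-of-two rule is an assumption for illustration, not a description of any particular system): an exact interval product with arbitrary-precision rational endpoints, next to a "crude" product that carries only a midpoint and a power-of-two half-width.

```python
import math
from fractions import Fraction

def mul_interval(a, b):
    """Exact interval product: endpoints are arbitrary-precision rationals."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def mul_crude(x, y):
    """'Crude' product: each value is (midpoint, e), meaning midpoint +/- 2**e.
    The propagated half-width is rounded UP to the next power of two, so the
    interval can only widen relative to the exact product."""
    (mx, ex), (my, ey) = x, y
    mid = mx * my
    # first-order half-width: |mx|*2**ey + |my|*2**ex (second-order term dropped)
    hw = abs(mx) * 2.0**ey + abs(my) * 2.0**ex
    return (mid, math.ceil(math.log2(hw)))

# square the interval [1 - 1/256, 1 + 1/256] both ways
lo, hi = mul_interval((Fraction(255, 256), Fraction(257, 256)),
                      (Fraction(255, 256), Fraction(257, 256)))
mid, e = mul_crude((1.0, -8), (1.0, -8))
print(float(hi - lo) / 2)  # exact half-width: 0.0078125
print(2.0**e)              # crude half-width: 0.0078125 (2**-7)
```

Squaring this particular interval gives the same half-width both ways; in general the crude version can only round the width up, so its intervals grow at least as fast as the exact ones.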

Since you do not understand the context of the paragraphs you have
quoted, nor do you seem to acknowledge the difference between
theoretical analysis of the propagation of round-off error in
well-chosen standard examples, and somehow running interval arithmetic
all the time, it is hard to know how to educate you.  How comforting is
your proud quote, below?  Mathematica's arithmetic is "not ...
completely reliable".   Rasputinov says,
further,  "it is not almost always wrong".  I'm not sure why he says the
significant digit convention is almost always wrong unless he means
almost always too pessimistic.  I'm also not aware of why Rasputinov's
opinion should sway the discussion especially (sorry Oleksandr).


My point remains:
  WRI could have a much more direct notion of number, and of equality, that
would make it easy to implement my choice of arithmetic, or theirs, or yours.
They didn't.  The default is not completely reliable.  People write to
this newsgroup periodically asking "Huh? What's going on? I found a bug!"

As for the experimental and computational notions of (im)precision, I
think there is an issue of using the same words with different meanings.
Similar but unfortunately different. The precision of a floating-point
number F is simply the number of bits in its fraction.  If you impute
some uncertainty to F, you can store that in another piece of data D in
the computer. If you want to assert that F and D together represent a
distribution of a certain kind, you can also compute with that.
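That separation can be sketched directly (Python; the (value, uncertainty) pairing and the quadrature rule are illustrative assumptions, one common first-order convention for uncorrelated errors, not a description of Mathematica's internals):

```python
import math

def mul_with_uncertainty(f1, d1, f2, d2):
    """Multiply two (value, uncertainty) pairs.
    First-order propagation: relative uncertainties of uncorrelated
    factors are combined in quadrature."""
    f = f1 * f2
    rel = math.hypot(d1 / f1, d2 / f2)  # sqrt((d1/f1)**2 + (d2/f2)**2)
    return f, abs(f) * rel

# (3.0 +/- 0.1) * (2.0 +/- 0.05)
f, d = mul_with_uncertainty(3.0, 0.1, 2.0, 0.05)
print(f)            # 6.0
print(round(d, 6))  # 0.25
```

A purely linear first-order rule would instead add the two relative terms, giving a wider (more pessimistic) bound; the point is only that F and D are separate pieces of data that can be propagated by whatever rule you choose.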

RJF

>   Obviously you don't read the posts you reply to and confuse both the posters and the contents of what they post. Here is a quote from Oleksandr Rasputinov that makes this completely clear:
>
>
> Unfortunately so, given that it is severely erroneous: see e.g.
> <http://www.av8n.com/physics/uncertainty.htm>. However, Mathematica's
> approximation of how these uncertainties propagate is first-order, not
> zeroth-order. This does not make it completely reliable, of course, but
> certainly it is not almost always wrong as is the significant digits
> convention. Within the bounds of its own applicability, Mathematica's
> approximation is reasonable, although it would still be a mistake to apply
> it to experimental uncertainty analysis given the much broader scope of
> the latter.
>
> Note the "first-order not zeroth-order". Also, do take a look at Sofroniou and Spaletta and then you may perhaps understand what "order" means and how "significance arithmetic" differs from the "significant digits" convention. Good grief, did you ever learn about the Taylor series? It must have been a long time ago, I take it.
>
>
> Andrzej Kozlowski
>
>
> On 14 Jul 2011, at 19:55, Richard Fateman wrote:
>
>> On 7/14/2011 6:27 AM, Andrzej Kozlowski wrote:
>>> On 14 Jul 2011, at 11:21, Richard Fateman wrote:
>>>
>>>> On 7/13/2011 12:11 AM, Noqsi wrote:
>>>> ..
>>>>
>>>>>> see e.g.
>>>>>> <http://www.av8n.com/physics/uncertainty.htm>.
>>>> ..
>>>> Learning mathematics from a physicist is hazardous.
>>>> Learning computer science from a physicist is hazardous too.
>>>> Numbers in a computer are different from experimental measurements.
>>>>
>>>>
>>>> nevertheless, I like this article.  It says, among other things,
>>>>
>>>> 	The technique of propagating the uncertainty from step to step
>>>> throughout the calculation is a very bad technique. It might sometimes
>>>> work for super-simple "textbook" problems but it is unlikely to work for
>>>> real-world problems.
>>>>
>>> Well, here is a quote from a very well known book on numerical analysis by a mathematician (Henrici, "Elements of Numerical Analysis").
>>>
>>>
>>>> It is plain that, on a given machine and for a given problem, the local
>>>> rounding errors are not, in fact, random variables. If the same problem
>>>> is run on the same machine a number of times, there will result always
>>>> the same local rounding errors, and therefore also the same accumulated
>>>> error. We may, however, adopt a stochastic model of the propagation of
>>>> rounding error, where the local errors are treated as if they were
>>>> random variables. This stochastic model has been applied in the
>>>> literature to a number of different numerical problems and has produced
>>>> results that are in complete agreement with experimentally observed
>>>> results in several important cases.
>>> The book then describes the statistical method of error propagation of which Mathematica's approach can be regarded as a first order approximation (as pointed out by Oleksandr Rasputinov, who should not be confused with the OP of this thread, so:
>> So are we to conclude that Henrici recommends this as a general numerical
>> computational method?
>>
>> I don't see that here.  I think what he is saying is that if you do some
>> mathematics (see below), then you will get results consistent with what
>> you will get if you actually run the experiment on the computer.
>> This is not surprising. It is a result that says that "theoretical"
>> numerical analysis agrees with "computer experiments" in arithmetic.  It
>> doesn't say Henrici recommends running a computation this way.
>>
>> When Henrici says "adopt a stochastic model ..." he doesn't mean to write
>> a program. He means to think about each operation like this (I show it
>> for multiplication of numbers P and Q with errors a and b respectively):
>>
>> P*(1+a) times Q*(1+b)  ==  P*Q*(1+a)*(1+b)*(1+c),  where c is a new
>> "error" bounded by roundoff, e.g. half a unit in the last place.
>>
>> For each operation in the calculation, make up another error letter:
>> a, b, c, d, e, f, g, ...  and assume they are uncorrelated.
>>
>> The fact that this theoretical approach agrees with numerically running
>> "several important cases" is a statement about the correlation of
>> roundoff in those cases, not a statement of the advisability of this as
>> a model for how to write a computer system.
>>
>> By the way, I think that Henrici was an extremely fine theoretical
>> numerical analyst, and a fine writer too.
>>
>>
>>
>>>> Clearly Rasputinov thinks that if they are not equal they should not be
>>>> Equal.  Thus the answer is False.
>>>   is *clearly* False. In fact Oleksandr expressed something closer to the opposite view.)
>> This thread is too long.  I don't know at this point if you are agreeing
>> that it is false or contradicting that "is False" is false.
>>> And if one quote is not enough, here is another, from another text on numerical analysis (Conte and de Boor, "Elementary Numerical Analysis").
>>> It describes 4 approaches to error analysis: interval arithmetic, significant-digit arithmetic, the "statistical approach", and backward error analysis. Here is what it says about the second and the third one:
>> Huh, if we are talking about the second and third one, why does he say
>> third and fourth?
>> Are you using 0-based indexing and deBoor is using 1-based indexing????
>>
>>
>>>> A third approach is significant-digit arithmetic. As pointed out
>>>> earlier, whenever two nearly equal machine numbers are subtracted,
>>>> there is a danger that some significant digits will be lost. In
>>>> significant-digit arithmetic an attempt is made to keep track of digits
>>>> so lost. In one version only the significant digits in any number are
>>>> retained, all others being discarded. At the end of a computation we
>>>> will thus be assured that all digits retained are significant. The main
>>>> objection to this method is that some information is lost whenever
>>>> digits are discarded, and that the results obtained are likely to be
>>>> much too conservative. Experimentation with this technique is still
>>>> going on, although the experience to date is not too promising.
>>>>
>> OK, so deBoor (who is retired and therefore not likely to revise the
>> "experience to date") says this method "is not too promising".
>> This sounds to me like he is not endorsing what Mathematica does.
>>
>>>> A fourth method which gives considerable promise of providing an
>>>> adequate mathematical theory of round-off-error propagation is based
>>>> on a statistical approach. It begins with the assumption that round-off
>>>> errors are independent. This assumption is, of course, not valid,
>>>> because if the same problem is run on the same machine several times,
>>>> the answers will always be the same. We can, however, adopt a
>>>> stochastic model of the propagation of round-off errors in which the
>>>> local errors are treated as if they were random variables. Thus we can
>>>> assume that the local round-off errors are either uniformly or normally
>>>> distributed between their extreme values. Using statistical methods, we
>>>> can then obtain the standard deviation, the variance of distribution,
>>>> and estimates of the accumulated round-off error. The statistical
>>>> approach is considered in some detail by Hamming [1] and Henrici [2].
>>>> The method does involve substantial analysis and additional computer
>>>> time, but in the experiments conducted to date it has obtained error
>>>> estimates which are in remarkable agreement with experimentally
>>>> available evidence.
>> deBoor is essentially quoting Henrici, and this statistical approach is
>> to say that all those error terms I mentioned above, a, b, c, d, e, f, ...
>> can be chosen from some distribution.  (The way I've written it,
>> a, ..., z, ... would essentially be chosen from [-u, u] where u == 2^(-W)
>> and the fraction part of the floating-point number is W bits.)  And you
>> can compute the final expression as
>> ANSWER + <somehorrendousfunctionof>(a, b, c, ...).
>>
>>   What deBoor says is that this (theoretical numerical analysis) "method"
>> promises to provide a theory of round-off error propagation.  He is not
>> saying this is a practical method for scientific computing.  When he uses
>> the word "method" he means a mathematical method for analyzing roundoff.
>> In any case, Mathematica does not do this.  I would further argue that
>> Mathematica makes it hard to carry out the experiments that might be
>> done to demonstrate that this theory applies in any particular sample
>> computation.
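The stochastic model described in the quoted passages can be tried as a small experiment (Python; the chain of multiplications of 1.0 and the uniform error on [-u, u] are illustrative assumptions): treat each operation's rounding error as an independent uniform random variable and compare the spread of the accumulated error with the linear worst-case bound n*u.

```python
import random
import statistics

def product_with_random_roundoff(xs, u, rng):
    """Multiply the xs together, with each multiplication picking up a
    factor (1 + c), c drawn uniformly from [-u, u] (the stochastic model)."""
    acc = xs[0]
    for x in xs[1:]:
        c = rng.uniform(-u, u)
        acc = acc * x * (1 + c)
    return acc

rng = random.Random(0)
u = 2.0**-24           # half-ulp for a 24-bit fraction
xs = [1.0] * 1001      # 1000 multiplications of the exact value 1.0
errors = [product_with_random_roundoff(xs, u, rng) - 1.0
          for _ in range(1000)]

worst_case = 1000 * u  # linear worst-case accumulation, n*u
spread = statistics.stdev(errors)
# the stochastic spread grows like sqrt(n/3)*u, far below the n*u bound
print(spread < worst_case / 10)  # True
```

This is exactly the "remarkable agreement" claim in miniature: the observed spread is an order of magnitude below the worst-case bound, which is a statement about how round-off errors correlate in such runs, not a recipe for building an arithmetic system.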

