MathGroup Archive 2011


Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120238] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
  • Date: Thu, 14 Jul 2011 21:18:41 -0400 (EDT)
  • References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net> <iv9ehj$dct$1@smc.vnet.net> <ivjgfp$2b9$1@smc.vnet.net> <201107140921.FAA15620@smc.vnet.net>

On 14 Jul 2011, at 11:21, Richard Fateman wrote:

> On 7/13/2011 12:11 AM, Noqsi wrote:
> ..
>
>>> see e.g.
>>> <http://www.av8n.com/physics/uncertainty.htm>.
>>
>>
> ..
> Learning mathematics from a physicist is hazardous.
> Learning computer science from a physicist is hazardous too.
> Numbers in a computer are different from experimental measurements.
>
>
> nevertheless, I like this article.  It says, among other things,
>
> 	The technique of propagating the uncertainty from step to step
> throughout the calculation is a very bad technique. It might sometimes
> work for super-simple textbook problems but it is unlikely to work for
> real-world problems.
>

Well, here is a quote from a very well known book on numerical analysis by a mathematician (Henrici, "Elements of Numerical Analysis").


> It is plain that, on a given machine and for a given problem, the local
> rounding errors are not, in fact, random variables. If the same problem
> is run on the same machine a number of times, there will result always the
> same local rounding errors, and therefore also the same accumulated
> error. We may, however, adopt a stochastic model of the propagation of
> rounding error, where the local errors are treated as if they were random
> variables. This stochastic model has been applied in the literature to a
> number of different numerical problems and has produced results that are
> in complete agreement with experimentally observed results in several
> important cases.
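
To make the stochastic model concrete, here is a small sketch (entirely my own, not from Henrici's book): treat each local rounding error in a long summation as an independent uniform random variable of size about half a machine epsilon, and look at the spread of the simulated accumulated error. It grows like Sqrt[n] rather than like the worst-case bound n*eps, which is one reason the statistical estimates tend to track observed errors far better than worst-case bounds do.

    n = 10^4;
    eps = $MachineEpsilon/2;                          (* scale of one local rounding error *)
    accumulated := Total[RandomReal[{-eps, eps}, n]]; (* n independent local errors, crudely summed *)
    samples = Table[accumulated, {1000}];
    {StandardDeviation[samples], eps Sqrt[n/3.], n eps}
    (* the observed spread matches eps*Sqrt[n/3]; the worst-case bound n*eps is far larger *)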


The book then describes the statistical method of error propagation, of which Mathematica's approach can be regarded as a first order approximation (as pointed out by Oleksandr Rasputinov, who should not be confused with the OP of this thread, so the claim that:

> Clearly Rasputinov thinks that if they are not equal they should not be
> Equal.  Thus the answer is False.

 is itself *clearly* False. In fact, Oleksandr expressed something closer to the opposite view.)
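
For anyone who did not follow that exchange: the behavior at issue is, roughly, that Equal compares arbitrary-precision numbers only to within their tracked uncertainty (the exact tolerance rules are not spelled out here). A minimal sketch, with my own numbers:

    SetPrecision[1, 10] == SetPrecision[1 + 10^-15, 10]  (* True: the difference lies far below the 10-digit uncertainty *)
    SetPrecision[1, 10] == SetPrecision[1 + 10^-5, 10]   (* False: the difference is resolvable at 10 digits *)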

And if one quote is not enough, here is another, from another text on numerical analysis (Conte and de Boor, "Elementary Numerical Analysis").
It describes four approaches to error analysis: interval arithmetic, significant-digit arithmetic, the "statistical approach", and backward error analysis. Here is what it says about the second and the third of these:

> A third approach is significant-digit arithmetic. As pointed out earlier, whenever two nearly equal machine numbers are subtracted, there is a danger that some significant digits will be lost. In significant-digit arithmetic an attempt is made to keep track of digits so lost. In one version only the significant digits in any number are retained, all others being discarded. At the end of a computation we will thus be assured that all digits retained are significant. The main objection to this method is that some information is lost whenever digits are discarded, and that the results obtained are likely to be much too conservative. Experimentation with this technique is still going on, although the experience to date is not too promising.
>
> A fourth method which gives considerable promise of providing an adequate mathematical theory of round-off-error propagation is based on a statistical approach. It begins with the assumption that round-off errors are independent. This assumption is, of course, not valid, because if the same problem is run on the same machine several times, the answers will always be the same. We can, however, adopt a stochastic model of the propagation of round-off errors in which the local errors are treated as if they were random variables. Thus we can assume that the local round-off errors are either uniformly or normally distributed between their extreme values. Using statistical methods, we can then obtain the standard deviation, the variance of distribution, and estimates of the accumulated round-off error. The statistical approach is considered in some detail by Hamming [1] and Henrici [2]. The method does involve substantial analysis and additional computer time, but in the experiments conducted to date it has obtained error estimates which are in remarkable agreement with experimentally available evidence.
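
Mathematica's Precision tracking is, in effect, a form of this "keeping track of digits lost". A minimal illustration (my own example, not from the book):

    a = N[Pi, 30];          (* 30 significant digits *)
    b = a + 10^-20;         (* exact shift far below the leading digits *)
    Precision[a - b]        (* roughly 30 - 20 = 10 digits survive the cancellation *)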



The fundamental paper on Mathematica's error propagation is "Precise numerical computation" by Mark Sofroniou and Giulia Spaletta, The Journal of Logic and Algebraic Programming 64 (2005) 113–134. This paper describes Mathematica's "significance arithmetic" as a first order approximation to interval arithmetic. It makes no mention of distributions. Oleksandr Rasputinov, in an earlier post here, interpreted "significance arithmetic" as a first order approximation to the fourth method above. I have not considered this very carefully, but it seems pretty clear that he is right, and that the two "first order" approximations are in fact isomorphic. The first order approach is, of course, justified on grounds of performance. It is perfectly "rigorous" in the same sense as any "first order" approach is (i.e. taking a linear approximation to the Taylor series of a non-linear function). It works fine under certain conditions and will produce nonsense when these conditions do not hold.
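
A quick way to see the first order (linearized Taylor) rule at work (my own example, not from the paper): for y = Exp[x] the relative error of y is |x| times the relative error of x, so the precision of y should drop by Log10[|x|].

    x = SetPrecision[2, 25];                      (* 25 significant digits *)
    y = Exp[x];
    {Precision[x], Precision[y], 25 - Log10[2.]}  (* Precision[y] comes out close to 25 - Log10[2], about 24.7 *)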

The fact that significance arithmetic is "useful" needs no justification other than that it is used successfully by NSolve and Reduce to achieve validated symbolic results by numerical methods which are vastly faster than purely symbolic ones.
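
For concreteness, a trivial example of the kind of call that exercises this machinery (my example, not one taken from Daniel's or Adam's work):

    NSolve[x^5 - x - 1 == 0, x, WorkingPrecision -> 50]
    (* isolates and refines all five roots of the exact polynomial to 50 digits *)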

It is also useful for users, such as myself, who sometimes need fast first order error analysis.
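
What I mean by that is nothing more elaborate than attaching the estimated precision of the inputs and letting the arithmetic propagate it; a small sketch with made-up numbers of my own:

    g = 9.81`3;          (* value known to about 3 significant digits *)
    t = 2.35`4;          (* value known to about 4 significant digits *)
    h = g t^2/2;
    {h, Precision[h]}    (* Precision[h] comes out near 2.9: to first order the relative errors combine as dg/g + 2 dt/t *)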

I have seen lots of posts by Richard on this topic (or perhaps it was the same post lots of times; it's so hard to tell), but I have never understood what his main point is. It seems to me that this is because he himself has not yet decided, although he has been posting on this topic for over 20 years (I think).

Sometimes he seems to be disparaging significance arithmetic itself. When Daniel points out how effective it is in his implementation of numerical Groebner bases, or in Adam Strzebonski's work on Reduce, he either ignores this altogether or claims that Groebner bases, etc. are themselves not "useful".

On other occasions he takes on the role of defender of the interests of the "naive user" (presumably like the OP, who however would be better described as "intentionally naive") who is going to be confused by the "quirky" nature of significance arithmetic (at low precision). In doing so he conveniently ignores the existence of thousands of "naive users" who never become confused (sometimes because they always work with machine precision numbers and only use significance arithmetic unknowingly, e.g. when using Reduce). And moreover, for those who do find a need for some sort of error analysis he offers no alternative, except perhaps to learn backward error analysis. Except, of course, that should they do so they would no longer be "naive" and thus would fall outside Richard's area of concern. And in any case, anyone who needs and understands backward error analysis can use it now, and I can't imagine that even Richard would claim that reducing users' options is a good thing.

Finally, perhaps all that Richard is so upset about is simply Mathematica's habit of defining numbers as "fuzz balls" or "distributions". In other words, if Mathematica used a more usual "definition" of number, and significance arithmetic for "error" propagation were applied by a separate function or turned on by an option, then everything would be fine? If that is all, then it seems to me that Richard has for years been making mountains out of molehills.


Andrzej Kozlowski





