Re: Numerical accuracy/precision - this is a bug or a feature?
- To: mathgroup at smc.vnet.net
- Subject: [mg120279] Re: Numerical accuracy/precision - this is a bug or a feature?
- From: Richard Fateman <fateman at eecs.berkeley.edu>
- Date: Sat, 16 Jul 2011 05:42:21 -0400 (EDT)
On 7/15/2011 11:54 AM, Andrzej Kozlowski wrote:
>
> On 15 Jul 2011, at 16:49, Richard Fateman wrote:
>
>> On 7/14/2011 11:55 PM, Andrzej Kozlowski wrote:
>>
>> Gee, Andrzej, all I can think of is the childhood playground chant
>> (I don't know where you might have been a child, so this may not
>> bring back memories...)
>>
>> "I'm rubber, you're glue; everything you say sticks to YOU!"
>
> Yes, I can also think of a few playground chants that would apply
> nicely, but unfortunately you would not understand them. So let's
> have something English instead, like "Sticks and stones...".
>
> Now, I really have no time or patience to go over all your
> misrepresentations in detail. But let me point out (or remind you
> perhaps) that the already mentioned paper of Sofroniou and Spaletta
> makes the following remark:
>
>> The choice of significance arithmetic as the default in Mathematica
>> has not been universally favorable (see for example [9]) although
>> some of these criticisms relate to early deficiencies in the
>> implementation.
>
> Reference [9] is, of course:
>
> [9] R.J. Fateman, A review of Mathematica, J. Symbolic Comput. 13
> (1992) 545–579.
>
> In fact the "early" version was much more like a "significant digits
> convention", while the current version is quite a lot more
> sophisticated, as described in the article. Somehow you don't seem
> to have noticed this, since it is the "early version" that you always
> seem to be describing. (Significant digits is something that used to
> be taught in primary school, if I recall correctly.)

The fact that

  (x = 1.00000000000000000000; While[x != 0, x = 2*x - x; Print[x]])

terminates, because Equal[x, 0] eventually becomes True, and that
afterwards

  And[x == 1, x == 0, x == -1]

returns True, says it all. (A fuller sketch of what happens appears
below.)

> As for the difference between theoretical analysis of the propagation
> of round-off error and practical scientific computations, I am well
> aware of it, but was it not you who gleefully quoted the statement:
>
> The technique of propagating the uncertainty from step to step
> throughout the calculation is a very bad technique. It might
> sometimes work for super-simple textbook problems but it is unlikely
> to work for real-world problems.

While you claim to understand the difference, you now show you don't.
The technique of replacing x*y by (x*(1+a))*(y*(1+b)) -> z*(1+c) for
suitable a, b, c, so as to analyze the propagation of roundoff
theoretically as a function of a and b, is different from doing the
actual calculations with fuzz-balls.

> So if you agree with this (as you seem to), are you implying that the
> entire chapter 16 of Henrici's book is concerned with something that
> only applies to "super-simple textbook problems"?

I don't have a copy of his book handy. It is true that the theoretical
analysis becomes pretty hairy, e.g. for programs with branches.

> The point is, of course, that you never make it clear what exactly
> you are talking about, probably because you find it convenient for
> rhetorical purposes (90% of these discussions are really just
> rhetoric).
>
> I should add that I myself long ago, and more than once, wrote on
> this forum that I do not think Mathematica's "approximate numbers"
> are a good model for empirical errors, but they are very good tools
> for solving certain purely mathematical problems. Often this can be
> accomplished in a 100% provably correct way (barring bugs, of course).
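For the curious, here is the loop above instrumented to show the fuzz
growing; a minimal sketch, assuming default significance arithmetic
(the Precision/Accuracy calls are added for illustration, and the
printed values will depend on the version):

  x = 1.00000000000000000000;  (* a bignum of roughly 21 digits,
                                  not a machine float *)
  While[x != 0,                (* Unequal consults the error interval *)
    x = 2*x - x;               (* the value stays 1, but each
                                  subtraction widens the interval *)
    Print[{x, Precision[x], Accuracy[x]}]]

The loop exits once the interval around 1 has grown wide enough to
overlap 0, i.e. once x == 0 yields True; by that point the comparisons
with 1, 0, and -1 above all succeed at once.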
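And to spell out the rounding model I mean: the "suitable c" for a
single multiplication can be checked symbolically. A one-line sketch
(nothing here is specific to significance arithmetic; it is just
algebra):

  (* x*y computed with relative errors a and b equals x*y*(1+c)
     exactly when c = a + b + a*b, i.e. roughly a + b for tiny a, b *)
  Expand[(x (1 + a)) (y (1 + b)) - x y (1 + c) /. c -> a + b + a b]
  (* --> 0 *)

Such identities, composed over a whole computation, are what the
theoretical analysis manipulates; no fuzz is carried along at run time.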
> As for things like numerical Groebner basis etc., you keep arguing
> that it probably can be done in some other way (without significance
> arithmetic), and in a certain, rather trivial, sense you are
> undoubtedly right. But the fact is, as Daniel has pointed out, that
> in practice nobody seems to have done it in another way, and not for
> want of trying. So again here we have the distinction between
> "theory" and "practical application", except that this time the shoe
> is on the other foot.

It seems to me that numerical computation of GB has a certain history.
See, e.g.,
http://www.risc.jku.at/publications/download/risc_273/Nr.7_paper-revised.pdf
(and its references) or
http://www.maths.lth.se/vision/publdb/reports/pdf/byrod-josephson-etal-iccv-07.pdf
and several PhD dissertations. Given that most people messing with
floating-point GB find themselves facing stability problems, the
introduction of arbitrary-precision numbers seems like a way out of
some difficulties. Most people concentrate on other ways, though,
because double-floats are so much faster than software floats. It is
nice that Dan has done a software-float implementation and put it in
Mathematica. I am not aware of others who have tried this tactic and
failed. Even you seem to agree that Mathematica's particular kind of
bigfloat arithmetic is not essential to this. So I don't see your
point.

<big snip>

RJF