numeric Groebner bases et al [Was Re: Numerical accuracy/precision - this is a bug or a feature?]

*To*: mathgroup at smc.vnet.net
*Subject*: [mg120295] numeric Groebner bases et al [Was Re: Numerical accuracy/precision - this is a bug or a feature?]
*From*: Daniel Lichtblau <danl at wolfram.com>
*Date*: Sun, 17 Jul 2011 06:03:20 -0400 (EDT)

----- Original Message -----
> From: "Richard Fateman" <fateman at eecs.berkeley.edu>
> To: mathgroup at smc.vnet.net
> Sent: Saturday, July 16, 2011 4:42:21 AM
> Subject: Re: Numerical accuracy/precision - this is a bug or a feature?

> On 7/15/2011 11:54 AM, Andrzej Kozlowski wrote:
> [...]
> (x = 1.00000000000000000000; While[x != 0, x = 2*x - x; Print[x]])
>
> terminates when Equal[x,0] is True.
>
> And[x == 1, x == 0, x == -1]
>
> returns True.
>
> says it all.

As I mentioned in an earlier note, approximate arithmetic cannot satisfy all the nice properties of exact arithmetic. We have taken the view that very inaccurate values will be "Equal" to anything within their range of inaccuracy. Not Greater or Less. We preserve one nice aspect of real arithmetic and give up another. That other, I think, would be more difficult to make correct and maintain.

> [...]
> While you claim to understand the difference, you now show you don't.
> The technique of replacing x*y by (x*(1+a)) * (y*(1+b)) --> z*(1+c) for
> suitable a, b, c, for theoretical analysis of propagation of roundoff
> as a function of a, b, is different from doing the actual
> calculations with fuzz-balls.

That does not make doing an approximate interval arithmetic (uncorrelated errors, uniformly distributed) intrinsically "bad".

> [...]
> > As for things like numerical Groebner basis etc, you keep arguing
> > that it probably can be done in some other way (without significance
> > arithmetic) and in a certain, rather trivial, sense you are undoubtedly
> > right. But the fact is, as Daniel has pointed out, that, in
> > practice, nobody seems to have done it in another way, and not for
> > want of trying. So again here we have the distinction between
> > "theory" and "practical application", except that this time the
> > shoe is on the other foot.
>
> It seems to me that numerical computation of GB has a certain history.
> E.g.
> http://www.risc.jku.at/publications/download/risc_273/Nr.7_paper-revised.pdf
> (see references)
>
> or
> http://www.maths.lth.se/vision/publdb/reports/pdf/byrod-josephson-etal-iccv-07.pdf
>
> and several PhD dissertations.

The theses of Aleksey Kondratyev and Alberto Zanoni are worth a look. Not sure if there are others on the topic. Certainly there are other papers, though; the articles cited in the above probably do not fully capture the field. My own take on this can be found in part at the links below.

http://library.wolfram.com/infocenter/Conferences/7536/
http://library.wolfram.com/infocenter/Conferences/7537/

As noted earlier in the thread, they are based on a functioning implementation of numerical Groebner bases. I am not aware of any other that is production level.

> Given that most people messing with floating-point GB find themselves
> facing stability problems,
> the introduction of arbitrary precision numbers seems like a way out
> of some difficulties.

It's not the precision so much as the error bounds. They allow recognition of zero as total cancellation of significant digits. This is critical to any Groebner basis implementation.

> Most
> people seem to concentrate on other ways though, because double-floats
> are so much faster
> than software floats.

I'm not sure whether double floats and Groebner bases have been made to play nice in a way that is general. I'm also not sure how well they play in terms of speed, because the arithmetic is not the only bottleneck. If these are successfully combined with the newer Faugere methods (F4 and F5), that might be a different matter.

> It is nice that Dan has done some software-float implementation and
> put it in Mathematica. I am not aware of
> others who have tried this tactic and failed. Even you seem to agree
> that Mathematica's particular kind of
> bigfloat arithmetic is not essential to this. So I don't see your
> point. ???

As a small matter, I'm a Daniel that never managed to become a Dan.
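To make the earlier points concrete (both the quoted While loop and the "recognition of zero as total cancellation of significant digits"), here is a toy sketch of significance-style tracking: each number carries an absolute error radius, subtraction adds radii (uncorrelated-error model), and Equal means the error intervals overlap. This is a hypothetical illustration in Python, not Mathematica's actual internal representation; the class `Sig` and its methods are inventions for this sketch.

```python
# Toy model of significance-style arithmetic (NOT Wolfram's code):
# each value carries an absolute error radius; Equal means the two
# values are indistinguishable within their combined error bounds.

class Sig:
    def __init__(self, value, radius):
        self.value, self.radius = value, radius

    def __mul__(self, k):                 # scale by an exact integer
        return Sig(k * self.value, abs(k) * self.radius)

    def __sub__(self, other):             # radii add: uncorrelated errors
        return Sig(self.value - other.value, self.radius + other.radius)

    def equal(self, other):               # fuzzy Equal: intervals overlap
        return abs(self.value - other.value) <= self.radius + other.radius

ZERO = Sig(0.0, 0.0)

# The quoted loop: x = 2*x - x leaves the value at 1 but triples the
# radius each step, so the radius eventually swamps the value and the
# loop terminates with x "Equal" to 0.
x = Sig(1.0, 1e-20)                       # roughly 20 digits of precision
steps = 0
while not x.equal(ZERO):
    x = x * 2 - x
    steps += 1
print(steps)                              # terminates after finitely many steps

# Recognition of zero by total cancellation: two coefficients that agree
# to within their error bounds subtract to something whose interval
# covers 0, which a numeric Groebner routine can then safely drop.
a = Sig(0.3333333333, 1e-9)
b = Sig(0.3333333334, 1e-9)
print((a - b).equal(ZERO))                # True: cannot be told apart from 0
```

Plain double floats carry no such radius: the analogous machine-float loop never terminates, and the tiny residue of `a - b` would survive as a spurious nonzero coefficient. The error bound is what licenses treating total cancellation as an exact zero.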
By way of contrast, the author of the first Mathematica implementation of GB was Dan Grayson, who is also a Daniel in full name. There are also Dans who were always Dans. Me, I'm a Danny, to answer the question "What is it when it's at home?"

Much more important is that significance arithmetic was, and remains, absolutely vital to the Mathematica implementation of numeric Groebner bases. Brief history: around 1994 Kiyoshi Shirayanagi wrote about an implementation of such inside Maple. As I'd been working on a new GB implementation for Mathematica 3, and as his article discussed various tactics that amounted to an emulation of significance tracking, I thought I would try to put this into the new release. The initial version required changing or adding maybe 50 lines of C code to what I had already done. It was fully functioning (after modest bug fixing to carefully check precision loss). It was by no means a toy, at least no more than its exact analog.

Seventeen years later it has been through some overhaul, extension, and improvement. And no doubt it could use much more of that. But I suspect it remains the only viable production implementation to date, and that's not for utter lack of interest by others. (General lack, to be sure, but not utter lack.)

Numeric GB alone are not all that exciting. But they can be used in numeric solving, polynomial irreducibility testing, and probably elsewhere. It's a topic I hope to return to in the not distant future.

Daniel Lichtblau
Wolfram Research

**Follow-Ups**:
**Re: numeric Groebner bases et al [Was Re: Numerical accuracy/precision - this is a bug or a feature?]**
*From:* Andrzej Kozlowski <akoz@mimuw.edu.pl>