       Re: Re: Re: Re: How to get an answer as a Root object?

Andrzej Kozlowski wrote:
> On 20 May 2005, at 22:21, Andrzej Kozlowski wrote:
>
>
>>I am a little surprised that nobody wrote to point out that some
>>gremlins also got into method 3. In fact I like this solution best and
>> it is the only method that seems to me mathematically fully
>>satisfactory. I would dismiss method 2 as basically a hack relying on
>>the internals of the Mathematica implementation of Groebner basis: as
>>far as I know "mathematical" Groebner basis is not "supposed to" work
>>with expressions of this kind.
>>Method 1 is of course classical (discussed for example in 1991 Henri
>>Cohen's  "A Course in Computational Algebraic Number Theory" on page
>>100) but I don't like the fact that you have to (or at least it seems
>>to me you have to) guess the degree of the polynomial used in
>>Recognize.
>
>
> I forgot to add that actually there can be no guarantee that the
> solution found with method 1 is the correct one; it could actually be a
> different algebraic number very close to the correct one. I believe
> that only the elimination method (whether accomplished by Groebner
> basis or Resultant) gives a solution that is guaranteed to be exact and
> does not rely on "accidental" properties of a particular implementation
> of Groebner basis.
>
> Andrzej
> [...]

The variable ordering underlying method 2 is undocumented but hardly
accidental. While the literature on handling (nonpolynomial) algebraics
is, I think, a bit scant, there is some folklore to this, and the
cognoscenti would not regard that approach as a hack. In essence
approach (3) is equivalent to approach (2), with (for purposes of this
problem) a small improvement. In making new variables "by hand" we can
force the ordering. This is advantageous insofar as we can eliminate
numeric algebraics as well as the other ones. I may at some point try to
remedy the obscurity of Mathematica's GroebnerBasis handling of
algebraics.
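
To make the equivalence concrete, here is a toy sketch of the "by hand"
variable approach (the number Sqrt[2] + Sqrt[3] is my own example, not
one from this thread): introduce a symbol for each algebraic, adjoin its
defining polynomial, and have GroebnerBasis eliminate the new symbols.

(* s2 stands for Sqrt[2], s3 for Sqrt[3] *)
GroebnerBasis[{r - s2 - s3, s2^2 - 2, s3^2 - 3}, {r}, {s2, s3}]

The single basis element should be (a constant multiple of)
r^4 - 10 r^2 + 1, the minimal polynomial of Sqrt[2] + Sqrt[3].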

There is a reason that approach 1, via approximate number recognition,
might be regarded as superior in many cases. Simply put, it is easy for
a Groebner basis computation of the sort used in methods (2-3) to become
intractable. The general feeling is that Dixon resultants are better
suited than Groebner bases for this sort of elimination. But they too
can be overwhelmed by the computational complexity.

What is nowadays emerging are methods that "guess" a solution and then
attempt to validate a posteriori. In a sense this is to hybrid
computation what "predictor-corrector" is to purely numeric methods. So
a reasonable question is how one might certify the result of the
approximate number recognition method.

There are a couple of methods I can think of (and I'd be curious to hear
about others). One is to construct a priori bounds on the closeness of
nearby algebraics based on the degree and coefficient sizes of the
minimal polynomials that define them. This is, to my mind, quite
difficult. But if done, certification becomes quite easy once you check
agreement to sufficiently many digits.
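
For what it's worth, the recognition-plus-digit-agreement step can be
sketched as below (package and function names as in version 5; the
degree, 4 here, still has to be guessed, which is exactly the complaint
about method 1 above).

<< NumberTheory`Recognize`
approx = N[Sqrt[2] + Sqrt[3], 40];
p = Recognize[approx, 4, t]
(* crude check, not a certification: residual at higher precision *)
Abs[p /. t -> N[Sqrt[2] + Sqrt[3], 100]]

If p is right, the residual should be zero to roughly the higher working
precision; a wrong guess will typically leave a residual no smaller than
the original 40-digit agreement.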

Another method I use for this sort of thing is to take the result and,
with the input, form a Groebner basis. So one might do

GroebnerBasis[{expr,minpoly}]

If the result is {1} then we found the wrong minpoly. If not, they are
consistent and either there is another root of expr remarkably close to
the one we sought, or else we found the correct one. The first case is
easy to check.
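
A toy example of the check (my own, with a reducible expr so that both
outcomes are visible):

expr = x^4 - 5 x^2 + 6;          (* = (x^2 - 2)(x^2 - 3) *)
GroebnerBasis[{expr, x^2 - 2}]   (* consistent: expect {x^2 - 2} *)
GroebnerBasis[{expr, x^2 - 5}]   (* wrong minpoly: expect {1} *)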

The obvious questions are:

(1) Why is this Groebner basis any different/preferable to that of
methods (2-3)?
(2) Why might it be any more effective than methods (2-3)?

The answer to both is that generally it is computationally more
efficient, because it has an explicit polynomial that can represent or
possibly factor the one determined by the original expression alone.
When one can find a root and putative minimal polynomial by approximate
or other means, then augmenting the original defining equation with that
will generally allow for a faster Groebner basis computation. This is
particularly true when we found the wrong value: if the algebraic system
is inconsistent, Groebner basis computations tend to be faster than
otherwise.

I discussed some ideas along these lines at

http://www.math.ufl.edu/~white/talks.html

A Mathematica notebook (~1.2 Mb) of the slideshow for that talk may be
found at

The problem at hand was to find the boundary curve of the planar map
given by

trigpoly[n_, a_, b_] = Cos[n a] + Cos[n b] + Cos[n (a - b)];

x[a_, b_] = trigpoly[1, a, b];
y[a_, b_] = trigpoly[5, a, b];

To see what this looks like, one can do as below.

pointlist[m_] := Table[2*Pi*{Random[], Random[]}, {2^m}]
points2k = pointlist[11];  (* 2^11 = 2048 random points *)
data[{point__}] := {x[point], y[point]};
datalist2k = Map[data, points2k];
ListPlot[datalist2k];

Finding the boundary curve can be cast in a few ways involving algebraic
systems. Details depend on the trig-to-algebraic encoding and on whether
one looks at a vanishing Jacobian or a Lagrange multiplier optimization.
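
One possible encoding (an assumption on my part, not necessarily the one
used in the paper) is to expand everything into Cos and Sin of a and b,
replace those by new variables subject to the Pythagorean relations, and
adjoin the vanishing Jacobian; the boundary curve then becomes an
elimination problem in the definitions above.

encode[e_] := TrigExpand[e] /.
  {Cos[a] -> ca, Sin[a] -> sa, Cos[b] -> cb, Sin[b] -> sb}
jac = Det[Outer[D, {x[a, b], y[a, b]}, {a, b}]];
sys = {X - encode[x[a, b]], Y - encode[y[a, b]], encode[jac],
       ca^2 + sa^2 - 1, cb^2 + sb^2 - 1};
(* boundary curve: eliminate ca, sa, cb, sb *)
GroebnerBasis[sys, {X, Y}, {ca, sa, cb, sb}]

As noted above, eliminations of this sort blow up quickly; for n=5 this
is exactly the kind of computation that seems to be out of reach.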

For n=3 several formulations of the problem are tractable using
GroebnerBasis, and one played a role in a number theory Ph.D. thesis a
few years ago (not mine). For n=4 Robert Lewis' program Fermat was able
to handle one formulation using a Dixon resultant. I don't think I ever
got GroebnerBasis to handle that case but I'm not certain I exhausted
all possibilities prior to exhausting my patience. At n=5 Fermat ran out
of memory. I doubt it's tractable for Groebner bases, either in
Mathematica or elsewhere.

There is a paper based on this work:

D.L. (2005) Computing curves bounding trigonometric planar maps:
symbolic and hybrid methods. Submitted (LNAI publication for ADG 2004
conference proceedings).

A discussion of a posteriori verification follows figure 10. The
Mathematica notebook, at around 1.7 Mb, is not much larger than that for
the talk, because most of the size of each is in the graphics. Anyone
interested in the computational details behind this sort of work can