Re: Approximate/asymptotic factorization
*To*: mathgroup at smc.vnet.net
*Subject*: [mg73676] Re: Approximate/asymptotic factorization
*From*: Daniel Lichtblau <danl at wolfram.com>
*Date*: Sat, 24 Feb 2007 02:20:55 -0500 (EST)
*References*: <200702210635.BAA17453@smc.vnet.net> <ermdo5$i0d$1@smc.vnet.net> <paul-2CDC4B.22092823022007@news.uwa.edu.au>
Paul Abbott wrote:
> In article <ermdo5$i0d$1 at smc.vnet.net>,
> Daniel Lichtblau <danl at wolfram.com> wrote:
>
>
>>Paul Abbott wrote:
>>
>>>Consider the polynomial H[4,4]
>>>
>>> 256*(23625 + 126000*z^4 - 7200*z^8 + 3840*z^12 + 256*z^16)
>>>
>>>where H[m,n] is a Generalized Hermite Polynomial (Noumi & Yamada 1998),
>>>a rational solution to the Painlevï¿½ equation P_IV. A plot of the roots
>>>of H[4,4] displays an interesting quasi-rectangular structure.
>>>
>>>Then consider the polynomial Q[4,4]
>>>
>>> 1528899609315374375625 + 6795109374734997225000*z^4 -
>>> 560866170613047390000*z^8 + 153399294526645440000*z^12 +
>>> 2734590598399296000*z^16 - 167891551278796800*z^20 +
>>> 2948686820352000*z^24 - 40649991782400*z^28 + 277762867200*z^32 -
>>> 920125440*z^36 + 1048576*z^40
>>>
>>>where Q[m,n] is a Generalized Okamoto Polynomial, another rational
>>>solution to P_IV. A plot of the roots displays an even more interesting
>>>structure, with a quasi-rectangular core and 4 triangular "lobes"
>>>(related, in some way, to the Yablonskii–Vorob'ev Polynomials, Q[n],
>>>which are Rational Solutions of P_II)
>>>
>>>If you overlay plots of the roots of H[4,4] and Q[4,4] what is most
>>>striking is that the "common" roots are very similar. In other words,
>>>H[4,4] is "approximately" a factor of Q[4,4].
>>>
>>>Is there a "standard" way of quantifying this approximate factorization?
>>>
>>>This approximate factorization holds for any m and n and appears to
>>>improve as m and n increase. Perhaps there is a limit in which the
>>>factorization holds asymptotically?
>>>
>>>Cheers,
>>>Paul
>>>
>>>_______________________________________________________________________
>>>Paul Abbott Phone: 61 8 6488 2734
>>>School of Physics, M013 Fax: +61 8 6488 1014
>>>The University of Western Australia (CRICOS Provider No 00126G)
>>>AUSTRALIA http://physics.uwa.edu.au/~paul
>>
>>I'm not sure how to tackle the asymptotics you have in mind. For
>>particular examples this is very much the province of univariate
>>approximate gcd computation. There has been a good deal of work in this
>>area since around 1995, with some work even earlier.
>>
>>A good place to start would be the "4M" article of Corless, Gianni,
>>Trager, and Watt, "The singular value decomposition for polynomial
>>systems", ISSAC 1995. The idea of their approach is to use the SVD of
>>the Sylvester matrix in order to determine the degree of a "good" approx
>>gcd, then derive that gcd also using the SVD.
>
>
> OK, I've got that ref and also got
>
> N. Karmarkar and Y. N. Lakshman "Approximate polynomial greatest
> common divisors and nearest singular polynomials", Proceedings of the
> 1996 international symposium on Symbolic and algebraic computation,
> 35 - 39, 1996
Right. I think theirs is more the remainder sequence approach. I also
looked at it a bit yesterday, but decided 4M was simpler to demo.
>>For your example I show some quick-and-dirty code that, I think, does
>>more or less what they have in mind. For the Sylvester matrix I cribbed
>>and slightly modified code from MathWorld:
>>
>>http://mathworld.wolfram.com/SylvesterMatrix.html
>>
>>SylvesterMatrix[poly1_, poly2_, var_] := Module[
>> {coeffs1, coeffs2, l1, l2},
>> coeffs1 = Reverse[CoefficientList[poly1, var]];
>> coeffs2 = Reverse[CoefficientList[poly2, var]];
>> l1 = Length[coeffs1];
>> l2 = Length[coeffs2];
>> Join[Table[
>> Join[Table[0,{j}], coeffs1,
>> Table[0,{l2-2-j}]], {j,0,l2-2}],
>> Table[Join[Table[0,{j}], coeffs2,
>> Table[0,{l1-2-j}]], {j,0,l1-2}]]
>> ]
>
>
> I note that
>
> Join[Table[
> Join[Table[0,{j}], coeffs1,
> Table[0,{l2-2-j}]], {j,0,l2-2}],
> Table[Join[Table[0,{j}], coeffs2,
> Table[0,{l1-2-j}]], {j,0,l1-2}]]
>
> can be replaced by the more intelligible (to me)
>
> Join[
> NestList[RotateRight, PadRight[coeffs1, l1 + l2 - 2], l2 - 2],
> NestList[RotateRight, PadRight[coeffs2, l1 + l2 - 2], l1 - 2]
> ]
>
> which is a direct implementation of the description at MathWorld.
I also thought the code looked a bit bulky but did not really have time
to consider how best to recode it.
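As a quick check that the two formulations agree, here is a small sketch on a
pair of quadratics, using the SylvesterMatrix definition above:

p = 1 + 2 z + 3 z^2; q = 4 + 5 z + 6 z^2;
{c1, c2} = Reverse[CoefficientList[#, z]] & /@ {p, q};
{l1, l2} = Length /@ {c1, c2};
SylvesterMatrix[p, q, z] === Join[
   NestList[RotateRight, PadRight[c1, l1 + l2 - 2], l2 - 2],
   NestList[RotateRight, PadRight[c2, l1 + l2 - 2], l1 - 2]]
(* True *)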
>
>>h44 = 256*(23625 + 126000*z^4 - 7200*z^8 + 3840*z^12 + 256*z^16);
>>
>>q44 = 1528899609315374375625 + 6795109374734997225000*z^4 -
>> 560866170613047390000*z^8 + 153399294526645440000*z^12 +
>> 2734590598399296000*z^16 - 167891551278796800*z^20 +
>> 2948686820352000*z^24 - 40649991782400*z^28 + 277762867200*z^32 -
>> 920125440*z^36 + 1048576*z^40;
>>
>>I'll normalize so largest coefficient is unity.
>>
>>normPoly[poly_, var_] := Expand[poly /
>> Max[Abs[CoefficientList[poly,var]]]]
>>
>>p1 = normPoly[h44,z];
>>p2 = normPoly[q44,z];
>
>
> Perhaps Decompose will also be useful here, since both p1 and p2 are
> polynomials in z^4 (which explains the observed 4-fold symmetry).
I had missed that. Yes, deflating might well be helpful in terms of
improving stability of the various algorithms.
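For instance, deflation could look something like this (a sketch; deflate is
a name I am introducing here, not a built-in, and it assumes only exponents
divisible by 4 occur in the polynomial):

deflate[poly_, z_, w_] := Module[{c = CoefficientList[poly, z]},
  (* keep every fourth coefficient: exponents 0, 4, 8, ... *)
  c[[1 ;; -1 ;; 4]] . w^Range[0, (Length[c] - 1)/4]]

Then deflate[p1, z, w] has degree 4 in w rather than 16 in z, and
deflate[p2, z, w] has degree 10 rather than 40, which should make the
Sylvester matrix considerably smaller.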
>>I'll use an epsilon of 10^(-3) as tolerance for what singular values to
>>remove.
>>
>>eps = 10^(-3);
>>
>>We count the number of singular values that are effectively zero.
>>
>>nulls = Select[S,Max[Abs[#]]<eps&];
>
>
> What is S and where is sylmat defined? I assume that
>
> sylmat = SylvesterMatrix[p1, p2, z]
>
> S = SingularValueList[sylmat]
Clearly I was in too much of a rush. Yes, sylmat is the Sylvester
matrix, and I used full SVD to get {U,S,V}. Sorry for the omission in
the response. Also newS is a transformation of S to remove small
singular values. Actually it was not needed. I first used it with
default SingularValueDecomposition, then changed the latter to use
Tolerance->eps which automates stripping the small values.
Below is the code in question. Again, it is a sort of agglomeration of
tactics, and some of it is not necessary.
sylmat = SylvesterMatrix[p1, p2, z];
eps = 10^(-3);
{U,S,V} = SingularValueDecomposition[N[sylmat], Tolerance->eps];
newS = S /. aa_/;Abs[aa]<eps :> 0;
nulls = Select[S,Max[Abs[#]]<eps&];
k = Length[sylmat] - Length[nulls]
newmat = Take[newS.Transpose[V], k];
I think newS is the same as S now, since I use Tolerance->eps. So
probably this aspect of the code could be made more concise.
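Packaged as a single function, it might look like the following (just a
repackaging of the code above into a hypothetical approxGCD, same caveats
about Tolerance and the epsilon cutoff):

approxGCD[f_, g_, z_, eps_] := Module[{mat, u, s, v, k, row},
  mat = N[SylvesterMatrix[f, g, z]];
  {u, s, v} = SingularValueDecomposition[mat, Tolerance -> eps];
  (* degree of the approximate gcd: matrix size minus near-null rows *)
  k = Length[mat] - Length[Select[s, Max[Abs[#]] < eps &]];
  row = Last[RowReduce[Take[s . Transpose[v], k]]];
  row . z^Reverse[Range[0, Length[row] - 1]]]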
>>We now get the degree of our approximate gcd as below (for the epsilon
>>we chose, it is 12).
>>
>>k = Length[sylmat] - Length[nulls]
>>
>>To form that gcd we'll row reduce a certain matrix and use the final row.
>>
>>newmat = Take[newS.Transpose[V], k];
>
>
> What is newS?
>
>
>>coeffrow = Last[RowReduce[newmat]];
>>
>>gcdapprox = coeffrow . z^Reverse[Range[0,Length[coeffrow]-1]]
>>
>>So here is what I get.
>>
>>In[150]:= InputForm[gcdapprox]
>>Out[150]//InputForm=
>>6.980499621431014 + 0.000585041745605589*z + 0.0035109293315991083*z^2 +
>> 0.002602929711965681*z^3 + 33.10928622044708*z^4 +
>> 0.0019204205746833373*z^5 + 0.01152459048089943*z^6 +
>> 0.008544087788492293*z^7 - 3.49084586854123*z^8 +
>> 0.00011025782928825875*z^9 + 0.0006624721373616564*z^10 +
>> 0.0004911544753497421*z^11 + z^12
>>
>>I'll note that an epsilon of around 10^(-1) would have given a degree 16
>>approx gcd (not equal to a scalar multiple of p1, though). One can get
>>an idea of how close the various roots are by looking at them as below.
>>
>>s1 = Sort[z /. NSolve[p1==0, z]];
>>s2 = Sort[z /. NSolve[p2==0, z]];
>>sg = Sort[z /. NSolve[gcdapprox==0, z]];
>>
>>Playing tricks with Intersection and a comparison criterion might
>>automate this process.
>>
>>Note that there are other approaches to approximate univariate gcds
>>using QR decomposition, remainder sequences, approximate factoring, and
>>probably some others I am forgetting.
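One way to make that comparison concrete without matching roots by hand (a
sketch; rootDistances is my own throwaway name):

rootDistances[r1_List, r2_List] :=
  Table[Min[Abs[r - r2]], {r, r1}]  (* distance from each root in r1 to its
                                       nearest neighbor in r2 *)

Max[rootDistances[sg, s1]]

gives the worst-case distance from a root of the approximate gcd to the
nearest root of p1.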
>
>
> Thanks for this -- all very useful and interesting.
>
> Cheers,
> Paul
>
Looks like I had several omissions and cross-purpose code hiccups. Sorry
you (PA) had to be the proofreader. I saw in my other MathGroup post
yesterday that I made a similar mistake there as well, in the sense that
what I wrote did not accurately reflect the final version of code I used
(that one was a relatively tame wording issue).
I just hope my bug fixing yesterday was more methodical.
Daniel Lichtblau
Wolfram Research