Re: Multiplying large polynomials
*To*: mathgroup at smc.vnet.net
*Subject*: [mg14765] Re: Multiplying large polynomials
*From*: Daniel Lichtblau <danl>
*Date*: Sat, 14 Nov 1998 03:07:57 -0500
*Organization*: Wolfram Research, Inc.
*References*: <7257a7$88m@smc.vnet.net> <72e21p$of1@smc.vnet.net>
*Sender*: owner-wri-mathgroup at wolfram.com
Allan Hayes wrote:
>
> Several postings have used Series to get the answer. In the timings
> below, the use of replacement after multiplication (as in Thomas Bell's
> original posting) gives the quickest computation, and it appears that
> the difference is because evaluating Series takes a long time (are
> there any memory-usage compensations when using Series?)
>
> poly1 = a h^2 + b;
> poly2 = d h + e;
>
> Carl Woll and Rolf Mertig
> Do[(poly1 + O[h]^3 )(poly2 + O[h]^3) // Normal, {100}]//Timing
>
> {0.71 Second, Null}
>
> Ted Ersek
> Do[Series[poly1*poly2, {h, 0, 2}] // Normal, {100}] // Timing
>
> {0.55 Second, Null}
>
> Thomas Bell (Extended)
> Do[Expand[poly1*poly2] /. h^n_?(# > 2 &) -> 0, {100}] // Timing
>
> {0.16 Second, Null}
>
> **And for larger polynomials**
>
> poly1 =
> Table[Random[Integer, {1, 50}]h^n, {n, 0, 100}];
>
> poly2 =
> Table[Random[Integer, {1, 50}]h^n, {n, 0, 100}];
>
> Timing[Do[(poly1 + O[h]^51)(poly2 + O[h]^51) // Normal, {10}]]
>
> {6.59 Second, Null}
>
> Do[Series[poly1*poly2, {h, 0, 50}] // Normal, {10}] // Timing
>
> {3.51 Second, Null}
>
> Do[Expand[poly1*poly2] /. h^n_?(# > 50 &) -> 0, {10}] // Timing
>
> {0.44 Second, Null}
>
> **Separating out the expansions in the first computation shows that
> most of the time is taken up with the expansions**
>
> Do[(s1 = poly1 + O[h]^51; s2 = poly2 + O[h]^51;), {10}] // Timing
>
> {6.15 Second, Null}
>
> Timing[Do[s1 s2 // Normal, {10}]]
>
> {0.5 Second, Null}
>
> Allan
>
> ---------------------
> Allan Hayes
> Mathematica Training and Consulting
> www.haystack.demon.co.uk
> hay at haystack.demon.co.uk
> Voice: +44 (0)116 271 4198
> Fax: +44 (0)870 164 0565
>
> *******************************************************
>
> ...
This is the sort of problem where the "optimal" algorithm depends a lot
on the flavor of the input. Try the following and you will see the
Series method dominate expansion.
randpoly[n_] :=
  Sum[Sum[Random[Integer, {1, 50}]*y^k, {k, 0, 10}]*x^j, {j, 0, n}]
poly1 = randpoly[40];
poly2 = randpoly[40];
In[56]:= Do[Series[poly1*poly2, {x, 0, 50}] // Normal, {10}] // Timing
Out[56]= {7.48 Second, Null}
In[58]:= Do[Expand[poly1*poly2] /. x^n_?(# > 50 &) -> 0, {10}] // Timing
Interrupt> a
Out[58]= $Aborted (* after several minutes *)
My belief is that expansion will be better when the input is sparse or
univariate; in other cases I'd expect Series to be faster.
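The dense univariate case can be sketched outside Mathematica. The Python
below (purely illustrative, not from the original post) models the two
strategies on coefficient lists, where `coeffs[k]` stands for the
coefficient of h^k: the Expand-then-replace approach computes every
coefficient pair before discarding, while truncated (Series-style)
arithmetic skips pairs whose degrees would exceed the cutoff.

```python
def full_then_truncate(p, q, cutoff):
    """Multiply completely, then discard terms of degree > cutoff
    (a model of the Expand-then-replace approach)."""
    prod = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b          # every coefficient pair is touched
    return prod[:cutoff + 1]

def truncate_while_multiplying(p, q, cutoff):
    """Only compute coefficients up to the cutoff
    (roughly what truncated Series arithmetic does)."""
    prod = [0] * (cutoff + 1)
    for i, a in enumerate(p[:cutoff + 1]):
        for j, b in enumerate(q[:cutoff + 1 - i]):
            prod[i + j] += a * b          # pairs beyond the cutoff are skipped
    return prod

# Dense degree-100 polynomials, cutoff 50, as in the timings above.
p = list(range(1, 102))
q = list(range(2, 103))
assert full_then_truncate(p, q, 50) == truncate_while_multiplying(p, q, 50)
```

On dense input the truncated version does roughly a quarter of the
coefficient multiplications here; on sparse input, full multiplication
touches only the few nonzero pairs, which matches the trade-off described
above.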
The Expand method above is actually not optimized for a nested
multivariate polynomial. We can speed it up by telling Expand not to
worry about anything not containing the variable of interest. So a
better way would be
In[57]:= Do[Expand[poly1*poly2, x] /. x^n_?(# > 50 &) -> 0, {10}] //
Timing
Out[57]= {18.09 Second, Null}
But this is still ~2.5x slower than using Series.
If I first expand poly1 and poly2, the Series method becomes around 3x
slower than before, but the expansion method pretty much hangs. This is
because, once the inputs are already expanded, providing a second
argument to Expand no longer helps at all.
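A rough count (my own back-of-the-envelope model, not from the timings
above, and assuming fully dense polynomials of the shape randpoly[40]
generates) suggests why the pre-expanded multivariate case is so much
worse than the dense univariate case:

```python
# Each factor is dense of degree 40 in x and degree 10 in y.
terms_per_factor = (40 + 1) * (10 + 1)            # 451 monomials per factor

# Expanding the product means forming every pairwise product of terms...
pairwise_products = terms_per_factor ** 2          # 203401 products

# ...which then collapse to the distinct monomials of the result
# (degree 80 in x, degree 20 in y).
distinct_monomials = (2 * 40 + 1) * (2 * 10 + 1)   # 1701 monomials

# Compare the dense univariate timings above: two degree-100 factors
# give only 101 * 101 pairwise products.
univariate_pairs = 101 * 101                       # 10201 products
```

So the pattern-replacement approach must both form ~200k term products
and then test each surviving monomial against the degree pattern, which
is consistent with it hanging where the Series method merely slows down.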
Daniel Lichtblau
Wolfram Research