MathGroup Archive 2001


Re: Why NestWhile?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg27036] Re: [mg26914] Why NestWhile?
  • From: Mark Sofroniou <marks at wolfram.com>
  • Date: Thu, 1 Feb 2001 03:00:24 -0500 (EST)
  • Organization: Wolfram Research Inc
  • References: <94vrar$o8e@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

Andrzej Kozlowski wrote:

> I am afraid I can't answer the question that is the subject of your posting.

There was a bug in NestWhile that caused it to sometimes store intermediate
values (as NestWhileList does). Thanks to Fred Simons for pointing out the
example; the bug has been fixed in our development version.

> But I got intrigued by the idea that one should be able to do much better
> (as far as speed is concerned) by using neither recursion nor Pascal-style
> programming but the unique Mathematica programming style, and I came up with
> the following:
>
> fmath[n_] :=
>   Plus @@ Plus @@ Position[Reverse[IntegerDigits[n, 2]], 1, {1}, 1] - 1

The original example has worst case bit complexity O(n^2) for an n-bit input.
Using IntegerDigits reduces this to O(log(n) M(n)), where M(n) is the
asymptotic cost of multiplying two n-bit numbers (O(n log(n) log(log(n))) for
Fast Fourier Transform based multiplication). The complexity estimate for
IntegerDigits follows from the fact that it uses a divide and conquer
algorithm.
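The divide and conquer scheme can be sketched in Python (an illustrative
reconstruction of the general technique, not Wolfram's actual implementation):
one big division by base^(2^k) splits the number into high and low halves,
each converted recursively, with the low half padded to its fixed width.

```python
def digits(n, base=2):
    """Little-endian digits of n, by divide and conquer: one division
    by base^(2^k) splits the problem in half, giving O(log(n) M(n))
    total cost instead of O(n^2) for repeated single-digit division."""
    if n < base:
        return [n]
    # precompute base, base^2, base^4, ... (squares) not exceeding sqrt(n)
    powers = [base]
    while powers[-1] * powers[-1] <= n:
        powers.append(powers[-1] * powers[-1])
    return _digits(n, powers, len(powers) - 1, pad=False)

def _digits(n, powers, k, pad):
    if k < 0:
        out = [n]                                    # single digit
    else:
        hi, lo = divmod(n, powers[k])
        if hi == 0 and not pad:
            return _digits(lo, powers, k - 1, False) # drop leading zeros
        # low half must keep its full fixed width; high half only if padded
        out = _digits(lo, powers, k - 1, True) + _digits(hi, powers, k - 1, pad)
    if pad:
        out += [0] * (2 ** (k + 1) - len(out))       # pad to full width
    return out
```

A single large division replaces 2^k small ones, which is where the
O(log(n) M(n)) bound comes from.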

However there are faster ways. The function IntegerExponent (new in Version
4.0) implements a divide and conquer algorithm to count trailing zeros in any
base, but only counts sub-parts that contain trailing zeros. Thus
IntegerExponent may not need to examine all of the parts of the input, as
IntegerDigits does.
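One way to realize such a divide and conquer exponent count can be sketched in
Python (a sketch of the technique only, not the actual StartUp code): build the
repeated squares base^(2^k) up to n, then peel off the largest square that
still divides n, accumulating the exponent one binary digit at a time. Parts
of the number carrying no trailing zeros are never divided out.

```python
def trailing_zeros(n, base=2):
    """Exponent e of the largest power base^e dividing n > 0,
    found by binary search over the repeated squares of base."""
    if n <= 0:
        raise ValueError("n must be positive")
    e = 0
    # repeated squares base^(2^k) not exceeding sqrt(n)
    powers = [base]
    while powers[-1] * powers[-1] <= n:
        powers.append(powers[-1] * powers[-1])
    # greedily divide out the largest square that still divides n;
    # this writes down the binary expansion of the exponent e
    for k in range(len(powers) - 1, -1, -1):
        if n % powers[k] == 0:
            n //= powers[k]
            e += 1 << k
    return e
```

When there are no trailing zeros at all, the very first remainder test fails
and nothing further is decomposed.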

Since numbers are represented in binary internally, if the base is a power
of 2 the complexity estimates change and become essentially linear for both
IntegerDigits and IntegerExponent.
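Python also stores large integers in binary, so the power-of-2 case can be
illustrated directly with bit operations (an illustrative aside, not Wolfram's
code): isolating the lowest set bit gives the base-2 count, and dividing that
count gives any base 2^k.

```python
def trailing_binary_zeros(n):
    """Trailing zeros in base 2, read straight off the internal binary
    representation: n & -n isolates the lowest set bit of n > 0."""
    return (n & -n).bit_length() - 1

def trailing_zeros_pow2_base(n, k):
    """Trailing zeros of n in base 2**k: the base-2 exponent e2 satisfies
    (2**k)**e | n  iff  k*e <= e2, so the answer is e2 // k."""
    return trailing_binary_zeros(n) // k
```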

(* Be careful not to include the time to compute the factorial *)

In[1]:= n = 100000!;

(* This number has almost half a million decimal digits *)

In[2]:= N[n]

Out[2]= 2.824229407960348*10^456573

(* The first evaluation loads a StartUp file, since IntegerExponent is
implemented in top-level code. This takes a little bit of time *)

In[3]:= IntegerExponent[n, 2]; // Timing

Out[3]= {0.15 Second, Null}

(* This gives the number of trailing binary zeros and the time taken *)

In[4]:= IntegerExponent[n, 2]//Timing

Out[4]= {0.03 Second, 99994}

In the best case (no trailing zeros) the iterative algorithm fpas will run in
O(n) time, which may indeed be faster than IntegerExponent, but its average
case complexity is much worse.
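For comparison, here is the quoted fpas loop transcribed to Python
(illustrative only): each exact division by 2 touches the whole number, so k
trailing zeros cost roughly O(k n) bit operations, cheap when k is 0 or tiny,
expensive on inputs like 2^15000.

```python
def fpas(n):
    """Pascal-style loop from the thread: divide out the factor 2
    one step at a time and count the steps."""
    res = 0
    while n % 2 == 0:
        n //= 2
        res += 1
    return res
```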

Mark Sofroniou,
Research and Development,
Wolfram Research Inc.


> Compared with fpas it appears quite a lot faster:
>
> fpas[n_] := Block[{m = n, res = 0}, While[EvenQ[m], m = m/2; res = res + 1];
>     res]
>
> In[3]:=
> fmath[5000!] // Timing
>
> Out[3]=
> {0.0833333 Second, 4995}
>
> while
>
> In[4]:=
> fpas[5000!] // Timing
>
> Out[4]=
> {1.58333 Second, 4995}
>
> --
> Andrzej Kozlowski
> Toyama International University
> JAPAN
>
> http://platon.c.u-tokyo.ac.jp/andrzej/
> http://sigma.tuins.ac.jp/
>
> on 01.1.27 1:29 PM, F.H.Simons at tue.nl wrote:
>
> > Finding the number of factors 2 of a positive integer is one of my favourite
> > examples in courses in Mathematica. The recursive solution is very elegant.
> >
> > frec[n_] := If[EvenQ[n], 1+frec[n/2], 0]
> >
> > My general advice is to avoid recursion unless it is very shallow. Here is a
> > straightforward Pascal-style solution without recursion:
> >
> > fpas[n_] := Block[ {m=n, res = 0},
> > While[EvenQ[m],
> > m=m/2; res=res+1];
> > res]
> >
> > fpas[2^15000] // Timing
> >
> > {4.45 Second,15000}
> >
> > Two variables are involved and during this loop 30000 assignments are done.
> > We might expect that iteration will be faster: no variables, no assignments.
> >
> > fnest[n_] := NestWhile[{#[[1]]/2, #[[2]]+1}&,{n, 0}, EvenQ[#[[1]]]&, 1][[2]]
> >
> > fnest[2^15000] // Timing
> >
> > {8.35 Second,15000}
> >
> > But it turns out to be slower! Let us look at the old FixedPoint
> > construction:
> >
> > ffix[n_] := FixedPoint[ {#[[1]]/2, #[[2]]+1}&, {n,
> > 0},SameTest->(OddQ[#2[[1]]]&) ] [[2]]
> >
> > ffix[2^15000] // Timing
> >
> > {5.16 Second,15000}
> >
> > Not quite so fast as the Pascal implementation, but faster than the
> > relatively new NestWhile. Why do we have NestWhile when it is slower (and
> > more complicated) than While?
> >
> > Fred Simons
> > Eindhoven University of Technology
> >
> >

