Re: Speed Up of Calculations on Large Lists

*To*: mathgroup at smc.vnet.net
*Subject*: [mg108961] Re: Speed Up of Calculations on Large Lists
*From*: Bill Rowe <readnews at sbcglobal.net>
*Date*: Thu, 8 Apr 2010 08:02:28 -0400 (EDT)

On 4/7/10 at 7:26 AM, karg.stefan at googlemail.com (sheaven) wrote:

>Ray is absolutely correct (Compile does not speed up the calculation
>based on my function movAverageOwn2FC). I used Compile with a
>function similar to variation #2 from Zach with Partition, where
>Compile resulted in a speed up of c. 50%. Therefore, I thought that
>Compile is always faster. Obviously, this is a mistake :-)

Definitely. Compile does improve execution speed of some code. But in my experience, spending a bit more time understanding the problem and avoiding the use of things like For is more effective at improving execution speed than simply using Compile. And as you've noted, Compile will degrade the execution speed of some code. Compile is definitely something that needs to be used intelligently.

<snip>

>3. Test for equality
>
>In[15]:= maMathematica[data, 30, 250, 10] ==
>maConvolve[data, 30, 250, 10] == maSpan[data, 30, 250, 10] ==
>maDrop[data, 30, 250, 10]
>
>Out[15]= False
>
>This is very strange to me. Equality is only given based on
>Precision of 8:
>
>In[16]:= SetPrecision[maMathematica[data, 30, 250, 10], 8] ==
>SetPrecision[maConvolve[data, 30, 250, 10], 8] ==
>SetPrecision[maSpan[data, 30, 250, 10], 8] ==
>SetPrecision[maDrop[data, 30, 250, 10], 8]
>
>Out[16]= True
>
>Any ideas why this is happening? Numbers of the different functions
>start to differ at the 10th decimal. My understanding was that
>Mathematica was only testing equality up to the precision it knows
>to be correct!?

Per the documentation, Equal treats two machine-precision values as equal when they match except for the last 7 binary digits (roughly 2 decimal digits). The reason you are not getting True for the comparison above is that each method performs a different sequence of operations on the data, and those different sequences lead to differences due to the limits of machine precision.
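To make the point concrete outside Mathematica, here is a minimal Python sketch (hypothetical data, not the poster's functions) of the same effect: summing identical machine-precision numbers in two different orders can perturb the last few bits of the result, so an exact `==` may fail while a tolerance comparable to Equal's (ignoring roughly the last 7 binary digits, i.e. a relative tolerance near 2^-46) still succeeds.

```python
import math

# Hypothetical data for illustration: 30 machine-precision numbers near 100.
data = [100 + math.sin(i) * 0.1 for i in range(30)]

forward = sum(data)             # left-to-right summation
backward = sum(reversed(data))  # right-to-left summation

# Exact equality of the two sums is not guaranteed, because floating-point
# addition is not associative; the rounding differs with the order.
difference = abs(forward - backward)

# A tolerance of about 2^-46 relative (the last ~7 binary digits of a
# 53-bit machine-precision significand) treats the two sums as equal,
# much like Mathematica's Equal does for machine-precision reals.
tolerant_equal = math.isclose(forward, backward, rel_tol=2**-46)
```

The same idea explains the poster's In[16]: lowering everything to 8 digits of precision discards exactly the trailing digits in which the four methods disagree.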
You can gain a bit more insight as follows:

In[27]:= data = 100 + Accumulate[RandomReal[{-1, 1}, {10000}]];

In[28]:= a = maMathematica[data, 30, 250, 10];
b = maConvolve[data, 30, 250, 10];
c = maSpan[data, 30, 250, 10];
d = maDrop[data, 30, 250, 10];

In[29]:= Outer[Equal, {a, b, c, d}, {a, b, c, d}, 1]

Out[29]= {{True, True, True, False}, {True, True, True, False},
{True, True, True, False}, {False, False, False, True}}

which shows the method based on Drop is the one causing your comparison to yield False rather than True.

In[30]:= Outer[Union[Flatten@Chop[#1 - #2]] == {0} &,
{a, b, c, d}, {a, b, c, d}, 1]

Out[30]= {{True, True, True, True}, {True, True, True, True},
{True, True, True, True}, {True, True, True, True}}

which shows each method does return the same result within the limitations of machine-precision computation. Note,

In[34]:= data = 100 + Accumulate[RandomReal[{-1, 1}, {500}]];

In[35]:= a = maMathematica[data, 30, 250, 10];
b = maConvolve[data, 30, 250, 10];
c = maSpan[data, 30, 250, 10];
d = maDrop[data, 30, 250, 10];

In[36]:= Outer[Equal, {a, b, c, d}, {a, b, c, d}, 1]

Out[36]= {{True, True, True, True}, {True, True, True, True},
{True, True, True, True}, {True, True, True, True}}

showing that when the number of operations is reduced, all methods yield True when compared using Equal. So the issue is clearly loss of significance as computations are done with machine-precision values.

One other thought you may want to consider: you tested for execution times only. Although I've not looked at each of the methods in detail, I suspect the memory requirements for each will also differ. When working with large data sets, this might be a more significant factor than execution time.
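The Drop-based discrepancy above can be reproduced in miniature with a Python sketch (assumed window size and data; `ma_direct` and `ma_prefix` are hypothetical names, not the poster's functions). A moving average computed from prefix sums, analogous to Accumulate followed by Drop, subtracts two large running totals and so loses a few low-order bits relative to summing each window directly, yet the two results agree to within machine-precision tolerance.

```python
import math
import random

# Hypothetical random walk near 100, like the data in the thread.
random.seed(42)
data = [100.0]
for _ in range(2000):
    data.append(data[-1] + random.uniform(-1, 1))

w = 30  # assumed moving-average window

# Method 1: sum each window directly.
ma_direct = [sum(data[i:i + w]) / w for i in range(len(data) - w + 1)]

# Method 2: prefix sums (analogous to Accumulate + Drop). Each window sum
# is the difference of two large running totals, which cancels leading
# digits and leaves the low-order bits slightly perturbed.
prefix = [0.0]
for x in data:
    prefix.append(prefix[-1] + x)
ma_prefix = [(prefix[i + w] - prefix[i]) / w
             for i in range(len(data) - w + 1)]

# The two methods agree to within machine-precision tolerance, even where
# exact bit-for-bit equality fails.
all_close = all(math.isclose(a, b, rel_tol=1e-9)
                for a, b in zip(ma_direct, ma_prefix))
max_diff = max(abs(a - b) for a, b in zip(ma_direct, ma_prefix))
```

On memory, the same trade-off appears here: the prefix-sum method allocates an extra list the length of the data, which for large data sets can matter as much as the timing differences.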