MathGroup Archive 2010


Re: Speeding up InterpolatingFunction

  • To: mathgroup at smc.vnet.net
  • Subject: [mg113322] Re: Speeding up InterpolatingFunction
  • From: Ray Koopman <koopman at sfu.ca>
  • Date: Sat, 23 Oct 2010 07:07:35 -0400 (EDT)
  • References: <i9r82q$hqe$1@smc.vnet.net>

On Oct 21, 10:39 pm, Thomas Münch <thomas.mue... at gmail.com> wrote:
> Dear group,
>
> I am looking for a way to speed up the execution of an
> InterpolatingFunction in a particular scenario.
>
> This is what I have:
> - a single Real-valued InterpolatingFunction f, defined on the domain
> between 0 and 300,000 (representing time in ms)
> - a list of roughly 1000 different time-points events= {t1, t2, t3, ....}
>
> What I want:
> - I want to evaluate f in the 600 ms preceding each of the events
> - I then want to average all these 600-ms periods
> In other words, I want the average time-course of f preceding the events
> (in case you are curious: this is to calculate the spike-triggered
> average of neuronal spiking responses)
>
> (* Here is a toy function to try it out: *)
>
> f = Interpolation[{10 Range[0, 30000 - 1], RandomInteger[200,
> 30000]}\[Transpose], InterpolationOrder -> 0];
> events = Sort[RandomSample[Range[300000], 1000]];
>
> (* This code does what I want at a 1-ms resolution, which is enough for
> me: *)
>
> result=Mean[Table[f[t],{t,# - 600,#}]&/@events];
> ListLinePlot[result]
>
> This requires calling f 600,000 times, which takes about 4 seconds on my
> machine. But it scales linearly with the number of events (which could
> approach 10000 in some cases).
>
> It takes about the same time as this loop: Do[f[1],{600000}]
> In other words, there is nothing inherently inefficient in the code.
>
> However, I need to perform this operation very often (with different f
> and different t), so I wondered if the particular layout of the problem
> allows for a speedup of the calculation.
>
> I tried:
> - to construct a single InterpolatingFunction from the 1000 time-shifted
> versions of f, and evaluate this single function 600 times around time 0
> - Parallel evaluation in local kernels
> - Working with a Compiled form of f
>
> Everything led to a slow-down, rather than a speed-up.
>
> Any ideas from the experts?
>
> In[251]:= $Version
> Out[251]= "7.0 for Microsoft Windows (32-bit) (February 18, 2009)"
>
> Thank you!
> Thomas

Convert everything to packed Reals, even though the values may be
integers. Everything will run faster. Times in the following table
are in seconds (on an old machine), for the toy problem you posted.

        InterpolationOrder -> 0    InterpolationOrder -> 1
          Construct  Evaluate        Construct  Evaluate
Integers     .13       6.29             .09       9.78
Reals        .02       4.60             .02       6.06

InterpolationOrder -> 1 gives a much smoother plot.
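In case it helps, here is a minimal sketch of what "convert everything
to packed Reals" could look like for the toy problem above (untested on
your data; Developer`ToPackedArray is the standard way to force a packed
machine-real array, and N[] converts the integer abscissas, ordinates,
and event times to Reals before Interpolation sees them):

```mathematica
(* build the interpolation from a packed array of machine reals *)
data = Developer`ToPackedArray @
   N @ Transpose[{10 Range[0, 30000 - 1], RandomInteger[200, 30000]}];
f = Interpolation[data, InterpolationOrder -> 1];

(* event times as Reals, and real-valued iterator bounds/step,
   so every call to f stays in machine-real arithmetic *)
events = N @ Sort[RandomSample[Range[300000], 1000]];
result = Mean[Table[f[t], {t, # - 600., #, 1.}] & /@ events];
```

The point is simply to keep integers out of the pipeline: an integer
argument forces extra coercion on every one of the ~600,000 calls,
which is where the difference in the table comes from.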

