MathGroup Archive 2004


Re: convolution vs. NMinimize

  • To: mathgroup at smc.vnet.net
  • Subject: [mg52412] Re: convolution vs. NMinimize
  • From: lupos at cheerful.com (robert)
  • Date: Fri, 26 Nov 2004 01:04:47 -0500 (EST)
  • References: <co4fcd$lj6$1@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

hi julia,

especially (but not only) with sample lengths that are powers of 2,
convolution can be done efficiently by FFT (Mathematica's Fourier[]).
The convolution theorem states that to compute the convolution of f and
g, you may take the Fourier transforms F=Fourier[f] and G=Fourier[g]
and get the convolution by calculating InverseFourier[F*G].
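
A quick sketch with made-up data (note that with Mathematica's default
FourierParameters {0, 1} a factor of Sqrt[n] is needed to match the
cyclic convolution that ListConvolve[] computes):

```mathematica
(* cyclic convolution via the convolution theorem *)
f = {1., 2., 3., 4.};
g = {0., 1., 0., 0.};
n = Length[f];

(* the default FourierParameters introduce a 1/Sqrt[n] factor in the
   product of the transforms, so multiply it back in *)
viaFFT = Sqrt[n]*InverseFourier[Fourier[f]*Fourier[g]];

(* the same cyclic convolution computed directly *)
direct = ListConvolve[f, g, {1, 1}];

Chop[viaFFT - direct]  (* a list of zeros *)
```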

for a sample length n:

Fourier[] calculates in n*Log[n] time (because of the FFT)
ListConvolve[] calculates in n*n time (if implemented directly)

the difference is enormous for large n.


Since Fourier assumes periodic data, you will have to pad your data and
your convolution kernel with zeros if you need the same behavior as
ListConvolve[].
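
For example (a minimal sketch; the lists here are made up): padding
both lists with zeros to a common length of at least
Length[f]+Length[g]-1 before transforming avoids the wraparound:

```mathematica
f = {1., 2., 3.};               (* data *)
g = {1., 1.};                   (* kernel *)
n = Length[f] + Length[g] - 1;  (* enough room to avoid wraparound *)

fp = PadRight[f, n];
gp = PadRight[g, n];

(* cyclic convolution of the padded lists equals the
   linear (no-wraparound) convolution of f and g *)
Sqrt[n]*InverseFourier[Fourier[fp]*Fourier[gp]] // Chop
(* {1., 3., 5., 3.} *)
```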


It could be that ListConvolve[] already uses FFT methods internally; in
that case the statement that ListConvolve[] calculates in n*n time
would be false, and you would not gain any speedup.
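
One more thing that might help in your concrete case (a sketch only,
reusing the names from your posting, and assuming inp and time have the
same length): since inp does not change during the optimization,
Fourier[inp] can be computed once outside NMinimize, so each objective
evaluation costs only one forward and one inverse transform. The
_?NumericQ patterns also keep NMinimize from trying to evaluate the
function symbolically, which can itself be a big speedup:

```mathematica
n = Length[inp];
Finp = Fourier[inp];  (* computed once, reused in every evaluation *)

pred[Pe_?NumericQ, tau_?NumericQ] := Module[{model, pred1},
    model = (Pe*tau/(4*Pi*t^3))^0.5*Exp[-Pe/(4*t/tau)*(1 - t/tau)^2];
    pred1 = Map[model /. {t -> #} &, time];
    Chop[Sqrt[n]*InverseFourier[Finp*Fourier[pred1]]]]
```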

regards robert

db at ict.fhg.de (julia) wrote in message news:<co4fcd$lj6$1 at smc.vnet.net>...
> Hello,
> 
> I've had some problems with the nonlinear fit of my measured data
> before. Now I have worked it out: I have to use a numerical global
> optimization. The fit with NMinimize leads to very good results.
> The problem now is that the fit needs about 6 hours of computation
> time. I'm sure this could be faster, but I don't have an idea how...
> 
> I've generated a sum of squares from the measured data and the model.
> The model consists of the actual model and a numerical convolution of
> the model with a measured input signal. The convolution should be the
> time-consuming step. I don't know what Mathematica is doing exactly
> (e.g., which steps are calculated symbolically and which numerically).
> The optimization should be fast if the model with the actual
> parameters and the convolution were evaluated numerically.
> I've attached the code for the optimization.
> 
> In[11]:=
> pred[Pe_,tau_]:=Module[{model,pred1,falt},
>     model=(Pe*tau/(4*&#960;*t^3))^0.5*Exp[-Pe/(4*t/tau)*(1-t/tau)^2];
>     pred1=Map[model/.{t->#}&,time];falt=ListConvolve[inp,pred1,1];falt
>     ]
> 
> 
> In[15]:=
> soln=NMinimize[
>     Plus@@Table[(pred[Pe,tau][[i]]-respconvdata[[i,2]])^2,{i,Length[time]}],
>     {{tau,15,20},{Pe,95,110}},
>     MaxIterations->50,Method->"DifferentialEvolution"]//Timing
> 
> 
> "inp" is the input signal
> "pred" is the predicted Convolution product
> "respconvdata" is the measured curve
> 
> Does anybody have an idea?
> (e.g., how to apply the convolution in a different way...)
> 
> Thanks,
> 
> julia

