Re: Re: Is it possible to make NIntegrate faster?
- To: mathgroup at smc.vnet.net
- Subject: [mg105280] Re: [mg105230] Re: [mg105163] Is it possible to make NIntegrate
- From: DrMajorBob <btreat1 at austin.rr.com>
- Date: Wed, 25 Nov 2009 23:03:00 -0500 (EST)
- References: <200911231150.GAA21785@smc.vnet.net>
- Reply-to: drmajorbob at yahoo.com
Oh well. I've seen cases where separating the branches WAS worthwhile, but
it's not surprising to find cases where it isn't. The dissected version
might be more readable in some situations, too.

Bobby

On Wed, 25 Nov 2009 01:31:29 -0600, Leo Alekseyev <dnquark at gmail.com> wrote:

> Apologies for missing some definitions in the code; they are
>
> eps = 3.*^-16;
> xvec = Table[x, {x, 0.5, 1, 0.5/10}]  (* not linspace *)
>
> Thanks DrMajorBob for the suggestion, but implementing the integrands
> as you said (creating branches with a Set symbolic result) and thus
> removing every If[] statement inside the body of the function results
> in absolutely no measurable performance gain... This might actually
> be a good thing -- it means that Mathematica is working at the
> optimal level without the need for extensive "manual labor" in terms
> of carefully defining the function.
>
> --Leo
>
> On Tue, Nov 24, 2009 at 1:42 AM, DrMajorBob <btreat1 at austin.rr.com> wrote:
>
>> The complete version leaves "eps" and "Linspace" undefined, so I still
>> can't test it much. Perhaps "eps" should be a parameter in
>> ZDipoleFieldTotalIntegrand2?
>>
>> I'd also define branches of the function separately, remove N from the
>> body of it, and Set a symbolic result for each branch (perhaps after
>> Simplify). That could greatly reduce redundancy in applying the same
>> logic on every call. In the first set of brackets defining the function,
>> I'd place the variables that determine a logical branch, so that there's
>> no logic on the right-hand side.
>>
>> For instance, for rsc == True, x == y == 0, z < 0 (with r taking the
>> place of "eps"):
>>
>> ZDipoleFieldTotalIntegrand2[True, 0 | 0., 0 | 0., z_?Negative, r_] =
>>  Function[{k, h, k0, e1, ex2, e2perp},
>>   Evaluate@Simplify@Module[{qz1, qz2, rr, tt},
>>      qz1 = Sqrt[e1 k0^2 - k^2];
>>      qz2 = Sqrt[ex2 (k0^2 - k^2/e2perp)];
>>      rr = -((-ex2 qz1 + e1 qz2)/(ex2 qz1 + e1 qz2));
>>      tt = (2 ex2 qz1)/(ex2 qz1 + e1 qz2);
>>      (I/(2 Pi)*
>>        tt (k^2*(k*BesselJ[0, k r] {0, 0, 1/e2perp}) Exp[-I qz2 z] Exp[
>>            I qz1 h]/(2 qz1)))]
>>   ]
>>
>> I eliminated some extraneous variables and algebraic terms in the Module,
>> but it wouldn't matter if you didn't bother, since the logic and
>> substitutions only execute once... rather than every time the case is
>> encountered.
>>
>> N can be applied in the function that calls this one, but NIntegrate will
>> pass Real variables anyway, so there's probably no need.
>>
>> Along these lines, I'm betting you could reach timings a lot closer to
>> those of the alternative system.
>>
>> Bobby
>>
>> On Mon, 23 Nov 2009 23:18:54 -0600, Leo Alekseyev <dnquark at gmail.com> wrote:
>>
>>> Good point, DrMajorBob; I seem to have underestimated the telepathic
>>> powers of Mathgroup. As you will see below, the problem with my
>>> NIntegrate was that Mathematica tried to do SymbolicProcessing where
>>> it didn't need to.
>>>
>>> Andrew Moylan writes:
>>>
>>>> Your integrand contains a Bessel function over an infinite range.
>>>> Specialized methods exist for numerical integrals of that sort, and
>>>> NIntegrate automatically selects one. However, since your integral
>>>> decays very fast, the oscillatory nature of the integrand is not very
>>>> important. Therefore the default methods are quite adequate (and
>>>> faster). You can choose to use only default methods with this option:
>>>>
>>>>   Method -> {Automatic, SymbolicProcessing -> 0}
>>>>
>>>> Of course, if for some other parameters your integral doesn't decay as
>>>> fast and the rapidly oscillatory nature becomes important, then you may
>>>> still wish to use the specialized method for Bessel integrands.
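For illustration, here is Moylan's option on a toy integrand with the same
character as Leo's (a BesselJ factor damped by an exponential). The integrand
itself is made up for this sketch; only the option syntax comes from the thread.

  (* default behaviour: NIntegrate notices the oscillatory Bessel factor over an
     infinite range and may pick a specialized rule after symbolic preprocessing *)
  NIntegrate[BesselJ[0, 10 x] Exp[-5 x], {x, 0, Infinity}]

  (* skip the symbolic preprocessing and use the default adaptive strategy *)
  NIntegrate[BesselJ[0, 10 x] Exp[-5 x], {x, 0, Infinity},
   Method -> {Automatic, SymbolicProcessing -> 0}]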
>>>
>>> Using this suggestion, and playing with AccuracyGoal settings, I was
>>> able to get a factor of 30-40 performance improvement. This is still
>>> about 5 times slower compared to an alternative system (but fast
>>> enough so that I no longer feel compelled to rewrite my Mathematica
>>> code). I will include the integrands below, and if off the top of
>>> your head you can suggest any other optimizations, please do.
>>> Personally, I'm out of ideas -- everything is wrapped in N[] and no
>>> SetDelayed is used anywhere.
>>>
>>> Here is the short version, to illustrate the sort of integral I'm
>>> working with:
>>>
>>> ZDipoleFieldTIntegrandRSC[xi_][x_, y_, z_, h_, k0_, e1_, e2par_, e2perp_] :=
>>>  Block[{r = Sqrt[x^2 + y^2], m1 = 1, m2 = 1, qz1, qz2, rr, tt,
>>>    ex1 = e1, ex2 = e2par, nintOpts},
>>>   qz1 = Sqrt[e1 m1 k0^2 - (xi/r)^2];
>>>   qz2 = Sqrt[e2par (m2 - (xi/r)^2/e2perp)];
>>>   tt = (2 ex2 qz1)/(ex2 qz1 + ex1 qz2);
>>>   I/(2 Pi)*
>>>    tt ((xi/r)^2*((xi/r)*BesselJ[0, xi] {0, 0, 1/e2perp} +
>>>        qz2 {x/r/ex2, y/r/ex2, 0}*I*BesselJ[1, xi]) Exp[-I qz2 z] Exp[
>>>        I qz1 h]/(2 r qz1))]
>>>
>>> Module[{xi, nintOpts = {MaxRecursion -> 400, AccuracyGoal -> 5}},
>>>  With[{z = -0.2, x = 0.1, y = 0, k0 = 1, h = 0.5, e1 = 1, e2par = 1,
>>>    e2perp = 1},
>>>   NIntegrate[
>>>    ZDipoleFieldTIntegrandRSC[xi][x, y, z, h, k0, e1, e2par, e2perp],
>>>    {xi, N[Sqrt[x^2 + y^2] Sqrt[e1] k0], Infinity},
>>>    Evaluate[Sequence @@ nintOpts]]]
>>>  ]
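Folding Moylan's option into nintOpts in the driver above would look like the
following; the call is otherwise the same as Leo's, so treat it as a sketch
rather than tested code.

  (* same driver as above, with the SymbolicProcessing suboption added to the option list *)
  Module[{xi, nintOpts = {MaxRecursion -> 400, AccuracyGoal -> 5,
      Method -> {Automatic, SymbolicProcessing -> 0}}},
   With[{z = -0.2, x = 0.1, y = 0, k0 = 1, h = 0.5, e1 = 1, e2par = 1,
     e2perp = 1},
    NIntegrate[
     ZDipoleFieldTIntegrandRSC[xi][x, y, z, h, k0, e1, e2par, e2perp],
     {xi, N[Sqrt[x^2 + y^2] Sqrt[e1] k0], Infinity},
     Evaluate[Sequence @@ nintOpts]]]]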
>>>
>>> And here's the complete version (it involves sums of a couple of terms
>>> like the one above and a condition that changes the integrand based on
>>> whether or not z > 0):
>>>
>>> ZDipoleFieldTotalIntegrand2[kkk_, rsc_][x_, y_, z_, h_, k0_, e1_,
>>>    e2par_, e2perp_] :=
>>>  Module[{r, intlim = Infinity, sgn, m1 = 1, m2 = 1, qz1, qz2,
>>>    ex1 = e1, ex2 = e2par, nintOpts, qz1fn, qz2fn, rr, tt, k, corr},
>>>   r = Sqrt[x^2 + y^2]; If[r == 0, r = eps];
>>>   sgn = If[z < h, 1., -1.];
>>>   If[rsc == False, k = kkk; corr = 1., k = kkk/r; corr = 1./r];
>>>   qz1 = Sqrt[e1 m1 k0^2 - k^2] // N;
>>>   qz2 = Sqrt[e2par (m2 k0^2 - k^2/e2perp)] // N;
>>>   rr = -((-ex2 qz1 + ex1 qz2)/(ex2 qz1 + ex1 qz2)) // N;
>>>   tt = (2 ex2 qz1)/(ex2 qz1 + ex1 qz2) // N;
>>>   corr*If[z >= 0,
>>>     N[I/(2 Pi) 1/e1 (k^2*(k*BesselJ[0, k r] {0, 0, 1} +
>>>           sgn*qz1 {x/r, y/r, 0}*I*BesselJ[1, k r]) Exp[
>>>          I qz1 Abs[z - h]]/(2 qz1))] +
>>>      N[I/(2 Pi e1)*
>>>        rr (k^2*(k*BesselJ[0, k r] {0, 0, 1} -
>>>            qz1 {x/r, y/r, 0}*I*BesselJ[1, k r]) Exp[
>>>           I qz1 (z + h)]/(2 qz1))],
>>>     N[I/(2 Pi)*
>>>       tt (k^2*(k*BesselJ[0, k r] {0, 0, 1/e2perp} +
>>>           qz2 {x/r/ex2, y/r/ex2, 0}*I*
>>>            BesselJ[1, k r]) Exp[-I qz2 z] Exp[I qz1 h]/(2 qz1))]]]
>>>
>>> ZDipoleFieldTotal2[x_?NumericQ, y_?NumericQ, z_?NumericQ, h_?NumericQ,
>>>   k0_?NumericQ, e1_?NumericQ, e2par_?NumericQ, e2perp_?NumericQ,
>>>   opts___?OptionQ] :=
>>>  Module[{r = Sqrt[x^2 + y^2], intlim = Infinity,
>>>    sgn = If[z < h, 1, -1], k, nintOpts = Flatten[{opts}]},
>>>   NIntegrate[
>>>     ZDipoleFieldTotalIntegrand2[k, False][x, y, z, h, k0, e1, e2par,
>>>      e2perp], {k, 0, Sqrt[e1] k0}, Evaluate[Sequence @@ nintOpts]] +
>>>    NIntegrate[
>>>     ZDipoleFieldTotalIntegrand2[k, True][x, y, z, h, k0, e1, e2par,
>>>      e2perp], {k, N[r Sqrt[e1] k0], Infinity},
>>>     Evaluate[Sequence @@ nintOpts]]]
>>>
>>> With[{z = -0.2, x = 0.5, xvec = Linspace[.5, 1, 11],
>>>   y = 0, k0 = 1, h = 0.5, e1 = 1, e2par = 1, e2perp = 1},
>>>  ZDipoleFieldTotal2[x, y, z, h, k0, e1, e2par, e2perp]
>>>  ]
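To time the complete version over the grid Leo describes, something like the
following sketch could be used. It assumes eps and xvec are defined as in
Leo's note at the top of this message, and it simply passes the option list
discussed above through ZDipoleFieldTotal2's opts argument.

  (* map the field evaluation over the x grid and time the whole sweep *)
  With[{z = -0.2, y = 0, k0 = 1, h = 0.5, e1 = 1, e2par = 1, e2perp = 1},
   AbsoluteTiming[
    ZDipoleFieldTotal2[#, y, z, h, k0, e1, e2par, e2perp,
       MaxRecursion -> 400, AccuracyGoal -> 5,
       Method -> {Automatic, SymbolicProcessing -> 0}] & /@ xvec]]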
>>>
>>> On Mon, Nov 23, 2009 at 10:19 PM, DrMajorBob <btreat1 at austin.rr.com> wrote:
>>>>
>>>> You told us everything but what we need to know: the integrand.
>>>>
>>>> It may be defined with SetDelayed, where Set could be much faster. There
>>>> may be other issues.
>>>>
>>>> But in a vacuum... who knows?
>>>>
>>>> Bobby
>>>>
>>>> On Mon, 23 Nov 2009 05:50:40 -0600, Leo Alekseyev <dnquark at gmail.com> wrote:
>>>>
>>>>> Dear Mathgroup,
>>>>>
>>>>> Recently I have been using NIntegrate fairly extensively. I am
>>>>> dealing with an oscillatory integral that has a singularity.
>>>>> NIntegrate is able to treat it reasonably well -- the only default I
>>>>> had to change was increasing MaxRecursion. However, it is slow. 10
>>>>> points of my integrand take about 40 seconds to evaluate. After I
>>>>> ported my code to another system, this same integral took about a
>>>>> second using the Gauss-Kronrod method (quadgk in the other system).
>>>>> Furthermore, by increasing the absolute and relative tolerance
>>>>> values, I could improve the speed without losing too much precision,
>>>>> so currently the integrals evaluate in 0.4 seconds.
>>>>>
>>>>> I have been playing with various NIntegrate parameters to try to
>>>>> improve the speed, to no effect. My integrands are straightforward
>>>>> (although long) algebraic expressions involving a few Bessel
>>>>> functions and exponentials, wrapped inside a Module; all
>>>>> subexpressions use N[] so that nothing should be symbolic... Ideally
>>>>> I hoped to find some sort of a speed/accuracy tradeoff, but that
>>>>> hasn't happened.
>>>>>
>>>>> I read the numerical integration tutorial in the docs, but am finding
>>>>> it hard to figure out how to improve the efficiency of my integration.
>>>>> I would expect Mathematica to get to at least within an order of
>>>>> magnitude of the other system using the same integration strategy.
>>>>> The current performance isn't satisfactory -- but neither is the
>>>>> solution of porting perfectly good Mathematica code to the other
>>>>> system...
>>>>>
>>>>> I would much appreciate any suggestions.
>>>>>
>>>>> Thanks,
>>>>> --Leo
>>>>
>>>> --
>>>> DrMajorBob at yahoo.com
>>
>> --
>> DrMajorBob at yahoo.com
>

--
DrMajorBob at yahoo.com
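A footnote on DrMajorBob's Set-versus-SetDelayed remark above: the difference
tends to matter when the right-hand side does symbolic work that could be done
once. A toy sketch (f below is hypothetical, not Leo's integrand; s is just a
dummy symbol):

  (* SetDelayed: the symbolic Integrate is redone on every call *)
  fSlow[a_?Positive] := Integrate[x^2 Exp[-a x], {x, 0, Infinity}]

  (* Set with a pattern on the left: the symbolic work happens once, at
     definition time, and each call is plain substitution into the stored result *)
  fFast[a_] = Integrate[x^2 Exp[-s x], {x, 0, Infinity}, Assumptions -> s > 0] /. s -> a;

  (* fFast now stores 2/a^3; compare fSlow[2.] // AbsoluteTiming with
     fFast[2.] // AbsoluteTiming *)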
- References:
  - Is it possible to make NIntegrate faster?
    - From: Leo Alekseyev <dnquark@gmail.com>
  - Is it possible to make NIntegrate faster?