Re: Why is Mathematica incredibly slow?
- To: mathgroup at smc.vnet.net
- Subject: [mg38873] Re: [mg38806] Why is Mathematica incredibly slow?
- From: Selwyn Hollis <selwynh at earthlink.net>
- Date: Thu, 16 Jan 2003 03:21:57 -0500 (EST)
- Sender: owner-wri-mathgroup at wolfram.com
Johannes,

I'm surprised no one answered already... Here are my initial thoughts:

I would not be so quick to blame FindRoot. Your gradient involves both PDF and CDF, which are contained in one of the standard packages, not implemented in the kernel. I suspect that is where the slowness arises. "Ordinary Mathematica code can be called from within compiled code", but I don't think such code is actually compiled. (??)

Perhaps creating a compiled, specialized version of PDF/CDF would bring Mathematica up to speed.
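For instance, something along these lines might do it. This is only a rough, untested sketch; millsC and LogLikGradC are just placeholder names for a compiled stand-in for the ratio PDF[d, #]/CDF[d, #] and for your gradient rewritten to use it:

(* Compiled ratio pdf/cdf of the standard normal, applied to a whole
   vector at once; the cdf is written with Erf instead of the
   package-level CDF. *)
millsC = Compile[{{z, _Real, 1}},
    (Exp[-z^2/2]/Sqrt[2. Pi]) / (0.5 (1. + Erf[z/Sqrt[2.]]))];

(* The gradient with the package-level PDF/CDF calls replaced. *)
LogLikGradC[y_, x_, b_] :=
    With[{s = Map[If[# == 1, 1., -1.] &, y]},
      Tr /@ Transpose[s millsC[s x.b] x]]

I don't know offhand whether Erf itself compiles to machine instructions in your version; even if it doesn't, it gets evaluated in a single vectorized call per gradient evaluation, which should remove most of the package overhead. The same substitution ought to work inside your compiled Jacobian.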
-----
Selwyn Hollis
http://www.appliedsymbols.com

On Tuesday, January 14, 2003, at 06:10 AM, Johannes Ludsteck wrote:

> Dear MathGroup members,
> I want to find a maximum likelihood estimator using FindRoot.
> Since the problem seems not to be related to the specifics of the
> estimation procedure, I report it here.
> I know that the maximand is globally concave. Therefore, the
> maximum can be found by searching for the root of the gradient of
> the maximand.
>
> The problem is that Mathematica is slower by a factor of 20-100
> than other comparable packages, even though the maximand is
> compiled and I provide the gradient. An even more amazing finding
> is that providing the (compiled) Jacobian makes the algorithm even
> slower than using the default method or QuasiNewton.
>
> By the way: searching for the maximum of the maximand with
> FindMinimum is much slower, even if I provide the gradient
> function. Find my code below if you are interested in the
> details or want to play around with it.
>
> My questions:
> [1] Why are FindMinimum and FindRoot so slow? If I compare the
> timings of other number-crunching functions, e.g. Inverse, with
> the corresponding ones in Gauss and Stata, I find only small
> differences.
>
> [2] Are there any clear inefficiencies in my code below?
>
> Here is the gradient function:
>
> LogLikGrad[y_, x_, b_] :=
>   With[{s = Map[If[# == 1, 1, -1] &, y],
>       d = NormalDistribution[0., 1.]},
>     Tr /@ Transpose[s (PDF[d, #]/CDF[d, #]) x &[s x . b]]]
>
> I do not include the compiled version of LogLikGrad, since it does
> not speed things up considerably and is more complex.
>
> And here is the estimation function:
>
> Probit[y_, x_, bStart_] := With[
>     {b0 = If[# == 0.0, {-0.1, 0.1}, {0.9 #, 1.1 #}] & /@ bStart},
>     #1 /. FindRoot[LogLikGrad[y, x, #1],
>           ##2, Compiled -> False] &[
>         #, Sequence @@ Transpose[{#, b0}]] &[
>       Table[Unique[], {Length[b0]}]]]
>
> Probit[y_, x_] := Probit[y, x, Table[0.0, {Dimensions[x][[2]]}]]
>
> In case you want to play around with the stuff, here are the data
> generation commands and a call to Probit:
>
> << Statistics`ContinuousDistributions`;
> x = Table[2 (Random[] - 0.5), {10000}, {5}];
> b = {0.1, 0.2, 0.3, 0.4, 0.5};
> y = Map[If[# > 0, 1, 0] &, x.b +
>       Table[Random[NormalDistribution[0, 1]], {Length[x]}]];
>
> Probit[y, x]
>
> In case you are interested in having a look at the Jacobian,
> here it is:
>
> pdf = Function[{x}, Exp[-(x^2/2)]/Sqrt[2 Pi]];
> cdf = Function[{x}, 0.5 (1. + Erf[x/Sqrt[2]])];
> SetAttributes[pdf, Listable];
> SetAttributes[cdf, Listable];
>
> LogLikJacC =
>   Compile[{{y, _Integer, 1}, {x, _Real, 2}, {b, _Real, 1}},
>     Module[{s = Map[If[# == 1, 1, -1] &, y]},
>       Apply[Plus,
>         -Map[Outer[Times, #, #] &, x] *
>           ((pdf[#] cdf[#] # + pdf[#]^2)/cdf[#]^2 &)[s x . b]]],
>     {{pdf[_], _Real, 1}, {cdf[_], _Real, 1}, {s, _Integer, 1}}]
>
> Best regards,
> Johannes Ludsteck
>
> <><><><><><><><><><><><>
> Johannes Ludsteck
> Economics Department
> University of Regensburg
> Universitaetsstrasse 31
> 93053 Regensburg
> Phone +49/0941/943-2741