MathGroup Archive 2004


Re: parallel NMinimize[]

  • To: mathgroup at smc.vnet.net
  • Subject: [mg50579] Re: parallel NMinimize[]
  • From: "Michal Kvasnicka" <Anti at Spam.net>
  • Date: Sat, 11 Sep 2004 06:44:29 -0400 (EDT)
  • References: <chmova$a4j$1@smc.vnet.net> <chroam$50c$1@smc.vnet.net>
  • Sender: owner-wri-mathgroup at wolfram.com

Mark,

your suggestion is simple and easy to test, but it is really a "brute-force"
approach. Its effectiveness decreases rapidly for high-dimensional
optimization problems because of the amount of subdomain overlap required.
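
One crude way to see the overhead (overlapFactor is just a throwaway name,
and the model is deliberately rough): if every subdomain has to be padded by
a fraction delta of its extent along each of the d coordinates so that
neighbouring pieces overlap, the volume searched grows like (1 + delta)^d
times that of the unpadded partition:

  (* rough model of the redundant work caused by overlapping pieces *)
  overlapFactor[delta_, d_] := (1. + delta)^d

  overlapFactor[0.1, 2]    (* about 1.2 -- negligible in 2 dimensions     *)
  overlapFactor[0.1, 50]   (* about 117 -- the padding dominates in 50-D  *)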

Michal
"Mark Westwood [EPCC]" <markw at epcc.ed.ac.uk> pí¹e v diskusním pøíspìvku
news:chroam$50c$1 at smc.vnet.net...
> Robert,
>
> Forgive me if I've misunderstood your problem or differential evolution,
> but here's a suggestion.
>
> You want to search for the global minimum of a function on some domain ?
>
> Define a form of the NMinimize call you wish to evaluate, parameterised
> in terms of p, where p = 1, 2, 3, ... is the number of the processor on
> which it will run.
>
> For example, if your whole domain were [0,1) (yes, I know, a very simple
> example, but work with me on this!), you have P processors, and p is the
> identifier of a processor, then your function might be:
>
>   NMinimize[ { f[x], (p - 1)/P <= x < p/P }, {x},
>     Method -> "DifferentialEvolution" ]
>
> Use the parallel toolkit to send this function to each of the worker
> processors on your cluster.  Each will then evaluate the function over
> 1/P of the whole domain.  Bring the answers together and choose the
> global minimum from them.  You'll have to figure out for yourself how
> each processor knows its own identity; I don't have the parallel toolkit.
>
> I guess that this approach might break down at the boundaries between
> sub-domains, so you might want to send overlapping domains out to avoid
> that problem.
>
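
For what it is worth, the whole recipe might look roughly like this.
This is an untested sketch, not the toolkit's exact API: the names P,
delta, pieces and minimizeOn are mine, f is the objective being discussed,
and I am assuming ParallelMap (and something like DistributeDefinitions in
current versions) is available to push work and definitions to the workers:

  (* split [0,1) into P overlapping pieces, run one NMinimize per
     piece in parallel, and keep the best result; f and minimizeOn
     must of course be known on the workers *)
  P = 4;            (* number of worker processors                 *)
  delta = 0.05;     (* padding so the pieces overlap at the seams  *)
  pieces = Table[{Max[0, (p - 1)/P - delta], Min[1, p/P + delta]}, {p, P}];

  minimizeOn[{lo_, hi_}] :=
    NMinimize[{f[x], lo <= x <= hi}, {x},
      Method -> "DifferentialEvolution"]

  results = ParallelMap[minimizeOn, pieces];
  First[Sort[results]]    (* {smallest minimum found, {x -> position}} *)
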
> I don't know what InitialPoints and SearchPoints are, but from the
> documentation I think that they are:
>
> InitialPoints - a guess of where to start searching; you would want to
> ensure that this lies within the subdomain passed to each processor.
>
> SearchPoints - how many points within the domain to examine; more points
> -> more time, but also more accuracy and less chance of returning a
> merely local minimum.
>
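
If I read the NMinimize documentation right, these two options are passed
inside the Method specification. The numbers and starting points below are
made up, for a single processor's subdomain [0, 0.25], with each initial
point given as a list of coordinate values:

  pts = Table[{Random[Real, {0, 0.25}]}, {20}];   (* 20 starting points *)
  NMinimize[{f[x], 0 <= x <= 0.25}, {x},
    Method -> {"DifferentialEvolution",
               "SearchPoints" -> 20, "InitialPoints" -> pts}]
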
> Hope this is of some use
>
> Regards
> Mark Westwood
> Edinburgh Parallel Computing Centre
>
> robert wrote:
> > dear all,
> >
> > I am trying to use NMinimize[f[x], x] from a Windows-based Mathematica
> > to minimize a function f[x] that is evaluated via RUN[] and rsh
> > remote-shell commands on a Linux cluster.
> > This works fine so far.
> >
> > Now I would like to exploit the parallel capabilities of the Linux
> > cluster.
> >
> > Usually NMinimize[] calls f[x] for a single value x at each step.
> > The nature of NMinimize[f[x], x, Method -> "DifferentialEvolution"],
> > as I understand it, causes a whole population of x values to be
> > evaluated before minimization progresses.
> > In order to profit from the cluster I would need NMinimize[] to
> > evaluate f[] for the whole population in a single call, e.g.
> > f[{x1, x2, x3, x4, ..., xn}] with n being the size of the actual
> > population, so that NMinimize[] would call the function to be
> > minimized with a whole list of x values instead of calling it with
> > just a single value. This way the cluster could evaluate all the
> > (time-expensive) function calls in parallel and return a list of
> > results {f[x1], f[x2], f[x3], ..., f[xn]} to NMinimize[].
> >
> > is there any chance to achieve this ?
> >
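
What robert is asking for would, I think, amount to a definition of f that
accepts the whole population at once (one-dimensional case shown;
runOnCluster is a made-up stand-in for the existing RUN[]/rsh machinery).
Whether NMinimize can be persuaded to call f this way is exactly the open
question:

  f[pop_List] := runOnCluster[pop]   (* {x1, ..., xn} -> {f[x1], ..., f[xn]} in one round trip *)
  f[x_?NumericQ] := First[f[{x}]]    (* a single point is just a batch of one *)
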
> > Does anyone know what the following "DifferentialEvolution"
> > options are used for?
> >
> > "InitialPoints"  set of initial points
> > "SearchPoints" size of the population used for evolution
> >
> > Note that I am not asking for any parallel action of Mathematica itself.
> >
> > thanks robert
> >
>


