MathGroup Archive 2012

Re: NMinimize Method suboptions

  • To: mathgroup at smc.vnet.net
  • Subject: [mg124156] Re: NMinimize Method suboptions
  • From: "Oleksandr Rasputinov" <oleksandr_rasputinov at hmamail.com>
  • Date: Wed, 11 Jan 2012 04:24:00 -0500 (EST)
  • Delivered-to: l-mathgroup@mail-archive0.wolfram.com
  • References: <op.v7v2htwiqcgwdu@core2.lan>

After some investigation, it seems I am in a position to answer my own  
questions, so I am following up on my earlier message in case anyone is  
interested.

On Tue, 10 Jan 2012 22:09:19 -0000, Oleksandr Rasputinov  
<oleksandr_rasputinov at hmamail.com> wrote:

> 1. We know that some possible values for the "PostProcess" option are  
> "KKT", "InteriorPoint", and FindMinimum (with the undocumented Method  
> suboption giving access to a rather wide range of possibilities). Are  
> there any others?

No. (Except Automatic, which means either "KKT" or "InteriorPoint", or  
both, in either order, depending on the value of  
Optimization`NMinimizeDump`$PostprocessOrder.)

> And is there any difference between the option values "InteriorPoint"  
> and {FindMinimum, Method -> "InteriorPoint"}?

Different code paths are followed in setting up the minimization, but the  
same core algorithm is used in both cases.
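
For concreteness, here is a purely illustrative call (the objective and  
constraint are my own, chosen only so that the post-processing step has  
something to do) showing both forms of the suboption discussed above:

  obj = (x - 2)^2 + (y + 1)^2;
  cons = {x + y >= 2};

  (* string form of the suboption *)
  NMinimize[{obj, cons}, {x, y},
   Method -> {"DifferentialEvolution", "PostProcess" -> "InteriorPoint"}]

  (* FindMinimum form, using the undocumented Method suboption
     mentioned in my original question *)
  NMinimize[{obj, cons}, {x, y},
   Method -> {"DifferentialEvolution",
     "PostProcess" -> {FindMinimum, Method -> "InteriorPoint"}}]

Both calls should return essentially the same minimum, consistent with the  
observation above that they reach the same core algorithm.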

> 2. For the "DifferentialEvolution" method, the documentation states that  
> recombination is according to Storn and Price's rand/1 scheme. Are any  
> other schemes implemented?

No. In fact, even the rand/1 scheme is not implemented "correctly" (i.e.  
per the description given by Storn and Price): the vectors chosen for  
recombination are picked completely at random, whereas strictly speaking  
differential evolution requires them to be mutually distinct as well as  
distinct from the base vector (i.e. the one they compete against in the  
selection tournament). The difference will not matter for a large enough  
number of search points, but it may lead to misconvergence in some cases.
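
For reference, a minimal sketch (my own, not NMinimize's code; deStep, pop,  
fscale and cr are illustrative names) of one rand/1/bin trial-vector  
construction in which the donor indices are kept mutually distinct and  
distinct from the base index, as Storn and Price describe:

  deStep[pop_, i_, fscale_, cr_] :=
   Module[{n = Length[pop], d = Length[pop[[i]]], r1, r2, r3, mutant, jrand},
    (* three mutually distinct donors, none equal to the base index i *)
    {r1, r2, r3} = RandomSample[Delete[Range[n], i], 3];
    mutant = pop[[r1]] + fscale (pop[[r2]] - pop[[r3]]);
    (* binomial crossover; jrand forces at least one mutant component through *)
    jrand = RandomInteger[{1, d}];
    Table[If[j == jrand || RandomReal[] < cr, mutant[[j]], pop[[i, j]]], {j, d}]]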

Also, the documentation states that differential evolution is  
computationally intensive. Since the algorithm itself is very simple and  
can be expressed efficiently in terms of array operations, one might take  
this to refer only to the fact that it requires a large number of function  
evaluations compared with local search methods. On closer inspection,  
however, the implementation turns out to be written in a procedural style  
(inappropriately so, in my opinion) with little attention to efficiency,  
and I now suspect that this is the real reason for its relatively poor  
performance. This is unfortunate given the strong potential of differential  
evolution for heavy-duty minimization problems, but perhaps it will be  
addressed in the future.
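
To illustrate what I mean by array operations, here is a rough sketch (again  
my own, with illustrative names, no bound or constraint handling, and the  
forced-crossover coordinate omitted for brevity) of a whole-generation  
rand/1/bin update in which the mutation, crossover and selection arithmetic  
is applied to the population as a whole:

  deGeneration[pop_, f_, fscale_, cr_] :=
   Module[{n = Length[pop], d = Length[First[pop]],
     donors, mutants, mask, trials, keep},
    (* three mutually distinct donors per member, excluding the member itself *)
    donors = Table[RandomSample[Delete[Range[n], i], 3], {i, n}];
    mutants = pop[[donors[[All, 1]]]] +
      fscale (pop[[donors[[All, 2]]]] - pop[[donors[[All, 3]]]]);
    (* binomial crossover applied to the whole population at once *)
    mask = UnitStep[cr - RandomReal[1, {n, d}]];
    trials = mask mutants + (1 - mask) pop;
    (* selection tournament: keep each trial wherever it is no worse *)
    keep = UnitStep[(f /@ pop) - (f /@ trials)];
    keep trials + (1 - keep) pop]

Here f is assumed to take the coordinate vector of one population member,  
e.g. f = Function[v, v.v]; none of this is taken from the actual  
implementation.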


