MathGroup Archive 2001


Re: looking inside FindMinimum

  • To: mathgroup at smc.vnet.net
  • Subject: [mg29319] Re: [mg29304] looking inside FindMinimum
  • From: "Mark Harder" <harderm at ucs.orst.edu>
  • Date: Wed, 13 Jun 2001 03:10:49 -0400 (EDT)
  • Sender: owner-wri-mathgroup at wolfram.com

Johannes,
    If I understand you correctly, you are trying to use an unconstrained
minimization procedure with a penalty function to accomplish a constrained
(nonlinear?) minimization.  There is a method for this in the literature,
called the Sequential Unconstrained Minimization Technique (SUMT; see various
books on optimization).  The algorithm is really quite simple; implemented
with FindMinimum, it looks like this:

1.)    lambda = 1;
        assign del               (*  0 < del <= 1  *)
        assign outerTolerance    (*  stopping criterion on Norm[ new{x,y} - old{x,y} ]  *)
2.)    minOld = FindMinimum[ f[x,y] + lambda*barrier[x,y], x, y, etc... ]
3.)    lambda = lambda*del      (*  shrink the barrier-function multiplier  *)
4.)    g[x,y] = f[x,y] + lambda*barrier[x,y]
5.)    minNew = FindMinimum[ g[x,y], x, y, etc... ]
6.)    If[ Abs[ minNew - minOld ] > outerTolerance,
            then set minOld = minNew and go back to step 3.),
            else stop ]
7.)    End
With each iteration of the SUMT loop (steps 3-7), the barrier penalty gets
smaller and the solution vector settles into a local minimum that reflects
the underlying objective function more accurately.  I have successfully used
this method in a Fortran program with an IMSL unconstrained nonlinear
routine.  Upon reading the Mathematica docs, it seems you could implement
the outer loop (steps 3-7) using NestWhile[], which does much of the work
for you.  Perhaps this is less efficient than the method you had in mind,
but I think it will work with FindMinimum in its standard form (using it
without pushing its limits can be enough of a headache; best leave it alone,
I think).  Of course, the tolerances in FindMinimum should be finer than
what you pick for outerTolerance in SUMT.
-mark harder
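
For concreteness, here is one rough sketch of how the NestWhile[] version of
the outer loop might look.  The names f, barrier, del, outerTolerance, and
the starting values below are placeholders rather than anything from the
original posts, and the details would need adapting to the real problem:

sumtStep[{lambda_, min_}] :=
  With[{newLambda = del*lambda},
    (* re-minimize with a smaller barrier multiplier, restarting from the
       previous solution; min has the form {fmin, {x -> ..., y -> ...}} *)
    {newLambda,
     FindMinimum[f[x, y] + newLambda*barrier[x, y],
       {x, x /. min[[2]]}, {y, y /. min[[2]]}]}]

sumt[lambda0_, x0_, y0_] :=
  NestWhile[
    sumtStep,
    {lambda0, FindMinimum[f[x, y] + lambda0*barrier[x, y], {x, x0}, {y, y0}]},
    (* continue while successive minima differ by more than the tolerance;
       NestWhile with fourth argument 2 compares the two most recent states *)
    Abs[#1[[2, 1]] - #2[[2, 1]]] > outerTolerance &,
    2]

If derivatives of f are unavailable, as in the original question, each
FindMinimum call would instead take two starting values per variable, e.g.
{x, xa, xb}, so that a derivative-free method is used.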

-----Original Message-----
From: Johannes Ludsteck <johannes.ludsteck at wiwi.uni-regensburg.de>
To: mathgroup at smc.vnet.net
Subject: [mg29319] [mg29304] looking inside FindMinimum


>Dear Mathematica Gurus,
>I would like to minimize a function f[x,y] with
>respect to the arguments (x,y). The special
>problem is that the simple constraint
>y = 1-x.x > 0, where x is (x1,x2,...,xn), must hold.
>I do this by minimizing
>f[x,y] + barrier[1-x.x]
>where barrier is a linear function of 1-x.x and
>grows if y approaches 0 or becomes < 0:
>barrier[x_,z_]:=If[#<z,10(1-#/z),0.0]&[1-x.x];
>I.e., barrier > 0 if 1-x.x < z, where z is a
>small positive number. z is set to a small number
>when the optimization starts, but should approach
>0 in the course of the optimization routine in
>order to avoid distorting the search.
>I would like to decrease z after every
>optimization step. But in order to do this I have
>to ask FindMinimum for the number of steps.
>Until now I do this by
>z=0.1
>FindMinimum[f[y,x]+barrier[1-x.x,z*=decrement],...]
>but this is very unsatisfactory, because z is
>decremented every time f[]+barrier[] is
>evaluated.
>I would like to adjust z in a more sophisticated
>way. But this is possible only if there is some
>way to get more information out of FindMinimum,
>for example the number of iterations, etc.
>Is there any way to milk FindMinimum?
>By the way: derivatives of f[y,x] cannot be
>computed. Thus I provide two starting values for
>every x and FindMinimum uses a derivative-free
>method.
>
>Thank you and best regards,
> Johannes Ludsteck
>
>
>
><><><><><><><><><><><><><><><><><><>
>Johannes Ludsteck
>Institut fuer Volkswirtschaftslehre
>Universitaet Regensburg
>Universitaetsstrasse 31
>93053 Regensburg
>Tel +49/0941/943-2741
>
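
For reference, here is one self-contained reading of the setup described in
the quoted message.  Only the barrier definition is taken from the post; the
quadratic objective f, the wrapper obj, and the starting values are
illustrative placeholders:

f[x_List] := (x[[1]] - 2)^2 + (x[[2]] - 2)^2;        (* placeholder objective *)
barrier[x_, z_] := If[# < z, 10 (1 - #/z), 0.0] &[1 - x.x];

z = 0.1;
(* numeric-only wrapper so FindMinimum never sees the If[] symbolically *)
obj[x1_?NumericQ, x2_?NumericQ] := f[{x1, x2}] + barrier[{x1, x2}, z];

(* two starting values per variable => a derivative-free search, as in the post *)
FindMinimum[obj[x1, x2], {x1, 0.1, 0.2}, {x2, 0.1, 0.2}]

Any scheme for shrinking z (or the multiplier lambda in SUMT) would then wrap
a call like this in an outer loop of the kind sketched earlier.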


