MathGroup Archive 1999


Re: Constrained optimization.

  • To: mathgroup at smc.vnet.net
  • Subject: [mg20734] Re: [mg20708] Constrained optimization.
  • From: "Mark E. Harder" <harderm at ucs.orst.edu>
  • Date: Wed, 10 Nov 1999 00:17:47 -0500
  • Sender: owner-wri-mathgroup at wolfram.com

John,
    A very qualified yes: you can use NonlinearRegress (and, I assume,
NonlinearFit as well) with *boundary* constraints on the range of parameter
values.  Look under StandardPackages -> Statistics -> NonlinearFit in the
Help menu, or in that chapter of the documentation, and you will find the
following in the description of the parameter entry syntax: "A parameter
can be specified to lie within a prescribed range using {symbol, start,
min, max} or {symbol, min, max}" (an example call is sketched below).
HOWEVER, I've tried this, and in my experience, when the gradient of the
error surface points out of bounds, the program simply emits an error
message and returns an out-of-bounds parameter vector!  Evidently, the
algorithm does not use some sort of penalty function to enforce the
constraints; it merely halts when it's out of bounds.  I should note that
1.) I've only tried this with the default Levenberg-Marquardt procedure,
not with the alternative steepest-descent method (Method -> FindMinimum).
The documentation describes a boundary-constraint mechanism there, too.
2.) I am still using version 3.0.x.  Maybe there has been an improvement
in v4.0?
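
For concreteness, here is roughly what a call using that range syntax
looks like; the exponential model, the fake data, and the bounds are just
an invented example of mine, not anything from the documentation:

    Needs["Statistics`NonlinearFit`"]

    (* fake data from a*Exp[-k t] with a = 2.0, k = 0.7, plus noise *)
    data = Table[{t, 2.0 Exp[-0.7 t] + 0.05 (Random[] - 0.5)},
                 {t, 0., 4., 0.25}];

    NonlinearRegress[data,
      a Exp[-k t],                           (* model *)
      {t},                                   (* independent variable *)
      {{a, 1.5, 0., 10.}, {k, 1., 0., 5.}}]  (* {symbol, start, min, max} *)

(FindMinimum accepts the analogous {x, x0, xmin, xmax} form for its
starting values.)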

    Does anyone else have anything enlightening to say about the
limitations w.r.t. constraints?  It seems to me it might be possible to
implement a SUMT (sequential unconstrained minimization technique) method
using a penalty function which gets relaxed with each iteration of the LM
search.  I have done this successfully in FORTRAN, but I haven't yet tried
it with Mathematica.  The idea is that you enclose your model function
together with a weighted penalty function, the weighting factor being
passed in through the enclosing function call.  The first time NLRegress
is called, the weight is large enough that the search cannot escape the
combined error+penalty surface, so it stays in the center (more or less)
of the feasible range of parameter space.  The NLRegress call is then
repeated, each time with a smaller penalty weight, so that the combined
surface resembles the unaltered problem's surface with increasing
accuracy.  When the parameter estimate no longer changes significantly,
the penalty is dropped entirely, and, hopefully, the last estimate is
still in-bounds and surrounded by higher error values, so that the final
search converges on the local minimum.
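
Something along these lines is what I have in mind; the penalty form, the
model, and all the names here are my own choices, meant only to sketch
the idea (I have not actually tested this against the packages):

    Needs["Statistics`NonlinearFit`"]

    (* Smooth penalty: near 0 at the center of [min, max], 1 at the
       boundaries, growing steeply outside.  A smooth form keeps the
       Levenberg-Marquardt derivatives well defined everywhere. *)
    penalty[p_, min_, max_] := ((2 p - (min + max))/(max - min))^10

    (* Fold the weighted penalty into the model itself, so that an
       out-of-bounds k ruins the fit; restart each pass from the last
       estimate and relax the weight tenfold per pass. *)
    sumtFit[data_, w0_, nPasses_] :=
      Module[{w = w0, est = {a -> 1.5, k -> 1.}, fit},
        Do[
          fit = NonlinearRegress[data,
                  a Exp[-k t] + w penalty[k, 0., 5.],
                  {t},
                  {{a, a /. est}, {k, k /. est}},
                  RegressionReport -> BestFitParameters];
          est = BestFitParameters /. fit;
          w = w/10.,
          {nPasses}];
        est]

A call like sumtFit[data, 100., 5] would make five passes; once the
estimate settles, one last NonlinearRegress with no penalty, started from
that estimate, should converge on the in-bounds local minimum.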
-mark

-----Original Message-----
From: I. Ioannou <iioannou at u.washington.edu>
To: mathgroup at smc.vnet.net
Subject: [mg20734] [mg20708] Constrained optimization.


>Hi all,
>
>I am trying to run a nonlinear fit with constraints. I can't seem to find
>a package in Mathematica to do this, although I may have missed it. There
>is a reference in MathSource (0207-289, multiplier method) but I was
>wondering if one of the available packages would work also.
>
>I have been working with NonlinearFit and NonlinearRegress, for example,
>so I was wondering if I could use them for fits with constraints.
>
>Thanks for any pointers.
>
>John
>
>
>--
>      Ioannis   I    Ioannou                   phone: (206)-543-1372
>      g-2 group, Atomic Physics                fax:   (206)-685-0635
>      Department of Physics
>      University of Washington        e-mail: iioannou at u.washington.edu
>


