MathGroup Archive 2012


Mathematica's FindMinimum does not solve trivial optimization problems

I am trying to solve a simple optimization problem (nonlinear constrained) derived from the finite difference (backward differences, implicit Euler) discretization of a very simple Optimal Control problem.

The derived problem is essentially linear with only bilinearities in the constraints.

I have tried to use FindMinimum as I have been developing code in Mathematica for very general optimal control problems, and this was a test of the solver routine.

To my surprise, and despite being advertised in the "Constrained Optimization" documentation as an "industrial strength interior point method", it cannot solve a problem that other solvers, such as CONOPT (which I tested via GAMS), handle in milliseconds.

The observations are twofold:

(a) In the version of Mathematica I am using, I have found that if a variable is equated to a constant in the constraints passed to FindMinimum, the solver produces a division by zero. Here is sample code demonstrating this:

=== Begin sample code (a) ===============

FindMinimum[{(x - 1.0)^2 + (y - 2.0)^2,
  x == 2.0}, {{x, 1.0}, {y, 1.0}}]

=== Output ===================

FindMinimum[{(x - 1.0)^2 + (y - 2.0)^2,
  x == 2.0}, {{x, 1.0}, {y, 1.0}}]

Power::infy: Infinite expression 1/0. encountered. >>

Power::infy: Infinite expression 1/0. encountered. >>

Infinity::indet: Indeterminate expression ComplexInfinity+ComplexInfinity encountered. >>

Power::infy: Infinite expression 1/0. encountered. >>

General::stop: Further output of Power::infy will be suppressed during this calculation. >>

Infinity::indet: Indeterminate expression 0. ComplexInfinity encountered. >>

Infinity::indet: Indeterminate expression 0. ComplexInfinity encountered. >>

General::stop: Further output of Infinity::indet will be suppressed during this calculation. >>

LinearAlgebra`LAPACK`SYTRF::mindet: Input matrix contains an indeterminate entry. >>

FindMinimum::conv: Interior point method fails to converge. >>

=== End Output =========

=== End sample code (a) ================
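One workaround for case (a) (my own sketch, not taken from the documentation) is to eliminate the fixed variable by substitution before calling FindMinimum, so that the degenerate equality constraint never reaches the interior point code:

```mathematica
(* hypothetical workaround: substitute x -> 2.0 into the objective and
   minimize over y alone; the constraint x == 2.0 holds by construction *)
obj = (x - 1.0)^2 + (y - 2.0)^2 /. x -> 2.0;
FindMinimum[obj, {{y, 1.0}}]
(* returns {1., {y -> 2.}} *)
```

This sidesteps the division by zero, though it only helps when the fixed value is known before the call, as it is here.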

(b) The optimal control problem in question is the following:

minimize tf
subject to
  dv/dt = u
  dx/dt = v
  x(0) = 0, v(0) = 0,
  x(tf) = 300, v(tf) = 0.
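For reference, discretizing the dynamics with backward differences (implicit Euler) on a grid of n steps with free step sizes h_i yields the bilinear constraints that appear in the code below:

```latex
% minimum-time problem after implicit Euler discretization;
% the products h_i v_i and h_i u_i are the only bilinear terms
\min \sum_{i=1}^{n} h_i
\quad \text{s.t.} \quad
x_i - x_{i-1} = h_i\, v_i, \qquad
v_i - v_{i-1} = h_i\, u_i, \qquad i = 1,\dots,n,
```

together with the boundary values x_0 = v_0 = 0, x_n = 300, v_n = 0 and the bounds on u_i and h_i enforced in the code.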

The input code for it is as follows:

=== Begin sample code (b) ===============

(* try to define the car problem directly below *)
(* v is velocity, x is distance, h are the step sizes, u are the controls *)

nsteps = 13;

vlist = Table[v[i], {i, 0, nsteps}];

xlist = Table[x[i], {i, 0, nsteps}];

hlist = Table[h[i], {i, 1, nsteps}];

ulist = Table[u[i], {i, 1, nsteps}];

varslist = Join[vlist, xlist, hlist, ulist, {eps}];

obj0 = Sum[h[i], {i, 1, nsteps}] + 10.^3*(eps - 1.0)^2;

odes = Flatten[
   Table[{x[i] - x[i - 1] == v[i]*h[i],
     v[i] - v[i - 1] == h[i]*u[i]}, {i, 1, nsteps}]];

ubounds = Flatten[Table[{u[i] <= 1.0, u[i] >= -2.0}, {i, 1, nsteps}]];

hbounds = Flatten[Table[{h[i] <= 100.0, h[i] >= 1.0}, {i, 1, nsteps}]];

(* these boundary conditions have a problem: equating a single variable
to a fixed value does not seem to work with FindMinimum as a valid
constraint for some reason; I have used an extra dummy variable (eps)
to get the problem to run... *)
boundary = {x[0]*eps == 0.0, v[0]*eps == 0.0, x[nsteps]*eps == 300.0,
   v[nsteps]*eps == 0.0};

constr0 =
  Join[odes, ubounds, hbounds, boundary, {eps <= 1.000001, 0 <= eps}];

nvars0 = Dimensions[varslist][[1]];

vars0 = Transpose[{varslist, Table[1.0, {nvars0}]}];
vars0 = Join[Take[vars0, nvars0 - 1], {{eps, 1.0}}];

FindMinimum[{obj0, constr0}, vars0, Method -> "InteriorPoint"]

============ End of sample code (b) ======
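One thing worth trying (a sketch on my part using only documented FindMinimum options, not a known fix) is to give the interior point method more iterations and explicit tolerances, since the message above reports non-convergence after 500 iterations:

```mathematica
(* assumption: the default iteration limit is simply too tight for this
   nonconvex problem; MaxIterations, AccuracyGoal and PrecisionGoal are
   standard FindMinimum options *)
FindMinimum[{obj0, constr0}, vars0,
  Method -> "InteriorPoint", MaxIterations -> 5000,
  AccuracyGoal -> 6, PrecisionGoal -> 6]
```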

The output from running this code is as follows:

===== Begin sample output (b) ============

FindMinimum::eit: The algorithm does not converge to the tolerance of 4.806217383937354`*^-6 in 500 iterations. The best estimated solution, with feasibility residual, KKT residual, or complementary residual of {0.000289013,0.0284091,2.28413*10^-11}, is returned. >>

== End sample output (b) ===============

I note that the problem is clearly nonconvex, and I wonder whether the implementation of the interior point method takes this into account. The Hessian of the Lagrangian has to be modified in order to produce a descent direction, but this is too elementary a point to have been overlooked in professional code shipped with a commercial package such as Mathematica.
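For concreteness, the standard remedy I have in mind (textbook modified-Newton material, not something I have verified about Mathematica's internals) replaces the Hessian of the Lagrangian by a positive definite perturbation of it:

```latex
% shift the Hessian so that the resulting Newton step is a descent direction:
B_k = \nabla^2_{xx} L(x_k, \lambda_k) + \tau_k I, \qquad
\tau_k \ge 0 \ \text{chosen so that}\ B_k \succ 0 .
```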

I also note that a GAMS Nonlinear Programming solver (CONOPT) has no problem with this model. Different starting points give different solutions (as expected, given the nonconvexity of the model), but it never fails to converge to a point satisfying the KKT conditions and feasibility.

If problems like this are encountered with FindMinimum on such basic nonlinear programming models, then it is not a reliable part of the software at all.

Having a native solver for large-scale nonlinear programs in Mathematica is essential: one can develop fairly sophisticated models and input interfaces with the basic Mathematica facilities and then call the solver repeatedly (implementations of certain optimization methods, such as decomposition, require nested solution of optimization subproblems). Building such loops around external software would be very hard, and generating the gradients and Hessians for it would itself become a problem.

I hope that someone has an answer that does not require resorting to an external solver or writing an interior point solver of our own in Mathematica, as that would be very inefficient.

Sincere regards,
