MathGroup Archive 1999


RE: RE: How can I control FindMinimum's behavior?

  • To: mathgroup at
  • Subject: [mg19137] RE: RE: [mg19037] How can I control FindMinimum's behavior?
  • From: Shiraz Markarian <shmarkar at>
  • Date: Thu, 5 Aug 1999 23:58:40 -0400
  • Organization: Northwestern University
  • Sender: owner-wri-mathgroup at

Hi Ted and everyone,

Let me first thank you for your prompt reply. I have been following the
discussion group for some time, and I really appreciate your and this
group's willingness to help so many people with their problems.
I have slowly begun to realize that the QuasiNewton method requires
analytic derivatives. The problem I sent the group was perhaps an
oversimplification of what I have on my hands. In the real case, the
target function is built from the product of a known matrix A and a trial
matrix X (w = A.X, where A is a 73x3 matrix and X is a 3x3 matrix). I
then pick out all the negative values from the w matrix and assign a
penalty function, 50*Sum(w(i,j)^2) over all negative w(i,j). It is this
penalty function that I want to minimize with respect to the matrix
elements of X. I wonder if GlobalOptimization is better suited to this
problem? Are there other algorithms that I should or could use? I looked
in MathSource and found some annealing/genetic algorithms.
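To make the setup above concrete, here is a small NumPy sketch of the penalty function and its analytic gradient, which works out to 100 * A^T . min(A.X, 0). This is not the poster's actual code or data: the random A and the identity starting X are made-up stand-ins, and the check against finite differences is just a sanity test of the derivation.

```python
import numpy as np

def penalty(x, A):
    """50 * Sum(w[i,j]^2) over the negative entries of w = A.X."""
    neg = np.minimum(A @ x.reshape(3, 3), 0.0)   # negative part of w
    return 50.0 * np.sum(neg ** 2)

def penalty_grad(x, A):
    """Analytic gradient: dF/dX = 100 * A.T @ min(A.X, 0), flattened."""
    neg = np.minimum(A @ x.reshape(3, 3), 0.0)
    return (100.0 * A.T @ neg).ravel()

# Stand-in data: a random 73x3 matrix A, identity as the starting X.
rng = np.random.default_rng(0)
A = rng.standard_normal((73, 3))
x0 = np.eye(3).ravel()

# Sanity check: analytic gradient matches central finite differences.
eps = 1e-6
fd = np.array([(penalty(x0 + eps * e, A) - penalty(x0 - eps * e, A))
               / (2 * eps) for e in np.eye(9)])
print(np.max(np.abs(fd - penalty_grad(x0, A))))  # tiny relative to the gradient
```

Note that although min(w,0) has a kink at zero, each squared term is continuously differentiable, so the penalty has a well-defined continuous gradient everywhere.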

You might wonder why I am insisting on QuasiNewton. There are two
reasons. One is that the original reference uses the BFGS algorithm;
secondly, I find that when I set Method->Automatic, the elements of X are
varied one at a time (a stiff problem?) and the minimization inevitably
returns the input values back. Apparently the BFGS algorithm worked in
the original reference for an identical problem. Is there any way to
input a gradient for the above problem so as to get the QuasiNewton
method to work? I am using Mathematica for Students 3.0.1, if that is
relevant.
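For what it's worth, the idea being asked about here, handing the optimizer an explicit gradient so a quasi-Newton (BFGS) method can run on a function with no usable symbolic derivative, can be sketched outside Mathematica with SciPy's BFGS implementation. The penalty and gradient follow the description above; the random A and identity starting X are stand-ins, not the actual data:

```python
import numpy as np
from scipy.optimize import minimize

def penalty(x, A):
    # 50 * sum of squared negative entries of w = A.X
    neg = np.minimum(A @ x.reshape(3, 3), 0.0)
    return 50.0 * np.sum(neg ** 2)

def penalty_grad(x, A):
    # analytic gradient: 100 * A.T @ min(A.X, 0), flattened
    neg = np.minimum(A @ x.reshape(3, 3), 0.0)
    return (100.0 * A.T @ neg).ravel()

rng = np.random.default_rng(1)
A = rng.standard_normal((73, 3))   # stand-in for the known 73x3 matrix
x0 = np.eye(3).ravel()             # stand-in starting point

# jac= plays the same role as FindMinimum's Gradient option.
res = minimize(penalty, x0, args=(A,), method="BFGS", jac=penalty_grad)
print(res.fun)  # should be near zero, since X = 0 already gives w = 0
```

With the gradient supplied, BFGS never needs symbolic derivatives of the objective, which is exactly the workaround the Gradient option provides in FindMinimum.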

Thanks for your help,
Rajdeep Kalgutkar

> Rajdeep Kalgutkar wrote:
> ---------------------------
> I have been using FindMinimum for a multidimensional minimization
> problem. I find that when Method->Automatic is used, my input parameter
> values are passed on to my target function immediately, but when I use
> Method->QuasiNewton, FindMinimum insists on symbolic input during the
> minimization cycle. This happens even though I specify 2 input values.
> My target function does not have symbolic derivatives, and therefore
> the minimization procedure crashes when using the QuasiNewton method.
> Is there any way to prevent FindMinimum from insisting on symbolic
> input?
> ----------------------------
> Apparently the QuasiNewton method needs to have the symbolic
> derivative. In your example the function does have a symbolic
> derivative. The only problem is that Mathematica can't figure it out
> because of the way (func) is defined. What you need to do is use the
> Gradient option to tell FindMinimum what the derivative is. It's called
> Gradient because that's what you supply for multi-dimensional problems.
> Also the Quasi-Newton method only needs one starting point.
> My solution is below.
> --------------------------
> In[1]:=func[x_]:=Module[{w},
>              w=Sin[x]^2+0.2*x;
>              Print[w," ",x];
>              w
>            ]
> In[2]:=
> FindMinimum[func[x], {x,0.},
>   Method->QuasiNewton,
>   Gradient->{2 Sin[x]Cos[x]+0.2}
> ]
> 0.  0.
> -0.000530497   -0.2
> -0.0100297     -0.102717
> -0.0100337     -0.100623
> -0.0100337     -0.100679
> Out[2]=
> {-0.0100337,{x->-0.100679}}
> -----------------
> Regards,
> Ted Ersek
