MathGroup Archive 1997


Re: method used in NonlinearFit

  • To: mathgroup at smc.vnet.net
  • Subject: [mg8346] Re: [mg8183] method used in NonlinearFit
  • From: seanross at worldnet.att.net
  • Date: Tue, 26 Aug 1997 02:22:40 -0400
  • Sender: owner-wri-mathgroup at wolfram.com

Laura Thompson wrote:
> 
> Where can I find more detail about the method used in NonlinearFit?
> I would like to know more than what the online book (or ??) gives me.
> I am interested in how Mma, specifically, solves the problem, w/o having to
> examine code.  But, maybe that would be the most helpful.
> 
> thank you.

To fit a function to data, a guess at the parameters must somehow be
converted into a single number that can be minimized.  This is usually
done by evaluating the guessed function at the same points as the data,
then taking the root mean square deviation between the two and
returning it.
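
Here is a minimal sketch of that idea in Mathematica (the data, the
model a x^2, and the starting guess are all made up for illustration).
Minimizing the sum of squared deviations gives the same parameters as
minimizing the RMS deviation, since one is a monotonic function of the
other, and because the model is symbolic, FindMinimum can take symbolic
derivatives of the single-number objective:

    data = {{1., 2.1}, {2., 3.9}, {3., 9.2}, {4., 15.8}};
    model = a x^2;
    (* deviation of the guess from each data point *)
    residuals = ((model /. x -> #[[1]]) - #[[2]]) & /@ data;
    (* sum of squared deviations: one number to minimize over a *)
    chisq = Plus @@ (residuals^2);
    FindMinimum[chisq, {a, 1}]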

The book says NonlinearFit uses the Levenberg-Marquardt method and
offers the FindMinimum and Gauss-Newton methods as alternatives.  I am
unfamiliar with the details of Levenberg-Marquardt, but I can offer
some general comments about minimization routines.  They fall into two
varieties:  slope followers and minimum followers.  The slope-following
routines calculate partial derivatives, either numerically or
symbolically, and use that information to make the next guess.  The
minimum followers calculate points with no derivatives and move toward
the minimum.  If your function is relatively cheap to calculate, or if
symbolic derivatives are available, then slope following is the best
way to go, because numeric partial derivatives are very expensive in
terms of function evaluations (a sketch of why follows below).  The
slope-following routines tend to reach the minimum in the fewest
possible "iterations", though many function evaluations may be required
in each iteration to calculate the partial derivatives.  The minimum
followers are less efficient, unless your function is very time
consuming to calculate, in which case they become less inefficient than
the slope followers.  Minimum followers are also often subject to false
convergence.  The better algorithms restart the routine a couple of
times just to make sure they weren't faked out.
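
To see where that expense comes from, here is a sketch of a
forward-difference gradient in Mathematica (the function numGrad and
the step size h are my own illustrative choices, not anything built
in):  estimating the slope numerically at one point costs n + 1
evaluations of the function for n parameters, on every iteration.

    (* forward-difference gradient: one evaluation at the base point,
       plus one more per parameter, so n + 1 calls to f in all *)
    numGrad[f_, pt_List, h_:10.^-6] :=
      Module[{f0 = f @@ pt},
        Table[((f @@ ReplacePart[pt, pt[[i]] + h, i]) - f0)/h,
          {i, Length[pt]}]]

    f[x_, y_] := (x - 1)^2 + 10 (y - 2)^2;
    numGrad[f, {0., 0.}]   (* roughly {-2., -40.} *)

If a single call to f means, say, solving a differential equation,
those extra calls dominate the cost, and a derivative-free minimum
follower starts to look attractive.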

Now to Mathematica.  The FindMinimum routine uses slope following
exclusively.  According to page 462 of the guide to add-on packages,
NonlinearFit under the Levenberg-Marquardt technique gradually shifts
from the method of steepest descent (slope following) to quadratic
minimization (minimum following), so it must try to achieve the best of
both worlds.  If you want to look at some code, find a copy of
"Numerical Recipes in ..." (Fortran, Pascal, C).  They have an entire
chapter devoted to minimization routines.  Their code is not all that
readable, as it is written in the spaghetti style that was common just
before the death of the GOTO statement, before While and Until loops
became common features of high-level languages.  The authors do a good
job of explaining their methods conceptually, so that should make up
for the awful code.
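
For completeness, here is roughly what the calls look like, assuming
the standard Statistics`NonlinearFit` add-on package described in the
guide (the data and model are made up, and the Method option values are
the ones the book names, with LevenbergMarquardt as the default):

    << Statistics`NonlinearFit`
    data = {{1., 2.1}, {2., 3.9}, {3., 9.2}, {4., 15.8}};
    (* Levenberg-Marquardt is the default method *)
    NonlinearFit[data, a x^2, x, {a}]
    (* or request one of the alternatives explicitly *)
    NonlinearFit[data, a x^2, x, {a}, Method -> GaussNewton]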

Good luck.  I hope that was what you were looking for.

