MathGroup Archive 1995

Re: How to formulate this while- loop in mathematica ?

  • To: mathgroup at christensen.cybernetics.net
  • Subject: [mg1516] Re: How to formulate this while- loop in mathematica ?
  • From: villegas (Robert Villegas)
  • Date: Wed, 21 Jun 1995 21:04:08 -0400
  • Organization: Wolfram Research, Inc.

> I would like to implement a Newton-like approximation in Mathematica,
> 
> which involves the following program fragment in C-like notation:
> 
>     while (condition1(a,b) > eps && condition2(a,b) > eps) {
>
>        /* some numerical computations */
>
>        a = ..... ;
>        b = ..... ;
>     }
> 
> eps is the tolerance that determines when the loop terminates.
> 
> My problem is how to formulate this algorithmic skeleton in Mathematica?


================
Method 1:  While
================

You could map this directly to Mathematica using While:

In[1]:= ? While
While[test, body] evaluates test, then body, repetitively, until test first
   fails to give True.

Something like this:


{a, b} = {a0, b0};

While[condition1[a, b] > eps  &&  condition2[a, b] > eps,
  (* some numerical computations *)
  a = aNew;
  b = bNew;
]

{a, b}
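
To make the skeleton concrete, here is a small self-contained sketch.
The update rule (a Babylonian-style iteration converging to Sqrt[2]),
the two conditions, and the tolerance 10.^-10 are stand-ins of my own,
not part of the original problem:


eps = 10.^-10;
{a, b} = {1., 2.};

condition1[a_, b_] := Abs[a - b]      (* gap between the two iterates *)
condition2[a_, b_] := Abs[a^2 - 2]    (* distance of a^2 from 2 *)

While[condition1[a, b] > eps  &&  condition2[a, b] > eps,
  (* both new values are computed from the old pair, then assigned *)
  {a, b} = {(a + b)/2, 2/((a + b)/2)}
]

{a, b}


The loop exits as soon as either quantity falls to eps or below, which
is exactly the negation of the && test.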


=====================
Method 2:  FixedPoint
=====================

A different implementation can be obtained with FixedPoint.

First, formulate the body of the 'while' loop as a function that
takes an ordered pair {a, b}, does the computations, and returns
an ordered pair containing the new values of a and b:

successor[{a_, b_}] := {aNew[...], bNew[...]}

Do not assign new values to a and b; just formulate the new pair in
terms of the previous pair {a, b} and let it be the return value of
'successor'.  Really, all you're doing is extracting the body of the
'while' loop into an ordered-pair formula.


Then formulate the conditions as functions that take an ordered pair
{a, b} and return the numeric quantities that get compared against eps:

condition1[{a_, b_}] := <some numeric measure>

condition2[{a_, b_}] := <another numeric measure>


With the functions defined, the procedure is a one-liner:

FixedPoint[successor, {a0, b0},
  SameTest->(condition1[#2] <= eps || condition2[#2] <= eps &) ]


That's all!
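
To see it run end to end, here is the same toy iteration used under
Method 1, recast in this style (again, the successor rule, the two
conditions, and eps = 10.^-10 are illustrative stand-ins of mine):


successor[{a_, b_}] := {(a + b)/2, 2/((a + b)/2)}

condition1[{a_, b_}] := Abs[a - b]     (* gap between the two iterates *)
condition2[{a_, b_}] := Abs[a^2 - 2]   (* distance of a^2 from 2 *)

eps = 10.^-10;

FixedPoint[successor, {1., 2.},
  SameTest -> (condition1[#2] <= eps || condition2[#2] <= eps &) ]


The result is the final pair, whose entries are both very close to
Sqrt[2].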


As for how to use SameTest, and why I chose the function I did, read
on if you are interested in details.

The spirit of FixedPoint is to iterate a function until the result
no longer changes.  More precisely, what it does is iterate until
two consecutive terms satisfy some condition, which is usually a
function to decide if the two are close enough to be considered
equivalent.

     go until test[ x(n-1), x(n) ] returns True

SameTest specifies this comparison function (if you don't want to
use the default one, which is essentially SameQ).  The comparison
function will be given two arguments:  the previous term and the
current term, or x(n-1) and x(n) as I wrote them above.  Very
often the most convenient way to express the comparison is as a
pure function, using #1 and #2 for the variables (see p. 207 of
the Mathematica book).  For instance,

   FixedPoint[f, x, SameTest -> (Abs[#1 - #2] < 10^-4 &) ]

would iterate until two consecutive terms were less than 10^-4 apart.
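
As a standalone illustration of my own (not part of the original
problem), this iterates the Babylonian step for Sqrt[2] until
consecutive iterates agree to within 10^-4:

FixedPoint[(# + 2/#)/2 &, 1., SameTest -> (Abs[#1 - #2] < 10^-4 &)]

(* returns roughly 1.41421, a close approximation of Sqrt[2] *)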


But the function doesn't have to perform a comparison of the two
arguments; i.e. it isn't required to use both arguments that are
given to it.  It could simply test the second argument (= most recent
term) for some property.  That is what I did using just #2 above:
#2 will represent {a, b} at the current step, and will be run through
both conditions.  FixedPoint used this way becomes an Until loop,
in effect.
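
A small standalone example of that "until" usage (again mine, not from
the original problem): halve a number repeatedly until the current
value drops below 1.

FixedPoint[#/2 &, 100., SameTest -> (#2 < 1 &)]

(* #1, the previous term, is ignored; the result is 0.78125 *)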


===============================
Newton's method with FixedPoint
===============================

Here's a quick-and-dirty implementation of a minimum-finder using
FixedPoint (this one doesn't use any special SameTest):


(* This has a local minimum at x = Pi, y = E. *)

In[1]:= g[x_, y_] = (x - Pi)^2 + (y - E)^2 + 1

Out[1]= 1 + (-Pi + x)^2 + (-E + y)^2


(* We want to know where the gradient vanishes: *)

In[2]:= grad[x_, y_] = {D[g[x, y], x], D[g[x, y], y]}

Out[2]= {2 (-Pi + x), 2 (-E + y)}

In[3]:= jacob[x_, y_] = Together @ Outer[D, grad[x, y], {x, y}]

Out[3]= {{2, 0}, {0, 2}}


(* This finds the next approximation of the root of the gradient: *)

In[4]:= next[{x_, y_}] := {x, y} + LinearSolve[jacob[x, y], -grad[x, y]]


(* Iterate this approximation until it stops moving: *)

In[5]:= FixedPoint[next, {1., 1.3}]

Out[5]= {3.14159, 2.71828}


(* Compare to the result of FindMinimum: *)

In[6]:= FindMinimum[g[x, y], {x, 1.}, {y, 1.3}]

Out[6]= {1., {x -> 3.14159, y -> 2.71828}}
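
If you prefer an explicit tolerance to iterating until consecutive
points agree, the "until"-style SameTest from Method 2 drops right in
(the threshold 10.^-6 is just an example of mine):

FixedPoint[next, {1., 1.3},
  SameTest -> (Max[Abs[grad @@ #2]] < 10.^-6 &) ]

This stops as soon as every component of the gradient at the current
point is smaller than the threshold.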



Robby Villegas

