MathGroup Archive 2010

Re: Root Finding Methods Gaurenteed to Find All Root Between (xmin, xmax)

  • To: mathgroup at smc.vnet.net
  • Subject: [mg112752] Re: Root Finding Methods Gaurenteed to Find All Root Between (xmin, xmax)
  • From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
  • Date: Wed, 29 Sep 2010 04:14:27 -0400 (EDT)

Daniel's replies are much better than my (hurried) ones, and I am relieved that he found the time to write them. I should have read Ingolf's post much more carefully, since it was entirely correct; being currently under rather a lot of work pressure, I hastily assumed it was all based on a misunderstanding or on not having read Semenov's paper.
Sorry about that.

Andrzej



On 28 Sep 2010, at 19:53, Daniel Lichtblau wrote:

> [Finally got sick of subject misspelling; apologies if this messes up message threading. --dl]
>
> Ingolf Dahl wrote:
>> It would be very good if the Semenov method could be transferred into a
>> program that is able to find all roots of multi-dimensional non-linear
>> equation systems. I have now also looked a little into this, but now have
>> three questions/statements:
>> 1. The test T1 checks if the rectangle is free of roots. This is
>> implemented by comparing the center function values in the rectangle with
>> upper limits for the spatial derivatives times half the length of the
>> rectangle. The upper limits of the spatial derivatives are calculated by
>> interval analysis. Why is it done in that way? Why not apply interval
>> analysis directly to the functions as such? If this derivative method
>> gives better estimates than a direct interval analysis, then I think it
>> would be a good idea to incorporate such a formula in the interval
>> analysis of functions.
>
> Short answer: you are on target. In more detail, this is a case where the
> underlying math is fine but the implementation might be sloppy. That is to
> say, if one had a simple way to get tight bounds on the derivatives without
> resorting to interval methods, then that is what one should do. Using
> intervals to bound the derivatives, and hence the function values, does not
> make much sense, because one could (as you observe) just use intervals to
> bound the functions to begin with.
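
As a minimal sketch of that direct approach, assuming Mathematica's built-in Interval arithmetic and borrowing one of the functions from Ingolf's counterexample further down, bounding the function itself over a box might look like this:

    (* enclose Exp[x] Sin[y] - 1/Sqrt[2] over -1/4 <= x <= 1/4, 0 <= y <= 5 Pi/2 *)
    f[x_, y_] := Exp[x] Sin[y] - 1/Sqrt[2];
    fBound = f[Interval[{-1/4, 1/4}], Interval[{0, 5 Pi/2}]];
    IntervalMemberQ[fBound, 0]
    (* True here, so this box cannot be excluded; a result of False
       would certify the box as root-free, which is what test T1 wants *)

Here the interval bound is applied to the function directly, with no derivative bounds in between.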
>
> There is a different but related situation in which it might pay to do
> both. This is when one looks for global optima. In such cases you want
> intervals where function value might be small, and moreover derivatives
> might be zero. Stan Wagon used this idea in a chapter for a SIAM book about
> Nick Trefethen's 100 digits challenge of several years ago. (I think it was
> problem 4, if memory serves.)
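
A rough sketch of that combined test on a single candidate box (the objective g and the box are illustrative, not taken from the book) could look like:

    g[x_, y_] := (x - 1)^2 + Sin[y];              (* illustrative objective *)
    gx[x_, y_] = D[g[x, y], x];                   (* symbolic partial derivatives *)
    gy[x_, y_] = D[g[x, y], y];
    X = Interval[{0, 2}]; Y = Interval[{-2, -1}]; (* one candidate box *)
    best = 0.;                                    (* best value found so far *)
    IntervalMemberQ[gx[X, Y], 0] && IntervalMemberQ[gy[X, Y], 0] &&
      Min[g[X, Y]] <= best
    (* True: the gradient can vanish in the box and the value can go
       below the incumbent, so the box is kept for further subdivision *)

A box that fails either condition can be discarded; the survivors are subdivided and tested again.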
>
>> 2. The corresponding question is relevant for the T2 test. Here the task
>> seems to be to check if the Jacobian determinant is non-zero, and this
>> test is done by comparing the center value in the rectangle with upper
>> limits for the spatial derivatives of the Jacobian determinant times half
>> the length of the rectangle. The upper limits of the spatial derivatives
>> are calculated by interval analysis. Why is it done in that way? Why not
>> apply interval analysis directly on the Jacobian determinant as such?
>> Then the program does not need to evaluate the second-order derivatives
>> of the functions in the equations.
>
> Same answer. But with a twist. As best I can tell, doing the bounding in
> this (overly conservative) manner makes it very unlikely that one will get
> an incorrect result. Indeed, if you compute derivatives symbolically, then
> substitute intervals, then evaluate a determinant, you won't get a failure.
> This is because such a numerical test will in fact bound what Andrzej
> refers to as a generalized determinant (more on that below).
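
A minimal sketch of that procedure on a toy pair of functions (p, q and the box are illustrative, not from the paper): differentiate symbolically, substitute intervals, then evaluate the determinant.

    p = x^2 + y^2 - 1;
    q = x - y;
    jacPQ = {{D[p, x], D[p, y]}, {D[q, x], D[q, y]}};
    box = {x -> Interval[{1/4, 3/4}], y -> Interval[{1/4, 3/4}]};
    Det[jacPQ /. box]
    (* Interval[{-3, -1}]: an enclosure of the generalized (interval)
       determinant on the box; since 0 is excluded, the ordinary Jacobian
       cannot vanish anywhere in the box. When the enclosure does contain 0,
       the test simply declines to certify, so it errs on the safe side. *)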
>
>
>> 3. The reason to check if the Jacobian determinant is non-zero is that
>> this is said to check if there is at most a single root in the rectangle.
>> However, I looked in my textbook from my first year at the university
>> ("Matematisk analys II" by Hyltén-Cavallius and Sandgren, Lund 1965, in
>> Swedish), and found a discussion about this and a counterexample.
>
> Yes, this was the part that was in error.
>
>
>> The authors emphasize that a non-zero Jacobian implies that the function
>> values around a point are unique in a sufficiently small neighborhood,
>> but we might have several identical function values in a larger area.
>> Then we have several roots of the corresponding equation. I will repeat
>> the counterexample here, slightly modified, since I think that it is of
>> interest to several of the readers of MathGroup.
>> Suppose that we have -1/4 <= x <= 1/4 and 0 <= y <= 5*Pi/2 and the
>> equations
>> Exp[x]*Sin[y] - 1/Sqrt[2] == 0
>> Exp[x]*Cos[y] - 1/Sqrt[2] == 0
>> Then the Jacobian is Exp[2x], which is larger than zero. The rectangle
>> also passes test T2, in spite of the two roots {x -> 0, y -> Pi/4} and
>> {x -> 0, y -> 9 Pi/4}. Maybe this is the error Daniel Lichtblau has
>> noticed, but anyhow, the paper by Semenov now seems a bit less
>> impressive. And a plus score for the mathematicians of the previous
>> century.
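
A quick check of this counterexample (a sketch; the two roots quoted above come out of FindRoot with nearby starting points):

    f1 = Exp[x] Sin[y] - 1/Sqrt[2];
    f2 = Exp[x] Cos[y] - 1/Sqrt[2];
    Simplify[Det[{{D[f1, x], D[f1, y]}, {D[f2, x], D[f2, y]}}]]
    (* -E^(2 x), the Exp[2x] above up to sign convention: nonzero for
       every real x, so the Jacobian-based test finds nothing to object
       to, yet the rectangle contains two roots *)
    FindRoot[{f1, f2}, {x, 0.1}, {y, 1.}]   (* near {x -> 0, y -> Pi/4}   *)
    FindRoot[{f1, f2}, {x, 0.1}, {y, 7.}]   (* near {x -> 0, y -> 9 Pi/4} *)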
>
> I used the function of a complex variable z*(z-1)*(1+I*z). Then split into
> x and y terms.
>
> ff = z*(z-1)*(1+I*z) /. z -> x + I*y;
> {u, v} = ComplexExpand[{Re[ff], Im[ff]}];
>
> This gives {u,v} as a function of {x,y}, so we go from R^2 to R^2. There
> are roots at {0,0} and {1,0}. Say we restrict interest to the segment
> between them.
>
> jac = {{D[u,x], D[u,y]}, {D[v,x], D[v,y]}};
>
> Solve[{Det[jac] == 0, 0 <= x <= 1, y == 0}, {x, y}]
> Out[132]= {}
>
> This means the test will claim there is at most one root in that interval,
> when in fact there are roots at both ends of it.
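
For completeness, a small check (assuming the definitions above have been evaluated) that both endpoints of that segment really are roots:

    {u, v} /. {{x -> 0, y -> 0}, {x -> 1, y -> 0}}
    (* {{0, 0}, {0, 0}}: both {0,0} and {1,0} are roots, so the segment
       contains two roots even though Det[jac] never vanishes on it *)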

