losing non-convergent roots in FindRoot
- To: mathgroup at smc.vnet.net
- Subject: [mg5430] losing non-convergent roots in FindRoot
- From: gt1824a at prism.gatech.edu (Heather Mary Hauser)
- Date: Sat, 7 Dec 1996 00:26:04 -0500
- Organization: Georgia Institute of Technology
- Sender: owner-wri-mathgroup at wolfram.com
I have written a program in Mathematica that evaluates a (complicated)
function at several values of a variable and then gives me pairs of
values that probably bracket a zero of the function, i.e. the real
and imaginary parts of the function change sign over these intervals. I
then use FindRoot with each pair to get at the actual zero.
I know that there is only one zero; in the remaining intervals the
function diverges to infinity one way or another, so FindRoot cannot
converge there. That is fine, but in those cases I want it to stop
returning its last guess after it exceeds the maximum number of
iterations, and instead print only the one actual root it can find.
Is there any way I can do this?
heather at eas.gatech.edu
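One way to get the behavior described above is to wrap each FindRoot call in Check, so that any message (such as exceeding MaxIterations) turns that attempt into $Failed, and then keep only candidates whose residual is genuinely small. The sketch below uses a hypothetical function f and bracket list pairs as stand-ins for the poster's actual function and intervals; note that Quiet was added in a later Mathematica version, and in 1996-era Mathematica one would suppress the messages with Off instead.

```mathematica
(* Hypothetical stand-ins: Tan[x] - x has a root near 4.49, and the
   second bracket straddles the pole at Pi/2 rather than a root --
   the same situation the poster describes. *)
f[x_] := Tan[x] - x;
pairs = {{4.0, 4.6}, {1.4, 1.6}};

(* Check returns $Failed whenever FindRoot emits any message,
   e.g. on failure to converge within MaxIterations; Quiet keeps
   those messages from printing. *)
candidates =
  Quiet[Check[FindRoot[f[x] == 0, {x, #[[1]], #[[2]]}], $Failed]] & /@
    pairs;

(* Keep only results with a small residual, which also discards any
   spurious "root" found near a pole. *)
roots = Select[DeleteCases[candidates, $Failed],
  Abs[f[x] /. #] < 10^-6 &]
```

The residual test is the important safeguard: even when FindRoot returns without complaint, checking Abs[f[x] /. result] confirms the returned point is actually a zero rather than a last guess near a singularity.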