MathGroup Archive 2004

Re: Solve Feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg52843] Re: Solve Feature?
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Tue, 14 Dec 2004 06:00:02 -0500 (EST)
  • References: <200412100123.UAA18967@smc.vnet.net> <opsir6lsn2iz9bcq@monster.ma.dl.cox.net> <002401c4dfb5$809b6a00$1802a8c0@Pentium4> <opsiu9j4nmiz9bcq@monster.ma.dl.cox.net> <opsiu90xstiz9bcq@monster.ma.dl.cox.net>
  • Sender: owner-wri-mathgroup at wolfram.com

DrBob wrote:
> By the way, it gets even worse if I use NSolve:
> 
> solution = NSolve[{c1, c5}]
> {c1, c5} /. solution
> Apply[Subtract, {c1, c5}, {1}] /. solution
> 
> {{x -> -4.27759132*^8,
>    y -> 43.74999927054117},
>   {x -> 4.2775924*^8,
>    y -> 43.75000072945884}}
> {{False, False}, {False, False}}
> {{1.82977917785309*^17,
>    1.82977917785309*^17},
>   {1.8297792462945597*^17,
>    1.8297792462945597*^17}}
> 
> These are simple polynomial equations, so this is really puzzling.
> 
> Bobby
> 
> On Sat, 11 Dec 2004 17:15:30 -0600, DrBob <drbob at bigfoot.com> wrote:
> 
>> HELP!! Here are two very simple quadratic equations (circles):
>>
>> {c1, c5} = {(-50 + x)^2 + (-50 + y)^2 == 156.25,
>>   (-50.00000000000002 + x)^2 +
>>     (-37.49999999999999 + y)^2 == 156.25};
>>
>> If we rationalize before solving, we get accurate solutions:
>>
>> solution = Solve@Rationalize@{c1, c5}
>> solution // N
>> {c1, c5} /. solution
>> Subtract @@@ {c1, c5} /. solution
>>
>> {{x -> (25/4)*(8 - Sqrt[3]), y -> 175/4},
>>      {x -> (25/4)*(8 + Sqrt[3]), y -> 175/4}}
>> {{x -> 39.17468245269452, y -> 43.75},
>>    {x -> 60.82531754730548,  y -> 43.75}}
>> {{True, True}, {True, True}}
>> {{-4.263256414560601*^-14, 4.973799150320701*^-13},
>>    {-4.263256414560601*^-14, -4.263256414560601*^-13}}
>>
>> You can check this visually with ImplicitPlot:
>>
>> ImplicitPlot[{c1, c5}, {x, -40, 70}]
>>
>> But if we solve without Rationalize, we get wildly inaccurate results:
>>
>> solution = Solve@{c1, c5}
>> {c1, c5} /. solution
>> Subtract @@@ {c1, c5} /. solution
>>
>> {{x -> 16., y -> 43.74999999999993},
>>    {x -> 84., y -> 43.75000000000004}}
>> {{False, False}, {False, False}}
>> {{1038.812500000001, 1038.8125000000005},
>>   {1038.8124999999995, 1038.8124999999993}}
>>
>> I'm using version 5.1.
>>
>> Bobby


The behavior of Solve with these approximate inputs is subject to the 
vagaries of how it creates an exact (infinite precision) problem from 
them. As this one is poorly conditioned, it is not too surprising that 
it does not give a good result. I'll point out that with 
VerifySolutions -> True, Solve will use significance arithmetic 
throughout, and this is sufficient (in this case) to get a viable result.
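
For instance (a minimal sketch, assuming the circles c1 and c5 from the 
quoted message are already defined):

solution = Solve[{c1, c5}, {x, y}, VerifySolutions -> True];
{c1, c5} /. solution    (* check the candidates against the inputs *)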

The NSolve result is more troublesome. I tracked it to a difficulty in 
detecting either of two important facts.

(1) The precision used in an eigen decomposition is not adequate for the 
problem at hand.

(2) The result gives a large residual for the inputs when normalized in 
what I believe to be a reasonable way (obviously we can make the 
residual as large or small as we please by suitably rescaling the input).

It is not too hard to fix the code for either of these, but there are 
drawbacks.

The second issue came about from a poor choice on my part of what to 
check: I used a numerical Groebner basis instead of the original 
inputs. So this can (and will) be addressed. In those cases where it 
catches trouble (as in this example), the fix is to redo certain 
computations at higher precision. As the code for this is already 
present, the only issue is in the detection itself. The downside is 
that I am not certain it will be adequate to assess poor solutions in 
general, though it works reasonably well for this example.
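
As a rough sketch of the kind of check meant here (not the internal 
NSolve code, just an after-the-fact version using the circles from the 
quoted message), one can raise both the candidate solutions and the 
original inputs to higher precision and look at residuals against the 
inputs themselves rather than against a derived Groebner basis:

candidates = NSolve[{c1, c5}, {x, y}];
lhs = SetPrecision[Subtract @@@ {c1, c5}, 50];  (* lhs - rhs of each input *)
Max[Abs[lhs /. SetPrecision[candidates, 50]]]   (* a large value flags bad roots *)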

For the first issue I am right now at a loss. It seems that the numeric 
EigenXXX code gives a very poor result for a particular 2x2 matrix that 
appears in the course of the NSolve code. It is not too hard to detect, 
for a given matrix, that the numeric eigen decomposition might be 
troublesome. Unfortunately that involves finding numeric singular 
values, and I believe that is typically more expensive than the 
EigenXXX computation itself. So the problem becomes one of efficiency: 
detecting the problem cases could make all NSolve calls slower just to 
find the rare bad apples.
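
To illustrate the sort of detection I have in mind (this is purely a 
made-up example, not the matrix that actually arises inside NSolve): 
for a badly scaled 2x2 matrix, the spread of the singular values shows 
that a machine precision eigen decomposition may lose relative accuracy 
in the small eigenvalue, but the singular values themselves cost 
another nontrivial decomposition.

m = {{1.*^8, 3.}, {1., 2.*^-8}};            (* hypothetical ill-scaled matrix *)
svals = SingularValueList[m, Tolerance -> 0];
First[svals]/Last[svals]                    (* condition estimate, about 10^16 *)
Eigenvalues[m]                              (* machine precision *)
Eigenvalues[SetPrecision[m, 30]]            (* redone at 30 digits for comparison *)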

I'll give this some more thought.

Another possible issue is in how the numeric Groebner basis gets 
computed. This is something I also need to look at more closely. It 
appears that for one variable order the result is quite tame, but not 
for the other (NSolve[polys, {y,x}] works fine). I'm not yet certain, 
but I think the numeric instability at this step is essentially 
unavoidable. I believe this to be the case because I can find "nearby" 
polynomial systems for which a problematic Groebner basis will emerge 
(and in fact that happens in this example). The basis is not actually 
incorrect, but it causes numeric problems at low precision for the 
eigen decomposition on a matrix formed from it.
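
For concreteness, the variable order observation can be reproduced 
directly on the circles from the quoted message (assuming "polys" above 
refers to that same pair of equations):

NSolve[{c1, c5}, {x, y}]   (* the order that runs into trouble *)
NSolve[{c1, c5}, {y, x}]   (* reversed order; reported above to work fine *)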

Whatever the outcome of investigating that numeric Groebner basis part, 
the EigenXXX computation needs to sort out how to handle the resulting 
matrix. So this part, at least, is a bug to be fixed.


Daniel Lichtblau
Wolfram Research

