MathGroup Archive 2011

Re: non linear system with 8 equations

  • To: mathgroup at smc.vnet.net
  • Subject: [mg117266] Re: non linear system with 8 equations
  • From: Daniel Lichtblau <danl at wolfram.com>
  • Date: Sun, 13 Mar 2011 05:27:03 -0500 (EST)

Ruggero wrote:
> 2011/3/11 Daniel Lichtblau <danl at wolfram.com>:
>> wiso wrote:
>>> I have a nonlinear system with 8 equations:
>>>
>>> solution = Solve[{n == ng + nn,
>>>     na == eag ng + ean nn,
>>>    nb == ebg ng + ebn nn,
>>>    nc == ecg ng + ecn nn,
>>>    nab == kabg eag ebg ng + kabn ean ebn nn,
>>>    nac == kacg eag ecg ng + kacn ean ecn nn,
>>>    nbc == kbcg ebg ecg ng + kbcn ebn ecn nn,
>>>    nabc == kabcg eag ebg ecg ng + kabcn ean ebn ecn nn }, {ng, nn,
>>>    eag, ebg, ecg, ean, ebn, ecn}];
>>>
>>> my PC has been cogitating for more than an hour. Is there some way
>>> to speed it up? All variables are real, the n* are positive
>>> integers, the e* are real with 0 < e* < 1, and the k* are known
>>> parameters.
>> I doubt Solve will have any chance with this. It would need to compute some
>> flavor of GroebnerBasis, and that seems to be a difficult undertaking for
>> this system.
>>
>> If you provide explicit numeric values for those parameters then NSolve will
>> handle it. Below is some code for this. It takes the polynomials, variables
>> of interest, parameter substitutions, and constraints entered as a list.
>>
>> In[160]:=
>> constrainedSolutions[polys_, vars_, paramsubs_, constraints_] :=
>>  Module[{solns},
>>   solns = NSolve[polys /. paramsubs, vars];
>>   Pick[solns, Quiet[And @@@ ReleaseHold[constraints /. solns]]]]
>>
>> Here is your example. I changed the constraints so as to find parameter
>> substitutions that might yield a few valid solutions. I show one such run
>> below.
>>
>> In[161]:=
>> eqns = {n == ng + nn, na == eag ng + ean nn, nb == ebg ng + ebn nn,
>>   nc == ecg ng + ecn nn, nab == kabg eag ebg ng + kabn ean ebn nn,
>>   nac == kacg eag ecg ng + kacn ean ecn nn,
>>   nbc == kbcg ebg ecg ng + kbcn ebn ecn nn,
>>   nabc == kabcg eag ebg ecg ng + kabcn ean ebn ecn nn};
>> polys = Apply[Subtract, eqns, 1];
>> vars = {ng, nn, eag, ebg, ecg, ean, ebn, ecn};
>> params = Complement[Variables[polys], vars];
>> SeedRandom[11111];
>> paramsubs = Thread[params -> RandomReal[{-10, 10}, Length[params]]];
>> constraints =
>>  Flatten[{Map[Hold[Head[#] === Real] && # >= 0 &, {ng, nn}],
>>    Map[Hold[Head[#] === Real] && -100 <= # <= 100 &, {eag, ebg, ecg,
>>      ean, ebn, ecn}]}];
>>
>> In[168]:= constrainedSolutions[polys, vars, paramsubs, constraints]
>>
>> Out[168]= {{ng -> 7.30836, nn -> 0.254874, eag -> 0.186697,
>>  ebg -> -0.90457, ecg -> -0.0648619, ean -> 2.44744, ebn -> 0.726247,
>>   ecn -> -2.57797}, {ng -> 4.8318, nn -> 2.73144, eag -> -0.428736,
>>  ebg -> -1.02281, ecg -> -0.46866, ean -> 1.48632, ebn -> -0.543236,
>>  ecn -> 0.414936}, {ng -> 1.00957, nn -> 6.55367, eag -> -0.174131,
>>  ebg -> -3.37666, ecg -> -4.50395, ean -> 0.330202, ebn -> -0.460332,
>>   ecn -> 0.521227}}
>>
>> If you know a priori that your system will have exactly one solution
>> satisfying the constraints, then you might do better using FindRoot with
>> appropriate initial values for the variables. But for this particular system
>> of polynomials NSolve will be fast (again, provided you give numerical
>> values up front for the parameters).
>>
>> Daniel Lichtblau
>> Wolfram Research
>>
> 
> Thank you for the reply (why didn't you post it on the ng?). Well, I
> know I can find a numerical solution very quickly, but I need to find
> a closed formula, even one expressed as a root of an n-th degree
> polynomial. My big problem is that every known number comes with an
> error, as n +/- delta n, and I want to propagate all of these errors
> into the solution.

I just posted what I think is a reasonable way to go about assessing 
rough bounds on roots, given reasonable tolerance ranges on 
coefficients. Here I want to outline another way to go about it. I do 
so only because it has interesting computational math behind it; I do 
not think this approach will have any real advantage over the 
statistical sampling I last mentioned.

I suspect there are shortcuts one might take that would make for a more 
concise exposition. My apologies for not having stumbled across them. 
But if you like, skip to the "two baby steps" method at the end, as it 
offers a possibility; I simply have not yet figured out if it can be 
made to provide correct bounding information.

For the full method...

Step I. We need to compute a Groebner basis for the numeric system, 
with some indication of the ranges of coefficient values. While 
Mathematica's GroebnerBasis works with coefficients of finite 
precision, it will not manage this task directly when they are of low 
precision. The reason is that the code relies on precision tracking; 
as the computation proceeds the error grows, and the precision loss 
kills the algorithm. So if you only have a few digits to begin with, 
it is not likely to succeed.

There are two options.

Method (1)

     (i) Compute a numeric basis for the system with coefficients at 
high precision, set to the centers of their tolerance ranges.

     (ii) Use, e.g., the method I indicated recently in a different 
thread (URL below) to also find the conversion matrix that maps input 
to Groebner basis.

http://forums.wolfram.com/mathgroup/archive/2011/Mar/msg00362.html

    (iii) Now set the initial system to the low precision values and 
expand conversion matrix times input system (expressed as a vector). 
Unless the system is near a multiple root, or the Groebner basis is at 
or near a singularity (I won't go into details of what this means; 
suffice it to say these are measure zero events in the parameter space 
of coefficient values), this expansion will remain a Groebner basis 
for all coefficient values in the initial tolerance ranges. A rough 
sketch of step (i) follows.
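
Here is a minimal sketch of step (i), reusing polys, vars, and 
paramsubs from the session above; the 50-digit precision and the names 
centerSubs and gbCenter are my own choices, and the random paramsubs 
values merely stand in for the true centers of the tolerance ranges:

    (* Method (1), step (i): Groebner basis at high precision, with
       parameters fixed at (stand-ins for) the centers of their
       tolerance ranges *)
    centerSubs = Thread[(First /@ paramsubs) ->
        SetPrecision[Last /@ paramsubs, 50]];
    gbCenter = GroebnerBasis[polys /. centerSubs, vars,
       CoefficientDomain -> InexactNumbers[50]];

Step (ii), recovering the conversion matrix, is described in the 
thread linked above and is not reproduced here.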

Method (2)

     (i) Treat the coefficients of the input as high precision values, 
each in the center of its allowed range. To each, however, add a unique 
symbolic epsilon. (So we have one epsilon per coefficient of the input 
system).

     (ii) Compute a Groebner basis with respect to an order that has 
the regular variables as before, but treats the epsilons as "small", 
specifically, smaller than numbers. (This is called a mixed order, 
global with respect to some variables and local with respect to 
others.) One way to effect this ordering is to homogenize with respect 
to the epsilons, then treat the homogenizing variable as larger than 
the epsilon variables.

     (iii) For purposes of speed, and in general without doing 
substantial numerical harm to the result, you might also stipulate 
that products of two (or perhaps three) or more epsilons are zero. 
This leads to a first or second order approximation in the resulting 
Groebner basis. (The epsilon bookkeeping for steps (i) and (iii) is 
sketched just below.)
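
To make the epsilon bookkeeping concrete, here is a minimal sketch. 
The helper names perturbCoeffs and firstOrder, the epsilon head e, and 
the scratch symbol t are all mine; the mixed-order basis computation 
itself is not shown:

    (* step (i): attach a symbolic epsilon e[j, k] to the k-th
       coefficient of the j-th polynomial *)
    perturbCoeffs[poly_, vars_, j_] :=
     Module[{cr = CoefficientRules[poly, vars]},
      FromCoefficientRules[
       MapIndexed[First[#1] -> Last[#1] + e[j, First[#2]] &, cr],
       vars]]

    (* step (iii), first order variant: scale every epsilon by t,
       expand to first order in t, then set t back to 1; assumes t
       does not otherwise appear *)
    firstOrder[expr_] :=
     Normal[Series[expr /. e[i__] :> t e[i], {t, 0, 1}]] /. t -> 1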

[Interlude: Okay, wake up everybody. If I had to write this much, no 
reason you shouldn't have to read it.]

Notice that method (1) gives a basis composed of polynomials with 
(very) approximate coefficients. Method (2), in contrast, gives 
coefficients consisting of numbers plus symbolic epsilon offsets.

Step II. A possible next step is to go from Groebner basis with 
approximate coefficients to univariate polynomials of same, in each of 
the variables. This is useful if we are to assess root ranges.

There are various ways to get such polynomials. One is by linear 
algebra, using a method by Buchberger (inventor of Groebner bases) from 
the early 80's. It is a forerunner to what is called the FGLM basis 
conversion method.

Another possibility is to do something like what NSolve does internally, 
which is to formulate certain eigenvalue problems (a method that goes 
back to Hans Stetter and others). One would use the matrices, which are 
constructed from the basis polynomials and some other work, to then get 
the univariate polynomials.

My suspicion is that the first of these approaches will do best with a 
Groebner basis obtained from method (1) above, whereas the second may 
prefer that of method (2). I have never tried this, so I'm far from 
sure. Actually, I'm far from sure it is realistic that we'd even get 
this far in actual computations.
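
For a small system there is also a blunter route to the univariate 
polynomials: an elimination-order GroebnerBasis call. This is a hedged 
sketch of that shortcut, not the linear algebra or eigenvalue 
machinery just described, and with approximate coefficients it is 
subject to the same precision caveats as step I; uniPoly is my own 
name:

    (* eliminate all variables except keep; generically the first
       (smallest) basis element is univariate in keep *)
    uniPoly[basis_, vars_, keep_] :=
     First[GroebnerBasis[basis, {keep}, Complement[vars, {keep}],
       MonomialOrder -> EliminationOrder,
       CoefficientDomain -> InexactNumbers[50]]]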

Yet another possibility at this point is to work directly with the 
eigenproblem matrices, rather than finding the univariate polynomials. 
See next step.

Step III. We now want to assess the conditioning of either a 
univariate root finding problem or else an eigenproblem. I will not 
discuss the latter other than to remark that (1) there are ways to do 
this that are described in numeric linear algebra texts and 
literature, and (2) bounds provided by Gershgorin circles or other 
devices might possibly be useful.

For the former, a reasonable thing is to find roots for the polynomials 
with coefficients chosen (as ever) as the center values of their 
ranges. If we used the method (2) Groebner basis, then we'd simply be 
setting all epsilons to zero. Call this "nice" polynomial p(x).

Now consider any allowable perturbed polynomial p(x) + p_epsilon(x), 
where p_epsilon is given either by symbolic coefficients or by smallish 
ranges of numeric values around zero (well, we hope smallish, but for 
ill-conditioned systems this may not be the case). Start with a root x 
of p(x). The corresponding root of the perturbed polynomial (if we knew 
its coefficients) could be obtained by Newton's method. I think a 
reasonable measure of the conditioning is then the size of one Newton 
step. Again, if perturbations are large, this will not work at all 
well.

That step, if I did my scrawling correctly, is, to good approximation, 
of size p_epsilon(x)/p'(x). Remark: I believe I have seen this formula 
elsewhere as a measure of root conditioning, so it has some hope of 
actually being correct.
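
A minimal sketch of that one-step estimate; rootShift is my own name, 
with p the center polynomial and pEps a perturbation as above:

    (* size of one Newton step at a root x0 of the center polynomial:
       approximately |pEps(x0)/p'(x0)| *)
    rootShift[p_, pEps_, x_, x0_?NumericQ] :=
     Abs[(pEps /. x -> x0)/(D[p, x] /. x -> x0)]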

To approximate a worst-case error, it suffices to maximize the size of 
p_epsilon(x) for coefficients in their allowable ranges. Even cruder, 
but guaranteed, is to take absolute values of all coefficients, with 
epsilons set to their maximal allowed values (which depend on the 
tolerances of coefficients in the original system that spawned the 
epsilons). That crude bound is sketched below.
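
A sketch of the crude bound, assuming each coefficient of p_epsilon is 
at most epsMax in absolute value; worstShift is my own name:

    (* guaranteed (if pessimistic) bound on one Newton step: bound
       |pEps(x0)| by epsMax*(1 + |x0| + ... + |x0|^n) *)
    worstShift[p_, epsMax_, x_, x0_?NumericQ] :=
     epsMax Total[Abs[x0]^Range[0, Exponent[p, x]]]/
      Abs[D[p, x] /. x -> x0]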

So there you have it: several paths to approximate bounds on root 
ranges. I'm not sure how any of these would fare; one would need to 
experiment to find out which might actually be workable in practice. 
And at the end of the day (or month...) we'd probably have nothing 
that a statistical sampling of the ranges did not predict.

Anyway... as step I Groebner basis computations go, I have done both 
methods (1) and (2) in past work. But that is no guarantee they are 
tractable for the sorts of problems presented in this thread. I think 
either direction noted for step III would be straightforward. I do not 
recall ever attacking the various approaches indicated in step II, and 
I suspect those may harbor malevolent sprites, or worse.

What would be useful, but has not come to me thus far, would be a more 
direct route from step I (or prior) to the roots. Here is a possibility, 
though I do not see any way to guarantee it really approximates the full 
range.

Baby step I. Find a set of roots for the system with coefficients at 
the centers of their ranges. Either NSolve or FindRoot might serve for 
this.

Baby step II. Use the Newton iteration way of approximating the 
conditioning. This time we have a matrix equation, but all the same it 
might yield useful bounds on how far roots might stray when 
coefficients have ranges. A sketch appears below.
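
A minimal sketch of baby step II, assuming sol is one solution from 
baby step I (a list of rules, as NSolve returns) and fEps is a vector 
of perturbation polynomials, one per input polynomial; newtonShift is 
my own name:

    (* one multivariate Newton step at the center solution: solve
       J . dx == -fEps(x0), with J the Jacobian of the center system *)
    newtonShift[polys_, vars_, sol_, fEps_] :=
     LinearSolve[D[polys, {vars}] /. sol, -(fEps /. sol)]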


Daniel Lichtblau
Wolfram Research
