MathGroup Archive 2006


Re: RE: Re: RE: RE: Functional decomposition (solving f[f[x]] = g[x] for given g)

  • To: mathgroup at smc.vnet.net
  • Subject: [mg72012] [mg71996] Re: [mg71972] RE: [mg71885] Re: [mg71872] RE: [mg71839] RE: [mg71764] Functional decomposition (solving f[f[x]] = g[x] for given g)
  • From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
  • Date: Fri, 8 Dec 2006 06:18:02 -0500 (EST)
  • References: <200612061104.GAA02807@smc.vnet.net> <B2493AAB-0463-49C0-9E90-118BB276E47F@mimuw.edu.pl>

I meant "class C1", of course: differentiable, with continuous  
derivative.

Andrzej Kozlowski


On 7 Dec 2006, at 17:58, Andrzej Kozlowski wrote:

> To my mind an equally interesting problem (though admittedly not if
> one wishes to apply Mathematica) is to find the widest class of
> functions g for which the equation admits no solutions. There is one
> case which is immediately obvious: the functional equation
> f(f(x)) == g(x) cannot have any solutions f (of class C2) for smooth
> functions g whose derivative satisfies g'(x) < 0 everywhere;
> g(x) = exp(-x) is such a function. The proof is completely trivial,
> but as far as I can tell no one seems to have mentioned it explicitly,
> so here it is: if f(f(x)) = g(x), then f'(f(x)) f'(x) = g'(x) is
> negative. But that means that f' has opposite signs at x and f(x).
> That in turn implies (if f is of class C2) that f'(t) = 0 for some t
> between x and f(x). But that would mean that g'(t) = 0, contradicting
> the fact that g'(x) < 0 everywhere.
> I have given very little thought to this, but it seems to me that the
> assumption g'(x) > 0 for all x should suffice for the existence of f
> (though it may not suffice for the existence of a nice enough f,
> e.g. analytic). I would also expect that examples can be constructed
> where g'(x) > 0 for some x and g'(y) < 0 for some y and where no f of
> class C2 can exist, but I can't find any obvious ones.
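>
> (A minimal symbolic sketch of the chain-rule step used above, with f
> left as an undefined symbol so Mathematica differentiates formally:
>
> D[f[f[x]], x]   (* gives the product f'[f[x]] f'[x] *)
>
> which is exactly the quantity that must equal g'(x).)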
>
> Andrzej Kozlowski
>
> On 6 Dec 2006, at 20:04, Ingolf Dahl wrote:
>
>> Hi again dear MathGroup,
>> I have checked the suggested procedure below for solving the equation
>> f(f(x)) = exp(x), and it works smoothly.
>> Maybe this is already well known. Maybe it is known, but not
>> implemented in Mathematica code. Maybe it is included in the book by
>> Aczel; I have not been able to check that yet, since I do not have
>> the book available. If you know where to find an already published
>> solution, please inform me.
>> But anyway, I have not spent too many hours on it, and it is a nice
>> demonstration of the powers of Mathematica. Thus it is relevant to
>> publish it here.
>> Moreover, the solution I obtain is characterized by some mathematical
>> constants, which should serve as fingerprints for uniqueness. I have
>> not been able to locate these constants anywhere else.
>>
>> But what I have found is that the solution is related to half-integer
>> values of "tetration" in Wikipedia and PowerTower in MathWorld
>> (naturally extending into other fractional values). Wikipedia states:
>> "At this time there is no commonly accepted solution to the general
>> problem of extending tetration to the real or complex numbers,
>> although it is an active area of research."
>>
>> What I claim is that (in the Mathematica notation from the Further
>> Examples for "Power" in the Help)
>>
>> PowerTower[E, -1/2] = "HalfwayFromZero" =
>> 0.4987433645316708791828373750416860523968424`43.
>>
>> PowerTower[E, -3/2] = "HalfwayToZero" =
>> -0.6956636150637165750917227913152630811483903`42.
>>
>> (PowerTower[E, -2] = -Infinity, PowerTower[E, -1] = 0, and
>> PowerTower[E, 0] = 1 by natural extension.)
>>
>> Here is my solution:
>> I started by writing a routine for decomposition of a function g(x)
>> into q equal functions. Thus f(f(f...(f(x))...)) = g(x), with q f's.
>> First the series expansion to the power n is derived. Then the
>> function is composed p times with itself. We thus get the series
>> expansion of the function to the decompositional power p/q. The
>> requirement on g is that g[0] == 0, with a positive derivative at the
>> origin and with a Maclaurin expansion around the origin.
>>
>> Attributes[FractionalSeries] = {HoldFirst};
>> FractionalSeries[func_ /; And[func[0.] == 0.,
>>     SeriesCoefficient[Series[func[var], {var, 0, 1}], 1] > 0], x_,
>>   {p_Integer /; p > 0, q_Integer /; q > 0}, n_Integer /; n > 0] :=
>>  Module[{a, f1, fn, f1s},
>>   f1[var_] = Sum[var^i*a[i], {i, 1, n}];
>>   fn[var_] := Nest[f1, var, q];
>>   a[1] = a[1] /. Solve[Reduce[
>>       {SeriesCoefficient[Series[fn[var], {var, 0, 1}], 1] ==
>>         SeriesCoefficient[Series[func[var], {var, 0, 1}], 1],
>>        a[1] > 0}, a[1]], a[1]][[1]];
>>   Do[a[i] = a[i] /. Solve[
>>       {SeriesCoefficient[Series[fn[var], {var, 0, i}], i] ==
>>         SeriesCoefficient[Series[func[var], {var, 0, i}], i]},
>>       a[i]][[1]], {i, 2, n}];
>>   Series[f1[x], {x, 0, n}];
>>   ComposeSeries @@ Table[Series[f1[x], {x, 0, n}], {p}]]
>>
>>
>> Example of use: Sinh is such a function, and let us take it to the
>> decompositional power 2/3. Define
>>
>> ftwothirds[y_] := Normal[FractionalSeries[Sinh[#1] &, var, {2, 3}, 29]] /.
>>    {var -> y};
>>
>> Then
>>
>> ftwothirds[ftwothirds[ftwothirds[0.3]]] - Sinh[Sinh[0.3]]
>>
>> evaluates to
>>
>> 5.551115123125783*^-17
>>
>> and
>>
>> ftwothirds[ftwothirds[ftwothirds[SetAccuracy[0.3, 60]]]] -
>> Sinh[Sinh[SetAccuracy[0.3, 60]]]
>>
>> evaluates approximately to
>>
>> 1.65*^-24
>>
>> The problem is that Exp is not such a function, since Exp[0] == 1.
>> But Exp behaves asymptotically as 2*Sinh. Let us find the series
>> expansion of 2*Sinh to the decompositional power 1/2. (There is a
>> more complicated relation between the series expansions for 2*Sinh
>> and Sinh than I thought from the beginning.)
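>>
>> (A quick symbolic check of that asymptotic claim: the difference
>> between the two functions decays exponentially,
>>
>> Simplify[Exp[x] - 2*Sinh[x]]   (* E^(-x) *)
>>
>> so for large positive x the two agree to machine precision.)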
>>
>> Simplify[FractionalSeries[2*Sinh[#1] & , y, {1, 2}, 29]]
>>
>> (returns long expression)
>>
>> If we use this expression, we can build a function f2sinh that
>> extends the series expansion to the whole real axis. f2sinh should
>> now be THE smooth solution to the equation
>> f2sinh[f2sinh[x]] == 2*Sinh[x].
>>
>> f2sinh[x_Real] := Module[{y, steps},
>> y = If[MachineNumberQ[x], SetAccuracy[x, 24], x]; steps = 0;
>> While[Abs[y] > 0.07, steps = steps + 1; y = ArcSinh[y/2]];
>> y = fpoly[y];
>> (If[MachineNumberQ[x], N[#1], #1] & )[Nest[2*Sinh[#1] & , y, steps]]]
>>
>> where
>>
>> fpoly[y_] := Sqrt[2]*y + y^3/(9*Sqrt[2]) - (11*y^5)/(2700*Sqrt[2]) +
>> (479*y^7)/(1020600*Sqrt[2]) - (426679*y^9)/(6246072000*Sqrt[2]) +
>> (85589473*y^11)/(7557747120000*Sqrt[2]) -
>> (1403953837837*y^13)/(689720002171200000*Sqrt[2]) +
>> (1031743413846247*y^15)/(2669216408402544000000*Sqrt[2]) -
>> (1496209724379058102331*y^17)/(19591834900362000756480000000*Sqrt[2]) +
>> (17812978070670318115902941*y^19)/(1145769688642970528240463360000000*Sqrt[2]) -
>> (7533260191446785723514332836231*y^21)/(2325339583100908687064020389120000000000*Sqrt[2]) +
>> (136561976266545063524556174223350449*y^23)/(198544551694713701753776208084952576000000000*Sqrt[2]) -
>> (719728529960035731908565151209614126425839277*y^25)/(4853778747825775229320514450035010550004224000000000000*Sqrt[2]) +
>> (100427683629367623630378641852188624798406912708777*y^27)/(3101826723913052963398192041352673632022399364096000000000000*Sqrt[2]) -
>> (230559003784630013844879523283194881704293155047113459273*y^29)/(32264562041869955579438812968736894041769959385166794752000000000000*Sqrt[2]) +
>> SetAccuracy[0., 44]*(y/SetAccuracy[0.07, 60])^29;
>>
>> The limit 0.07 in this definition is chosen a bit arbitrarily, and
>> you might decrease it to increase the accuracy, which anyway is good
>> along the whole real axis. I have estimated the truncation error to
>> be roughly as large as the last term in the series.
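>>
>> (As a minimal consistency check of the construction, with an
>> illustrative argument chosen here, the residual of the defining
>> equation
>>
>> f2sinh[f2sinh[1.3]] - 2*Sinh[1.3]
>>
>> should come out at the level of the truncation and rounding errors
>> discussed above, not as an exact zero.)
>>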
>> Now we can assume that Exp to a fractional decompositional power
>> asymptotically should be identical to 2*Sinh to the same fractional
>> decompositional power, with a relative error that is less than the
>> relative error of Exp compared to 2*Sinh, if the decompositional
>> power is less than one. Thus for any x, we might climb up to a large
>> value of Exp[...Exp[x]] (say larger than 60), fetch a value using
>> f2sinh there, and then climb down again. Thus we can calculate the
>> (smooth) solution to the equation fexp[fexp[x]] == Exp[x] as
>>
>> fexp[x_Real] :=
>> Module[{y, steps},
>> y = If[MachineNumberQ[x], SetAccuracy[x, 24], x]; steps = 0;
>> While[y < 60, steps = steps + 1; y = Exp[y]];
>> y = f2sinh[y]*(1 + SetAccuracy[0., y/1.15]);
>> (If[MachineNumberQ[x], N[#1], #1] & )[Nest[Log[#1] & , y, steps]]]
>>
>> The limit of fexp[x] when x -> -Infinity is the constant
>> HalfwayToZero defined above.
>> And fexp[HalfwayToZero] = 0,
>> fexp[0] = HalfwayFromZero,
>> fexp[HalfwayFromZero] = 1,
>> fexp[1] = Exp[HalfwayFromZero],
>> fexp[Exp[HalfwayFromZero]] = E, and so on.
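>>
>> (These relations can be spot-checked numerically; for instance
>>
>> fexp[fexp[0.]] - 1
>> fexp[fexp[1.]] - E
>>
>> should both be tiny, since fexp[fexp[x]] is meant to reproduce
>> Exp[x].)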
>>
>> Any devils hiding anywhere?
>> One little devil is that fexp[x] approaches HalfwayToZero fast when
>> x -> -Infinity, so there easily is a loss of accuracy. For large
>> negative values of x, fexp will approximately follow the formula
>>
>> fexp[x] ~ HalfwayToZero + Exp[x]/fexp'[HalfwayToZero]
>>
>> where
>>
>> fexp'[HalfwayToZero] = 0.5688037784984972069377219276490924736217`40.
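>>
>> (This asymptotic formula can be checked directly at an illustrative
>> large negative argument, using the numerical values given earlier:
>>
>> halfwayToZero = -0.6956636150637165750917227913152630811483903`42;
>> fexp[-20.] - (halfwayToZero + Exp[-20.]/0.5688037784984972069377219276490924736217`40)
>>
>> The result should be much smaller than the correction term
>> Exp[-20]/fexp'[HalfwayToZero] itself.)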
>>
>> For calculations of derivatives etc. of fexp(x) in the region around
>> x = 0, a slightly simpler formula might be used instead of fexp(x):
>>
>> fexpsimpl[x_Real] := Nest[Log, Nest[2Sinh[#] &,
>>     fpoly[Nest[ArcSinh[#/2] &, Nest[Exp, x, 4], 8]], 8], 4]
>>
>> Thus we can obtain the zeroth, first, second, and third derivatives
>> of fexp(x) at x = 0:
>>
>> {0.498743364531670879182837375041686052396842401697695,
>> 0.87682850111901739234161348908444926737095344685410,
>> 0.489578037438247862041903165991088781934758253721,
>> 0.1222616742183662064640316453590024071635142693}
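>>
>> (One hypothetical way to reproduce numbers of this kind, since
>> fexpsimpl only takes numerical arguments, is a high-precision
>> central-difference estimate; the step size and precision below are
>> illustrative choices, not necessarily the ones used above:
>>
>> With[{h = SetAccuracy[10^-6, 60]},
>>   {fexpsimpl[SetAccuracy[0, 60]],
>>    (fexpsimpl[h] - fexpsimpl[-h])/(2*h)}]
>>
>> which should approximate the zeroth- and first-derivative values in
>> the list.)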
>>
>> It should be possible to calculate Exp to other fractional
>> decompositional powers, and to calculate PowerTower for first
>> arguments other than E and for fractional second arguments, following
>> the general scheme here.
>>
>> That's all for today!
>>
>> Ingolf Dahl
>>
>>> -----Original Message-----
>>> From: Gregory Duncan [mailto:gmduncan at gmail.com]
>>> Sent: 2 December 2006 11:11
>>> To: mathgroup at smc.vnet.net
>>> Subject: [mg71885] Re: [mg71872] RE: [mg71839] RE: [mg71764]
>>> Functional decomposition (solving f[f[x]] = g[x] for given g)
>>>
>>> The problem falls generally into the area of functional equations.
>>> See J. Aczel, Functional Equations: History, Applications and
>>> Theory, or any of his other books. One can also try to find a fixed
>>> point in a function space.
>>>
>>>
>>> On 12/1/06, Ingolf Dahl <ingolf.dahl at telia.com> wrote:
>>>> Some further comments on the equation f(f(x)) = g(x):
>>>> For the case g(x) = exp(x) some work has been done; see
>>>> http://www.math.niu.edu/~rusin/known-math/index/39-XX.html ,
>>>> http://www.math.niu.edu/~rusin/known-math/99/sqrt_exp
>>>> and
>>>> http://www.newton.dep.anl.gov/newton/askasci/1993/math/MATH023.HTM .
>>>> (found by a Google search for "f(f(x))")
>>>>
>>>> However, I have not penetrated deeply enough into all references to
>>>> really see if there are good methods to find (precise) numerical
>>>> values for at least one solution of the equation. Or a plot...
>>>>
>>>> But some thoughts further, somewhat fishing in deep water: if we
>>>> have a function g(x) such that g(0) = 0 (or g(x0) = x0 for some x0)
>>>> and where g(x) also has a Maclaurin expansion with a positive first
>>>> coefficient, it should be easy to find a Maclaurin series which,
>>>> composed with itself, is identical to that of g(x). Examples of
>>>> such functions g(x) are sin(x) and sinh(x). Then there is the
>>>> question whether this series converges in some range; I have not
>>>> tested that yet. (In the links above they refer to a theorem by
>>>> Polya, stating that the convergence range must be limited.) For
>>>> some other functions, we could also think of expansions in
>>>> x^(Sqrt(2)).
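>>>>
>>>> (As a minimal sketch of that coefficient-matching idea, with the
>>>> illustrative names half, a1, a3 and only two unknown coefficients,
>>>> one can let Mathematica match the series of half(half(y)) against
>>>> that of sinh(y):
>>>>
>>>> half[y_] := a1*y + a3*y^3;
>>>> Reduce[{SeriesCoefficient[Series[half[half[y]] - Sinh[y], {y, 0, 3}], 1] == 0,
>>>>    SeriesCoefficient[Series[half[half[y]] - Sinh[y], {y, 0, 3}], 3] == 0,
>>>>    a1 > 0}, {a1, a3}]   (* a1 == 1 && a3 == 1/12 *)
>>>>
>>>> and higher coefficients follow in the same way.)
>>>>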
>>>> Suppose that we find such a function f(x) for g(x) = sinh(x); name
>>>> it fsinh(x), with convergence in some region near the origin. Then
>>>> it might be possible to extend that to the whole axis if, for any
>>>> x, we apply ginv (the inverse of g) a number of times until we
>>>> reach the convergence region (we might have to use extended
>>>> precision). There we apply f, and then apply g the same number of
>>>> times as we did with ginv. Then we should have obtained f(x).
>>>>
>>>> If we now look at the case g(x) = exp(x), we can note that if
>>>> x > 20, exp and 2*sinh are essentially identical (to machine
>>>> precision). Thus we could assume that fsinh(x)*Sqrt[2] is a good
>>>> approximation of the solution f(x) for g(x) = exp(x) and x > 20,
>>>> and then we might use the same procedure as for g(x) = x^2+1 to
>>>> extend this to zero. With the power of Mathematica available, just
>>>> a few lines of code are maybe all that is needed. But maybe some
>>>> devils are hiding in the details?
>>>>
>>>> Is there anyone who knows more about this?
>>>>
>>>> Ingolf Dahl
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Ingolf Dahl [mailto:ingolf.dahl at telia.com]
>>>>> Sent: 30 November 2006 12:06
>>>>> To: mathgroup at smc.vnet.net
>>>>> Subject: [mg71839] RE: [mg71764] Functional decomposition (solving
>>>>> f[f[x]] = g[x] for given g)
>>>>>
>>>>> Hi Kelly,
>>>>> Here is a rapid amateur answer (but without scary rhymes, as in my
>>>>> last answer). If you are satisfied with the numerical values of
>>>>> the function, you can investigate
>>>>>
>>>>> g[x_]:=1+x^2;
>>>>> ginv[x_]:=Sqrt[x-1];
>>>>> f[x_Real]:=Nest[ginv,Power[Nest[g,x,9],Sqrt[2]],9]
>>>>>
>>>>> f[x_Real] appears to be a good solution to f[f[x]] == g[x] to
>>>>> approximately machine precision. The solution is based on two
>>>>> observations:
>>>>> 1) Nest[g,x,9] is equivalent to Nest[f,x,18], and we reach very
>>>>> large values when this is calculated. There g[x] is approximately
>>>>> equal to x^2 (to machine precision), and f[x] can be approximated
>>>>> by x^Sqrt[2]. Thus we might reach a good approximation of
>>>>> Nest[f,x,19] by Power[Nest[g,x,9],Sqrt[2]].
>>>>> 2) We can then apply the inverse function ginv of g 9 times to
>>>>> reach f[x]. The relative error is reduced by the application of
>>>>> the inverse function.
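>>>>>
>>>>> (A quick numerical sanity check of the construction, at an
>>>>> arbitrarily chosen point:
>>>>>
>>>>> f[f[0.3]] - g[0.3]
>>>>>
>>>>> should come out at roughly machine-precision level.)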
>>>>>
>>>>> We can note that f[0.] is 0.6420945043908285`, which does not
>>>>> appear to be a known mathematical constant. But now it is.
>>>>> (If someone can check... I cannot reach the "Inverse Symbolic
>>>>> Calculator" at CESM. Is it down?)
>>>>>
>>>>> Generalize and write f[x] as DecompositionPower[x,1/2] and the
>>>>> solution to f[f[f[x]]] == g[x] as DecompositionPower[x,1/3]. For
>>>>> the given function g we can then try
>>>>>
>>>>> DecompositionPower[x_,p_]:=Nest[ginv,Power[Nest[g,x,9],Power[2,p]],9]
>>>>>
>>>>> and we might for instance plot
>>>>>
>>>>> Plot[DecompositionPower[0.,p],{p,0,1}]
>>>>>
>>>>> The function DecompositionPower[x_,p_] should be quite easy to
>>>>> generalize to a large set of polynomial functions g. Maybe it
>>>>> should then have the name FractionalNest?
>>>>>
>>>>> Best regards
>>>>>
>>>>> Ingolf Dahl
>>>>>
>>>>> -----Original Message-----
>>>>> From: Kelly Jones [mailto:kelly.terry.jones at gmail.com]
>>>>> Sent: 28 November 2006 12:04
>>>>> To: mathgroup at smc.vnet.net
>>>>> Subject: [mg71839] [mg71764] Functional decomposition (solving
>>>>> f[f[x]] = g[x] for given g)
>>>>>
>>>>> Given a function g[x], is there a standard methodology to find a
>>>>> function f[x] such that:
>>>>>
>>>>> f[f[x]] == g[x]
>>>>>
>>>>> A "simple" example (that doesn't appear to have a closed form
>>>>> solution):
>>>>>
>>>>> f[f[x]] == x^2+1
>>>>>
>>>>> There are probably several (approximable?
>>>>> power-series-expressible?) answers, but the most natural answer
>>>>> would be everywhere positive and monotone increasing for x>0.
>>>>>
>>>>> For x^2, there are at least two continuous solutions:
>>>>> x^Sqrt[2] and x^-Sqrt[2], the former being more "natural".
>>>>> It's somewhat amazing that x^2+1 is so much harder.
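>>>>>
>>>>> (For the x^2 case the solution can be verified symbolically, e.g.
>>>>>
>>>>> PowerExpand[(x^Sqrt[2])^Sqrt[2]]   (* x^2 *)
>>>>>
>>>>> keeping in mind that PowerExpand assumes a positive base.)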
>>>>>
>>>>> Of course, the next step would be to find f[x] such that:
>>>>>
>>>>> f[f[f[x]]] == g[x]
>>>>>
>>>>> for given g[x], and so on.
>>>>>
>>>>> Thought: is there any operator that converts functional
>>>>> composition to multiplication or something similar? That would
>>>>> reduce this problem to finding nth roots and applying the inverse
>>>>> operator.
>>>>>
>>>>> Other thought: for some reason, taking the geometric average of
>>>>> the identity function and g, and then iterating, seems like a good
>>>>> idea. E.g., Sqrt[x*g[x]], Sqrt[x*Sqrt[x*g[x]]], and so on.
>>>>>
>>>>> --
>>>>> We're just a Bunch Of Regular Guys, a collective group that's
>>>>> trying to understand and assimilate technology. We feel that
>>>>> resistance to new ideas and technology is unwise and ultimately
>>>>> futile.
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>
>>
>

