Re: a tricky limit
- To: mathgroup at smc.vnet.net
- Subject: [mg15821] Re: a tricky limit
- From: Daniel Lichtblau <danl>
- Date: Mon, 8 Feb 1999 03:25:46 -0500 (EST)
- Organization: Wolfram Research, Inc.
- References: <199901080915.EAA03988@smc.vnet.net.> <79e810$96b@smc.vnet.net>
- Sender: owner-wri-mathgroup at wolfram.com
Jurgen Tischer wrote:
>
> Arnold,
> I think this is the last time I comment about your tricky limit. This
> nice David Lichtblau read all my code and comments and pointed out to
> me some errors I made and showed me that my upper limit in fact should
> be much better as I thought. In the end there was little I had to
> correct, essentially it amounts to that my corrected lower bound is
> much better so here is the result:
>
> upper bound: 2.292173695248657289096168980403 (as before)
>
> lower bound: 2.292173695221057149738189560492 (much better)
>
> this makes it ten digits and an error less than 3 10^-11. I wonder what
> you need that thing for, but it was lots of fun.
>
> Jurgen
>
> If someone wants the notebook, please email me.
>
> Arnold Knopfmacher wrote:
> >
> > I wish to obtain a numerical estimate (say 8 decimal digits) of the
> > limit as x tends to 1 from below of the function
> > h[x]=(Product[(1-fm[x]/(m+1)),{m,2,Infinity}])/(1-x) where
> > fm[x]=x^(m-m/d) and d is the smallest divisor of m that is greater than
> > 1. The problem is that when I replace Infinity by say 1000 as the upper
> > limit of the product, the function blows up near 1. Visual inspection
> > of the graph of h[x] for 0<x<0.9 say, suggests that the limit should
> > have a value around 2.1. Can anyone help?
> >
> > Thanks
> > Arnold Knopfmacher
> > Dept of Computational and Applied Math
> > Witwatersrand University
> > South Africa

I have a few comments. First let me congratulate Jurgen Tischer on
finding a much closer approximation to the limit than I could find, or
had even thought would be computationally feasible. He did this by
judiciously rewriting the series for the logarithm, collecting terms
with equal denominators, and then carefully formulating limit results
for such terms.

What I consider to be a tricky part of the job is to approximate the
sum below.

Sum[c[k]/b[k]*Log[1 - 1/Prime[k]], {k, 1, Infinity}]

The auxiliary functions c[] and b[] are defined, as per JT's post, as

b[k_] := Product[Prime[j], {j, 1, k}]
c[k_] := Product[Prime[j] - 1, {j, 1, k - 1}]

These were correct as stated by JT, proof or lack thereof
notwithstanding. He and I had some differences as to how one might
bound this. I am not sure whether we reached agreement on what he
termed the lower bound, but I think I can provide a simple explanation
that will yield an excellent approximation for the original limit. We
will estimate the sum above by truncating it, and then bound the
magnitude of the error.

estimate[n_] := Sum[c[k]/b[k]*Log[1 - 1/Prime[k]], {k, 1, n}]
error[n_] := Sum[c[k]/b[k]*Log[1 - 1/Prime[k]], {k, n + 1, Infinity}]

As I recall, my estimate[n] corresponds to JT's ub[] function. He did
the calculation for n somewhere larger than 10^9, so I will defer to
his result for that computation and just bound the error.

To get this error bounded, recall that Prime[k] is asymptotically
approximated by k*Log[k] and that |Log[1-1/m]| is asymptotically
approximated by 1/m. Also note that the products c[k]/b[k] are
conveniently bounded by 1/Prime[k] (use a telescoping argument to see
this). In fact, for k > 20000 one can check, just using a numerical
computation, that they are bounded by .049/Prime[k]. So
Abs[error[10^9]] is bounded by

Sum[(.049/Prime[k])*(1.00001/Prime[k]), {k, 10^9, Infinity}]

which in turn is bounded by

Sum[1.0001^2*(.049/(k*Log[k]))*(1.00001/(k*Log[k])), {k, 10^9, Infinity}]

obtained by throwing in factors of 1.0001 to account for error in the
asymptotic approximations. This summand is seen to be less than
.05 * 1/(k*Log[k])^2.
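A quick aside on that .049 bound: one does not need the huge exact
products to check it, because

c[k]/b[k]*Prime[k] == Product[1 - 1/Prime[j], {j, 1, k - 1}]

which also makes the 1/Prime[k] bound transparent (the product is less
than 1). Here is one way such a numerical check might go; the names
ratio and estN are mine, just for illustration, and estN is only a
cheap machine-precision stand-in for estimate[n], not the
high-precision computation JT actually did.

ratio = 1.;
Do[ratio *= 1. - 1./Prime[j], {j, 1, 20000}];
ratio  (* c[k]/b[k]*Prime[k] at k = 20001; comes out comfortably below
          .049, and the running product only decreases as k grows *)

estN[n_] := Module[{p = 1., s = 0.},
  Do[
    s += (p/Prime[k]) Log[1. - 1./Prime[k]];  (* p is c[k]/b[k]*Prime[k] here *)
    p *= 1. - 1./Prime[k],
    {k, 1, n}];
  s]

estN[100000]  (* a rough truncated value, nowhere near n > 10^9 *)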
We now use the fact that the sum is dominated by the corresponding
integral taken from one less than 10^9.

In[33]:= N[1/20 * Integrate[1/(k*Log[k])^2, {k, 10^9 - 1, Infinity}]]

Out[33]= 1.06562*10^-13

I threw in a fair amount of slop, so one can safely conclude that the
error is less than 10^(-13). Since JT computed a value that is near
unity, even after exponentiating he has a result that is correct to
twelve places. This I find to be quite impressive, as I had doubts as
to whether one could even get six.

-----------------------------

I had tried to tangle with this using truncated power series, without
rearranging terms by denominators. One can see why this is inferior:
the error from truncating after n terms is bounded below by something
on the order of 1/n. I will show why this is the case below.

Again, the messy part is a sum of the form (using a meta-notation for
the inner sum)

Sum[x^n * (1/n - Sum[1/(k+1), {f[k] == n}]), {n, 1, Infinity}]

where

f[k_] := k - k/Divisors[k][[2]]

This basically corresponds to the power series for -Log[1-x] added to

Sum[-x^f[k]/(k+1), {k, 3, Infinity}]

that is, I am dropping terms beyond first order contributed by the
logs in the original log-of-product. I did this because bounding the
contributions from those later terms is feasible. In fact, JT found an
exact limit for them.

It is easy to see that the convergence, on truncation, cannot be
better than O(1/n). This is because for odd n the coefficient of x^n
is (n+1)/(2*n^2+n), which is around 1/(2*n). Actually, from this we
cannot even conclude that the infinite sum converges, but let us
accept that it does (as we already know to be the case anyway).

So what more can we say about the coefficients? I can show that the
nth coefficient is always bounded by log(log(n))/n (logs taken base
2), but can they be shown to be O(1/n)? Another curious point is that
the signs seem to alternate positive/negative/positive/negative
several times, followed sporadically by three negatives in a row (a
short sketch for computing these coefficients appears after my
signature). Is this always the case? Is there a pattern as to how many
+/- sequences will precede a triple negative?

Daniel Lichtblau
Wolfram Research
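P.S. For anyone who wants to experiment with the sign question, here
is a minimal sketch for computing the coefficients directly. It
assumes the inner sum runs over all k >= 2 with f[k] == n (starting at
k = 2 rather than k = 3 only affects the n = 1 coefficient), and it
uses the fact that f[k] >= k/2, so only k <= 2n can contribute to the
coefficient of x^n. The name coeff is just for illustration.

f[k_] := k - k/Divisors[k][[2]]

(* coefficient of x^n: 1/n from -Log[1-x], minus 1/(k+1) for each
   k with f[k] == n *)
coeff[n_] := 1/n - Total[1/(# + 1) & /@ Select[Range[2, 2 n], f[#] == n &]]

(* sign pattern of the first 60 coefficients *)
Table[Sign[coeff[n]], {n, 1, 60}]

(* for odd n only k = 2 n contributes, since odd k always gives even
   f[k]; this is where (n + 1)/(2 n^2 + n) comes from *)
Table[coeff[n] == (n + 1)/(2 n^2 + n), {n, 1, 59, 2}]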