
MathGroup Archive 2008


Re: Re: Re: Re: smallest fraction


For the mathematical problem, see for example
http://www.research.att.com/~njas/sequences/A138343

MATHEMATICA PROBLEM

1) N[Pi, n+1] approximates Pi to n decimal digits (rounded, not truncated)
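For example, with n = 10 the last displayed digit is rounded up rather than cut off:

```mathematica
N[Pi, 11]
(* 3.1415926536 -- Pi = 3.14159265358979..., so the final digit rounds up to 6 *)
```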

2) We build a set of lower approximations taken from the continued 
fraction convergents (even-order convergents are lower bounds, odd-order 
are upper bounds for Pi):
a = {}; Do[AppendTo[a, FromContinuedFraction[ContinuedFraction[Pi, n]]], 
{n, 2, 100, 2}]; a
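As a quick illustration, the first few convergents alternate below and above Pi:

```mathematica
Table[FromContinuedFraction[ContinuedFraction[Pi, n]], {n, 1, 5}]
(* {3, 22/7, 333/106, 355/113, 103993/33102} -- below, above, below, above, below Pi *)
```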

3) We build the LLL-based set (base-dependent, since we feed in decimal approximations) via
<< NumberTheory`Recognize`
c = {}; b = {}; a = {}; Do[k = Recognize[N[Pi, n + 1], 1, x]; 
If[MemberQ[a, k], AppendTo[b, n], w = Solve[k == 0, x]; AppendTo[c, x /. 
w[[1]]]; AppendTo[a, k]], {n, 2, 100}]; c
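In versions where the NumberTheory`Recognize` package is unavailable, the built-in RootApproximant (which also appears later in this thread) can play a similar role for degree 1; a minimal sketch:

```mathematica
(* RootApproximant[x, 1] looks for a degree-1 algebraic number, i.e. a rational,
   close to x -- comparable to Recognize[x, 1, t] *)
RootApproximant[Pi, 1]
(* 80143857/25510582, the same value quoted elsewhere in this thread *)
```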

4) Now we compare
Complement[a,c]
and
Complement[c,a]

and we can observe the differences between these two methods.

5) Not all Recognize results are lower bounds,
but all of them are continued fraction convergents, which we can check:
d = {}; Do[AppendTo[d, FromContinuedFraction[ContinuedFraction[Pi, n]]], 
{n, 2, 100}]; d
and now Complement[c, d] == {},
which means that Recognize uses an algorithm similar to continued fractions, 
as Daniel says.

Best wishes,
Artur Jasinski




Andrzej Kozlowski wrote:
> On 27 Mar 2008, at 15:59, Daniel Lichtblau wrote:
>   
>> Andrzej Kozlowski wrote:
>>     
>>> I note however that:
>>> a = RootApproximant[Pi, 1]
>>> 80143857/25510582
>>> and
>>>  Rationalize[Pi, 10^(-15)]
>>> 80143857/25510582
>>> which are exactly the same. As far as I know, Rationalize does not
>>> use lattice reduction (?) but a different algorithm, related to
>>> the way one obtains a continued fraction expansion of Pi (at
>>> least I believe that this is what Rationalize does). I am not sure
>>> if the fact that the same answer is returned by the above is a
>>> coincidence, or it means that in fact the two algorithms amount to
>>> the same thing in this particular case?
>>> Andrzej Kozlowski
>>> [...]
>>>       
>> They are different. The LLL (or PSLQ) based approach is sometimes
>> regarded as a higher-dimensional analog of continued fraction
>> approximation. I believe there is a way of casting it that shows a
>> relation between the methods. But I am not even certain of this, let  
>> alone know how to show it myself.
>>
>> The operation of the LLL approach, in the degree one case, is this.  
>> We pick some digit size, say 16 (so as to be around  
>> $MachinePrecision, just for illustration purposes). We multiply our  
>> goal, Pi, by that number, and round off. Call this goal. We then  
>> form the lattice lat=
>>
>> {{1,0,goal},
>> { 0,1,10^16}}
>>
>> We reduce this to redlat, and look at the shortest vector, call
>> it {a,b,c}. The identity matrix at the left of 'lat' has in effect  
>> recorded row operations needed to form this vector. In nice cases, c  
>> will be small, indicating that we obtain an approximate zero as  
>> a*goal+b*10^16. But this simply means that goal is approximately - 
>> b*10^16/a, hence Pi is approximately -b/a.
>>
>> In this example:
>>
>> In[101]:= InputForm[-redlat[[1,2]]/redlat[[1,1]]]
>> Out[101]//InputForm= 80143857/25510582
>>
>> It is reasonable to ask whether this approach, if used at the same  
>> precision/#digits as Rationalize, will always give the same result.  
>> I do not know. But I would expect them to be close, and usually  
>> identical. Reason being, both tend to find a result with small  
>> numerator and denominator, with "small" typically being around half  
>> the number of digits we use in our approximation. There are not  
>> likely to be too many contenders.
>>
>> Daniel Lichtblau
>> Wolfram Research
>>
>>
>>
>>     
>
>
> I realize now (I think I noticed this once a long time ago but
> forgot) that this way of using LLL seems rather different from the one
> I once learned and tried to (roughly) describe in this thread. In
> particular, the approach to "recognizing algebraics" you describe only  
> uses lattice reduction for the standard euclidean inner product (or  
> the standard norm) whereas the approach that I had in mind (described,  
> for example, in Henri Cohen's "Course in Computational Number Theory",  
> page 100),  uses lattice reduction for a specially constructed  
> positive definite quadratic form. The two approaches appear to be  
> equivalent but I can't at this time clarify the relationship between  
> them. Is there something obvious that I am missing?
>
> Furthermore, I have noticed that
>
>   Developer`SystemOptions["LatticeReduceOptions"]
> {"LatticeReduceOptions" ->
>       {"LatticeReduceArithmetic" -> Integers,
>         "LatticeReduceInnerProduct" -> Automatic,
>         "LatticeReduceRatioParameter" -> Automatic}}
>
> makes it possible to use a non-standard inner product in
> LatticeReduce. Presumably a custom inner product should be
> specified as a pure function?
> I guess that should make it possible to implement the alternative
> approach described in Cohen's book. (Of course, if they are clearly
> equivalent there would not be much point in doing this, but if it is not
> obvious it might be interesting to compare the answers one gets from
> each of them.)
>
> Andrzej Kozlowski
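Daniel's degree-one lattice construction above can be sketched directly (a sketch under his stated assumption of 16 digits; the names goal, lat, redlat follow his post):

```mathematica
digits = 16;
goal = Round[Pi*10^digits];    (* multiply the target by 10^16 and round off *)
lat = {{1, 0, goal},
       {0, 1, 10^digits}};
redlat = LatticeReduce[lat];   (* the identity block records the row operations *)
(* if the first row {a, b, c} has small c, then a*goal + b*10^16 ~ 0,
   so Pi ~ -b/a *)
-redlat[[1, 2]]/redlat[[1, 1]]
(* 80143857/25510582, per Daniel's In[101] *)
```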

