Re: Re: Re: smallest fraction
- To: mathgroup at smc.vnet.net
- Subject: [mg86997] Re: [mg86960] Re: [mg86911] Re: smallest fraction
- From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
- Date: Fri, 28 Mar 2008 03:17:29 -0500 (EST)
- References: <200803200757.CAA29500@smc.vnet.net> <200803210653.BAA18315@smc.vnet.net> <200803220554.AAA00496@smc.vnet.net> <200803230600.BAA25688@smc.vnet.net> <47E65E16.4010608@wolfram.com> <200803240645.BAA17083@smc.vnet.net> <200803250613.BAA10410@smc.vnet.net> <200803260952.EAA09438@smc.vnet.net> <C1C00C86-39C6-4A42-9556-9F63A4E0A6F6@mimuw.edu.pl> <8599CE30-9D67-4966-BA80-4CA993FBC944@mimuw.edu.pl> <200803271318.IAA19801@smc.vnet.net> <47EBB64F.8060608@wolfram.com>
On 27 Mar 2008, at 15:59, Daniel Lichtblau wrote:

> Andrzej Kozlowski wrote:
>> I note however that:
>>
>> a = RootApproximant[Pi, 1]
>> 80143857/25510582
>>
>> and
>>
>> Rationalize[Pi, 10^(-15)]
>> 80143857/25510582
>>
>> which are exactly the same. As far as I know Rationalize does not
>> use lattice reduction (?) but a different algorithm, related to
>> the way one obtains a continued fraction expansion of Pi (at
>> least I believe that this is what Rationalize does). I am not sure
>> if the fact that the same answer is returned by the above is a
>> coincidence or it means that in fact the two algorithms amount to
>> the same thing in this particular case?
>>
>> Andrzej Kozlowski
>> [...]
>
> They are different. The LLL (or PSLQ) based approach is sometimes
> regarded as a higher-dimensional analog of continued fraction
> approximation. I believe there is a way of casting it that shows a
> relation between the methods. But I am not even certain of this, let
> alone know how to show it myself.
>
> The operation of the LLL approach, in the degree one case, is this.
> We pick some digit size, say 16 (so as to be around
> $MachinePrecision, just for illustration purposes). We multiply our
> goal, Pi, by that number, and round off. Call this goal. We then
> form the lattice lat =
>
> {{1, 0, goal},
>  {0, 1, 10^16}}
>
> We reduce this to redlat, and look at the shortest vector, call it
> {a, b, c}. The identity matrix at the left of 'lat' has in effect
> recorded the row operations needed to form this vector. In nice cases,
> c will be small, indicating that we obtain an approximate zero as
> a*goal + b*10^16. But this simply means that goal is approximately
> -b*10^16/a, hence Pi is approximately -b/a.
>
> In this example:
>
> In[101]:= InputForm[-redlat[[1,2]]/redlat[[1,1]]]
> Out[101]//InputForm= 80143857/25510582
>
> It is reasonable to ask whether this approach, if used at the same
> precision/#digits as Rationalize, will always give the same result.
> I do not know. But I would expect them to be close, and usually
> identical. Reason being, both tend to find a result with small
> numerator and denominator, with "small" typically being around half
> the number of digits we use in our approximation. There are not
> likely to be too many contenders.
>
> Daniel Lichtblau
> Wolfram Research

I realize now (I think I noticed this once, a long time ago, but forgot) that this way of using LLL seems rather different from the one I once learned and tried to (roughly) describe in this thread. In particular, the approach to "recognizing algebraics" you describe only uses lattice reduction with the standard Euclidean inner product (or the standard norm), whereas the approach that I had in mind (described, for example, in Henri Cohen's "A Course in Computational Algebraic Number Theory", page 100) uses lattice reduction with a specially constructed positive definite quadratic form. The two approaches appear to be equivalent, but I can't at this time clarify the relationship between them. Is there something obvious that I am missing?

Furthermore, I have noticed that

Developer`SystemOptions["LatticeReduceOptions"]

{"LatticeReduceOptions" -> {"LatticeReduceArithmetic" -> Integers,
   "LatticeReduceInnerProduct" -> Automatic,
   "LatticeReduceRatioParameter" -> Automatic}}

makes it possible to use a non-standard inner product in LatticeReduce. Presumably a custom inner product should be specified as a pure function? I guess that should make it possible to implement the alternative approach described in Cohen's book?
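Incidentally, just to make sure I am reading the degree-one construction correctly, here it is spelled out the way I understand it (the digit count of 16 is the one from your illustration, and the variable names are of course my own), with Rationalize alongside for comparison:

digits = 16;
goal = Round[Pi*10^digits];                (* Pi scaled and rounded to an integer *)
lat = {{1, 0, goal}, {0, 1, 10^digits}};   (* the lattice described above *)
redlat = LatticeReduce[lat];
-redlat[[1, 2]]/redlat[[1, 1]]             (* -b/a, the rational approximation of Pi *)
Rationalize[Pi, 10^(-15)]                  (* for comparison *)

If I have this right, the -b/a line should reproduce your In[101] value, 80143857/25510582, which is also what Rationalize returns; whether the two agree at every other choice of precision I have not checked.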
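As for the inner product option itself, purely as a guess at the syntax (I have not verified that "LatticeReduceInnerProduct" accepts anything other than Automatic, and the pure-function form below is only my assumption), I imagine one would try something along the lines of

Developer`SetSystemOptions["LatticeReduceOptions" ->
  {"LatticeReduceInnerProduct" -> (#1.q.#2 &)}]   (* q: a positive definite (Gram) matrix, to be supplied *)

where q would be the Gram matrix of the positive definite quadratic form from Cohen's construction.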
(Of course, if they are clearly equivalent there would not be much point in doing this, but if it is not obvious it might be interesting to compare the answers one gets from each of them.)

Andrzej Kozlowski
- Follow-Ups:
- Re: Re: Re: Re: smallest fraction
- From: Artur <grafix@csl.pl>
- Re: Re: Re: Re: smallest fraction
- From: Daniel Lichtblau <danl@wolfram.com>
- Re: Re: Re: Re: smallest fraction
- References:
- smallest fraction
- From: masmoudi <mas_atef@yahoo.fr>
- Re: smallest fraction
- From: Curtis Osterhoudt <cfo@lanl.gov>
- Re: Re: smallest fraction
- From: Artur <grafix@csl.pl>
- Re: Re: Re: smallest fraction
- From: Artur <grafix@csl.pl>
- Re: Re: Re: Re: smallest
- From: Carl Woll <carlw@wolfram.com>
- Re: Re: Re: Re: Re: smallest
- From: Andrzej Kozlowski <akoz@mimuw.edu.pl>
- Re: smallest fraction
- From: Artur <grafix@csl.pl>
- Re: Re: smallest fraction
- From: Andrzej Kozlowski <akoz@mimuw.edu.pl>
- smallest fraction