MathGroup Archive 2011

Re: why extending numbers by zeros instead of dropping precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg118024] Re: why extending numbers by zeros instead of dropping precision
  • From: DrMajorBob <btreat1 at austin.rr.com>
  • Date: Sat, 9 Apr 2011 07:11:47 -0400 (EDT)

Reading further in the Kahan paper, we see that either answer can be
"correct" if we consider that reals are fuzzy.

The exact calculation is very fast, too:

Timing@First@Nest[e, {4 + 1/4, 4}, 79]

{0.00099,
  206795153138256918939565417139009598365577843034794672964/
   41359030627651383817474849310671104336332210648235594113}
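
For comparison, ordinary machine precision lands on the attractor well
before 79 iterations (a quick check, with e as defined in the quoted
message below):

First@Nest[e, {4.25, 4.}, 79]

100.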

Bobby

On Fri, 08 Apr 2011 20:12:05 -0500, DrMajorBob <btreat1 at austin.rr.com>  
wrote:

> Here's a result in Mathematica for J.-M. Muller's recurrence, section 5
> of the Kahan paper:
>
> Clear[e]
> e[y_, z_] := 108 - (815 - 1500/z)/y
> e[{last_, previous_}] := {e[last, previous], last}
>
> digits = 80;
> Block[{$MinPrecision = digits, $MaxPrecision = digits},
>   N@First@Nest[e, {SetPrecision[4.25, digits], 4}, 80]
>   ]
>
> 5.
>
> (Correct.)
>
> With just ONE less digit, however:
>
> digits = 79;
> Block[{$MinPrecision = digits, $MaxPrecision = digits},
>   N@First@Nest[e, {SetPrecision[4.25, digits], 4}, 80]
>   ]
>
> 100.005
>
> (The spurious "attractor".)
>
> Bobby
>
> On Fri, 08 Apr 2011 03:14:02 -0500, Richard Fateman  
> <fateman at cs.berkeley.edu> wrote:
>
>> On 4/7/2011 5:05 AM, Noqsi wrote:
>>> On Apr 6, 3:12 am, Richard Fateman <fate... at cs.berkeley.edu> wrote:
>>>> On 4/4/2011 3:30 AM, Noqsi wrote:
>>>>
>>>>> On Mar 31, 3:06 am, Richard Fateman <fate... at eecs.berkeley.edu>
>>>>> wrote:
>>>>>> It is occasionally stated that subtracting nearly equal quantities  
>>>>>> from
>>>>>> each other is a bad idea and somehow unstable or results in noise.  
>>>>>> (JT
>>>>>> Sardus said it on 3/29/2011, for example.)
>>>>
>>>>>>     This is not always true; in fact it may be true hardly ever.
>>>>
>>>>> Hardly ever? What a silly assertion. This has been a major concern
>>>>> since the dawn of automatic numerical analysis.
>>>>
>>>> When was this dawn?
>>>
>>> Oh, if you want a date, February 14, 1946 will do, although anyone who
>>> knows this history can argue for earlier or later as they please.
>>
>> Oh, I see, by "automatic numerical analysis"  you mean "numerical
>> computation by some mechanical calculation apparatus [automaton??]"
>>
>> I was assuming you had in mind some kind of "automatic" analysis as
>> opposed to the "human" analysis which is commonly used in order to
>> invent algorithms and write programs which then do numerical  
>> calculation.
>>>
>>>> and where has it taken us to date?
>>>
>>> Lots of places. The Moon, for example.
>>
>> Oh, if you mean scientific numerical digital computing, sure.  Though
>> you've got to wonder how the Russians, probably WITHOUT much in the way
>> of computers, put up an artificial satellite.
>>>
>>>> Do you perhaps mean "automatic ERROR analysis"?
>>>
>>> No. I mean automatic numerical analysis, as opposed to manual methods
>>> (Gauss, Adams, Richardson, ...). But I assume these folks were aware
>>> of significance issues, and handled them in informal intelligent human
>>> ways.
>> Oh, so they weren't automatic, but handled in some human way...
>>
>> Try Wilkinson?
>>
>>> Computers are stupider, and more capable of propagating error to
>>> the point of absurdity, so they require more care from their human
>>> tenders.
>> More care than what? I think that large numbers of calculations done
>> by "hand" or by mechanical calculators (as in pre-electronic days)
>> would propagate errors too; they would also include more checking for
>> blunders, to the extent that humans were involved in transferring
>> numbers to and from paper and keying them in.
>>
>>>
>>>>
>>>> See, for example http://www.cs.berkeley.edu/~wkahan/Mindless.pdf
>>
>> I highly recommend you look at the above reference.
>>
>>
>>>>
>>>> or if you can find it,
>>>> W.M. Kahan, "The Regrettable Failure of Automated Error Analysis,"
>>>> mini-course, Conf. Computers and Mathematics, Massachusetts Inst.
>>>> of Technology, 1989.
>>>
>>> I was thinking more of
>>>
>>> H. H. Goldstine and J. von Neumann, "Numerical inverting of matrices
>>> of high order", Amer. Math. Soc. Bull. 53 (1947), 1021-1099
>>
>> This was "automatic numerical analysis"?
>>>
>>> although appreciation of the problem goes back farther (why did the
>>> ENIAC have ten-digit accumulators?).
>>
>> You might refer to MANIAC, which had significance arithmetic, and also
>> lost out.
>>>
>>>>
>>>> Oh, as for subtraction being a problem... Typically if you subtract  
>>>> two
>>>> nearly equal items you get a small quantity.  The small quantity is  
>>>> not
>>>> troublesome if you add it to something that is not so small.  What
>>>> sometimes happens is that you do something ELSE.  Like divide by it.
>>>> That is more likely to cause problems.
>>>
>>> Exactly. That's a very common issue in numerical analysis. And that's
>>> why your "hardly ever" assertion is silly.
>>
>> I recommend you look at the Kahan paper, which would explain to you this
>> issue.

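On the subtraction point in the quoted exchange: the usual failure mode
is not the subtraction itself but dividing by (or otherwise magnifying)
the small difference. A minimal sketch at machine precision, with
illustrative definitions f and g that are not from the thread:

f[x_] := (1 - Cos[x])/x^2   (* cancellation, then division by a tiny x^2 *)
g[x_] := 2 Sin[x/2]^2/x^2   (* same function, rewritten without cancellation *)

{f[1.*^-8], g[1.*^-8]}

{0., 0.5}

The subtraction 1 - Cos[x] is harmless on its own; the division by x^2
is what promotes the cancelled digits into a completely wrong answer.
The true limit as x -> 0 is 1/2.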

-- 
DrMajorBob at yahoo.com

