MathGroup Archive 2011


Re: why extending numbers by zeros instead of dropping precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg118003] Re: why extending numbers by zeros instead of dropping precision
  • From: Richard Fateman <fateman at cs.berkeley.edu>
  • Date: Fri, 8 Apr 2011 04:14:02 -0400 (EDT)
  • References: <ink99t$g94$1@smc.vnet.net>

On 4/7/2011 5:05 AM, Noqsi wrote:
> On Apr 6, 3:12 am, Richard Fateman<fate... at cs.berkeley.edu>  wrote:
>> On 4/4/2011 3:30 AM, Noqsi wrote:
>>
>>> On Mar 31, 3:06 am, Richard Fateman<fate... at eecs.berkeley.edu>   wrote:
>>>> It is occasionally stated that subtracting nearly equal quantities from
>>>> each other is a bad idea and somehow unstable or results in noise. (JT
>>>> Sardus said it on 3/29/2011, for example.)
>>
>>>>     This is not always true; in fact it may be true hardly ever.
>>
>>> Hardly ever? What a silly assertion. This has been a major concern
>>> since the dawn of automatic numerical analysis.
>>
>> When was this dawn?
>
> Oh, if you want a date, February 14, 1946 will do, although anyone who
> knows this history can argue for earlier or later as they please.

Oh, I see, by "automatic numerical analysis" you mean "numerical 
computation by some mechanical calculation apparatus [automaton??]".

I was assuming you had in mind some kind of "automatic" analysis as 
opposed to the "human" analysis which is commonly used in order to 
invent algorithms and write programs which then do numerical calculation.
>
>> and where has it taken us to date?
>
> Lots of places. The Moon, for example.

Oh, if you mean scientific numerical digital computing, sure.  Though 
you've got to wonder how the Russians, probably WITHOUT much in the way 
of computers, put up an artificial satellite.
>
>> Do you perhaps mean "automatic ERROR analysis"?
>
> No. I mean automatic numerical analysis, as opposed to manual methods
> (Gauss, Adams, Richardson, ...). But I assume these folks were aware
> of significance issues, and handled them in informal intelligent human
> ways.
Oh, so they weren't automatic, but handled in some human way...

Try Wilkinson?
> Computers are stupider, and more capable of propagating error to
> the point of absurdity, so they require more care from their human
> tenders.
More care than what?  I think that the large numbers of calculations done 
by "hand" or by mechanical calculators (in pre-electronic days) would 
propagate errors too; and to the extent that humans were involved in 
transferring numbers to and from paper, and keying them in, those 
calculations also got more checking for blunders.

>
>>
>> See, for example http://www.cs.berkeley.edu/~wkahan/Mindless.pdf

I highly recommend you look at the above reference.


>>
>> or if you can find it,
>> W.M. Kahan, "The Regrettable Failure of Automated Error Analysis,"
>> mini-course,<i>Conf. Computers and Mathematics,</i>  Massachusetts Inst.
>> of Technology, 1989.
>
> I was thinking more of
>
> H. H. Goldstine and J. von Neumann, "Numerical inverting of matrices
> of high order", Amer. Math. Soc. Bull. 53 (1947), 1021-1099

This was "automatic numerical analysis"?
>
> although appreciation of the problem goes back farther (why did the
> ENIAC have ten digit accumulators?).

You might refer to MANIAC, which had significance arithmetic, and also 
lost out.
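The significance arithmetic mentioned here can be sketched as a toy model (a minimal illustration of the general idea, not MANIAC's or Mathematica's actual scheme; the class and error bounds below are invented for the example): carry an absolute error bound along with each value, and watch how subtracting nearly equal quantities eats significant digits.

```python
import math

class Sig:
    """Toy significance arithmetic: a value paired with an absolute
    error bound.  A sketch for illustration only."""
    def __init__(self, value, err):
        self.value = value
        self.err = err  # absolute error bound

    def __sub__(self, other):
        # Absolute error bounds add under subtraction.
        return Sig(self.value - other.value, self.err + other.err)

    def digits(self):
        # Approximate count of significant decimal digits remaining.
        if self.value == 0:
            return 0.0
        return max(0.0, math.log10(abs(self.value) / self.err))

a = Sig(1.00000001, 1e-10)   # about 10 significant digits
b = Sig(1.00000000, 1e-10)
d = a - b                    # nearly equal operands
print(d.value, round(d.digits(), 2))  # only ~1.7 digits survive
```

The subtraction itself is exact here; what the model records is that the *relative* uncertainty of the small difference is enormous compared to that of the operands.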
>
>>
>> Oh, as for subtraction being a problem... Typically if you subtract two
>> nearly equal items you get a small quantity.  The small quantity is not
>> troublesome if you add it to something that is not so small.  What
>> sometimes happens is that you do something ELSE.  Like divide by it.
>> That is more likely to cause problems.
>
> Exactly. That's a very common issue in numerical analysis. And that's
> why your "hardly ever" assertion is silly.

I recommend you look at the Kahan paper, which explains this issue.
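The point being argued above can be made concrete with a standard textbook example (my choice of function, not one from this thread): the small difference from a subtraction is harmless on its own, but dividing by a comparably small quantity amplifies the cancellation error.  Here (1 - cos x)/x^2, which tends to 1/2 as x -> 0, is computed naively and via the algebraically equivalent cancellation-free form 2 sin^2(x/2)/x^2.

```python
import math

def naive(x):
    # 1 - cos(x) cancels catastrophically for small x,
    # and dividing by the tiny x*x amplifies the damage.
    return (1.0 - math.cos(x)) / (x * x)

def stable(x):
    # 2*sin(x/2)^2 == 1 - cos(x), but with no subtraction of
    # nearly equal quantities.
    s = math.sin(x / 2.0)
    return 2.0 * s * s / (x * x)

x = 1e-8
print(naive(x))   # far from the true limit 0.5
print(stable(x))  # close to 0.5
```

Note that if the small difference 1 - cos(x) were merely *added* to something of ordinary size, the rounding error in it would be negligible, which is exactly the distinction drawn above.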




