MathGroup Archive 2011

Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120283] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: "W. Craig Carter" <ccarter at mit.edu>
  • Date: Sat, 16 Jul 2011 05:43:05 -0400 (EDT)
  • References: <201107150121.VAA23714@smc.vnet.net>

Sorry to add to the slow creep of this thread away from the topic of this group, but for what it's worth:

On 14 Jul 2011, at 9:21 PM, Richard Fateman wrote:


>> Of course, an experiment is different from a number; but the experimentally determined number and its sources of imprecision in a computer is just another number on the computer.
> Yes, though I would again not use "imprecision" but "uncertainty". I
> would also distinguish between the uncertainty of the input and the
> potential for subsequent computational errors.

I believe you are conflating several things.
Usage may vary among fields but, for an experimentalist, the following is perhaps a reasonable taxonomy (a short Mathematica sketch follows the list):
1. Imprecision derives from the quality of the measuring device (e.g., a post-office scale versus a laboratory scale). This sets bounds on the precision of a single measurement.
2. Uncertainty derives from repeated measurements of the same quantity (i.e., unexplained differences between nominally identical measurements).
3. Variability derives from the distribution of the measurements and their uncertainties.
All of these can be independent of numerical machine (i.e., computer) precision, though in practice they are not.
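Here is a minimal Mathematica sketch of that taxonomy. Everything in it is invented for illustration: the 103.2 g sample mass, the 0.4 g scatter, and the 0.1 g scale resolution are hypothetical numbers, not anything from the thread.

    (* twenty repeated weighings of one sample, in grams *)
    readings = RandomVariate[NormalDistribution[103.2, 0.4], 20];
    readings = Round[readings, 0.1];  (* 1. imprecision: the scale reads to 0.1 g *)
    StandardDeviation[readings]       (* 2. uncertainty: spread across the repeats *)
    Histogram[readings]               (* 3. variability: the distribution itself *)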

>> There is no difference in meaning unless the precision of the experiment exceeds that of the measurement.
> You lost me here.

What I mean here is that two devices set the precision when computation is involved: the measuring instrument and the computing machine. The lower of the two precisions governs what can legitimately be reported.

For example, a very fine measurement could be precise to, say, 10^(-64) grams (rescaling to zeptograms does not increase precision). If that number were stored and used for subsequent computation with insufficient precision to represent the 10^(-64), then it is the machine (computer) precision that bounds the quality of any subsequent report. If there is enough numerical precision, but an algorithm has the effect of decreasing precision, then it is the algorithm together with the computer that bounds the quality of any subsequent report.
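A minimal Mathematica sketch of that point, reusing the 10^(-64) figure from the paragraph above: at machine precision (roughly 16 digits) the fine part of the measurement is silently lost, while sufficient working precision preserves it.

    mass = 1 + 10^-64;   (* exact input: 1 gram plus a 10^-64 g correction *)
    N[mass] - 1          (* machine precision: gives 0., the correction is lost *)
    N[mass, 70] - 1      (* 70-digit arithmetic: the 10^-64 part survives *)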

> 
> or a thermometer, etc., sort of work if you are only doing a few
> arithmetic operations on them and you know personally that the numbers
> are not correlated.  If you put a few thousand numbers in a computer and
> do a few billion arithmetic operations on them, the same rules don't
> work so well.
>


Yes, sure, details of the propagation of errors become more intricate.  Andrzej Kozlowski's extensive list of sources reinforces that.
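One concrete illustration, with the caveat that the logistic map below is my stand-in for "a few billion arithmetic operations", not an example from the thread: Mathematica's significance arithmetic tracks how iteration erodes precision, losing a fraction of a digit per step here.

    x = N[1/3, 30];                  (* start with 30 significant digits *)
    y = Nest[4 # (1 - #) &, x, 50];  (* fifty iterations of the logistic map *)
    Precision[y]                     (* well below 30: the propagated error has grown *)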

WCC

