Re: Numerical accuracy/precision - this is a bug or a feature?
*To*: mathgroup at smc.vnet.net
*Subject*: [mg120251] Re: Numerical accuracy/precision - this is a bug or a feature?
*From*: Richard Fateman <fateman at eecs.berkeley.edu>
*Date*: Thu, 14 Jul 2011 21:21:03 -0400 (EDT)
On 7/14/2011 7:12 AM, W. Craig Carter wrote:
> On Jul 14, 2011, at 11:21 AM, Richard Fateman<fateman at cs.berkeley.edu> wrote:
>> Learning mathematics from a physicist is hazardous.
>> Learning computer science from a physicist is hazardous too.
>> Numbers in a computer are different from experimental measurements.
> I believe that I won't be the only one who objects to this hyperbole and limited world view. I've learned poor-quality math from poor-quality instructors---but this is not "hazardous." I've learned what I believe to be high-quality math from excellent physics and mathematical physics instructors.
Generally hazardous; you may, however, be lucky. I too have had some
excellent physics instructors, but judging from papers I've read and
programs written by physicists that I've examined, it is generally
hazardous. By the way, I was a physics major as an undergraduate.
> Every experimental measurement of a continuously varying quantity will have some imprecision. Entering that number into a computer may or may not add to the imprecision.
At this point the notion of precision is philosophically different.
Assume we are talking about floating-point representation, and that the
number is scaled so that it is within the range of some floating-point
format. The external number can then be stored in the computer with at
most one rounding error, good to half a unit in the last place (ULP).
The precision of this number is related to the floating-point data
format, and varies with the choice of single, double, extended, or
software arbitrary precision.
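As an illustration (a minimal Python sketch of my own, not from the original post), the half-ULP bound can be checked directly for the external number 0.1, which has no exact binary64 representation:

```python
import math
from decimal import Decimal

x = 0.1                     # external decimal number, entered once
stored = Decimal(x)         # the exact binary64 value actually stored
intended = Decimal("0.1")   # the number we meant to enter
error = abs(stored - intended)

# One conversion means at most one rounding, bounded by half a ULP.
half_ulp = Decimal(math.ulp(x)) / 2
print(error > 0)            # True: 0.1 is not exactly representable
print(error <= half_ulp)    # True: the one rounding is within half a ULP
```

Any decimal literal entered once incurs at most this single rounding error; it is subsequent arithmetic that can accumulate more.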
If, additionally, you wish to represent the uncertainty [[NOT
PRECISION]] of the number in the computer, you can do so by storing
more data in the computer.
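For example (a hypothetical sketch; the class `Uncertain` and the field names are my own, not anything from the thread), one way to store that extra data is to carry a standard uncertainty alongside each value and propagate it with first-order rules, assuming independent inputs:

```python
import math
from dataclasses import dataclass

@dataclass
class Uncertain:
    """A measured value plus its standard uncertainty, stored as extra data."""
    value: float
    sigma: float  # standard uncertainty, assumed independent between operands

    def __add__(self, other):
        # independent absolute errors add in quadrature
        return Uncertain(self.value + other.value,
                         math.hypot(self.sigma, other.sigma))

    def __mul__(self, other):
        # independent relative errors add in quadrature (first order)
        v = self.value * other.value
        rel = math.hypot(self.sigma / self.value, other.sigma / other.value)
        return Uncertain(v, abs(v) * rel)

length = Uncertain(2.0, 0.1)   # e.g. a ruler reading, +/- 0.1
width = Uncertain(3.0, 0.1)
area = length * width          # value 6.0, sigma ~ 0.36
total = length + width         # value 5.0, sigma ~ 0.14
```

This is only the textbook first-order propagation; as discussed below, it quietly assumes the operands are uncorrelated.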
> Keeping track of the sources of imprecision is what a good experimentalist does.
OK. I would add that estimating or bounding the error in a
computational result is often a goal of a numerical computational
scientist. The estimate depends on both the uncertainty of the input
and the computational steps performed.
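One concrete way to bound (rather than estimate) the computed error is interval arithmetic. Here is a toy Python sketch of my own; a rigorous implementation would also need directed rounding at each operation, which this omits:

```python
# Each quantity is an enclosure (lo, hi) guaranteed to contain the true value.
def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # the product of two intervals is bounded by the extreme endpoint products
    products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(products), max(products))

x = (1.9, 2.1)      # input known only to within +/- 0.1
y = (2.9, 3.1)
area = imul(x, y)   # every possible true product lies in (5.51, 6.51)
total = iadd(x, y)  # every possible true sum lies in (4.8, 5.2)
```

The width of the resulting interval is a guaranteed (if often pessimistic) bound on the combined effect of input uncertainty and the operations performed.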
> Of course, an experiment is different from a number; but the experimentally determined number and its sources of imprecision in a computer is just another number on the computer.
Yes, though I would again not use "imprecision" but "uncertainty". I
would also distinguish between the uncertainty of the input and the
potential for subsequent computational errors.
> There is no difference in meaning unless the precision of the experiment exceeds that of the measurement.
You lost me here. Or perhaps you mean something like taking 10
measurements and computing an average, to get an extra decimal digit?
I don't see how that is related to numerical computation (as on a
computer).
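For what it's worth, the averaging interpretation is easy to check numerically: the standard error of the mean of n independent measurements shrinks like sigma/sqrt(n). A small Monte Carlo sketch of my own, assuming independent Gaussian noise:

```python
import random
import statistics

random.seed(0)
sigma = 1.0   # assumed uncertainty of a single measurement
n = 100       # measurements averaged together

# Repeat the whole experiment many times and look at the spread of the
# averages: it should be close to sigma / sqrt(n) = 0.1.
means = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
         for _ in range(2000)]
spread = statistics.stdev(means)
```

Averaging 100 measurements buys one extra decimal digit, but only if the measurement errors really are independent.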
Fundamentally, the rules of thumb you learn in an undergraduate
physics lab about uncertainties in reading a meter stick or a
thermometer, etc., more or less work if you are doing only a few
arithmetic operations on the numbers and you know personally that they
are not correlated. If you put a few thousand numbers into a computer
and do a few billion arithmetic operations on them, the same rules
don't work so well.
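A tiny Python illustration of the correlation problem (my own example): the quadrature rule for a difference predicts a spread of sqrt(2)*sigma, but when the two operands are perfectly correlated the true spread is zero:

```python
import math
import random

sigma = 0.5

# Rule of thumb for a difference: independent errors add in quadrature.
naive = math.hypot(sigma, sigma)   # predicts ~0.707

# But if both operands are the *same* measured number (perfect
# correlation), the difference x - x is exactly zero every time.
random.seed(1)
samples = [random.gauss(10.0, sigma) for _ in range(10_000)]
worst = max(abs(x - x) for x in samples)   # 0.0: no spread at all
```

Inside a long computation, such correlations arise constantly and invisibly, which is why the lab rules of thumb degrade after billions of operations.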
RJF