
MathGroup Archive 2011


Re: Numerical accuracy/precision - this is a bug or a feature?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg120204] Re: Numerical accuracy/precision - this is a bug or a feature?
  • From: Noqsi <noqsiaerospace at gmail.com>
  • Date: Wed, 13 Jul 2011 03:10:18 -0400 (EDT)
  • References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net> <iv9ehj$dct$1@smc.vnet.net>

On Jul 9, 4:37 am, "Oleksandr Rasputinov"
<oleksandr_rasputi... at hmamail.com> wrote:
> On Fri, 08 Jul 2011 09:51:53 +0100, Noqsi <noqsiaerosp... at gmail.com> wrote:
> > On Jul 7, 5:40 am, "slawek" <sla... at host.pl> wrote:
>
> >> The convention that 2.0 is less accurate than 2.00 is applied ONLY in
> >> Mathematica (the computer program).
>
> > Not true. This is a long-standing convention in experimental science.
>
> Unfortunately so, given that it is severely erroneous:

Such an unqualified statement is unjustified. Whether the convention
produces troublesome error depends on what you're doing.

> see e.g.  
> <http://www.av8n.com/physics/uncertainty.htm>.

I have some sympathy with this point of view. But I recall a rather
vehement dispute between two colleagues over a tame case in which
recording only significant digits was perfectly adequate. That dispute
poisoned human relationships unnecessarily. The real folly is to be
ideological about this, instead of understanding what you're doing
*from the perspective of the problem domain*. One size does not fit all.
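To make the convention under dispute concrete: writing 2.0 conventionally implies an uncertainty of half a unit in the last quoted place (±0.05), while 2.00 implies ±0.005. A minimal sketch of that reading, in Python rather than Mathematica (the helper name `implied_uncertainty` is mine, purely for illustration):

```python
from decimal import Decimal

def implied_uncertainty(text):
    """Half a unit in the last quoted decimal place of `text`,
    per the significant-digits convention."""
    d = Decimal(text)
    # exponent of the last quoted digit, e.g. "2.00" -> -2
    return Decimal(1).scaleb(d.as_tuple().exponent) / 2

print(implied_uncertainty("2.0"))   # 0.05
print(implied_uncertainty("2.00"))  # 0.005
```

This is the zeroth-order reading of written precision that the linked page criticizes; it says nothing about how that uncertainty propagates through calculations.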

> However, Mathematica's  
> approximation of how these uncertainties propagate is first-order, not 
> zeroth-order. This does not make it completely reliable, of course, but 
> certainly it is not almost always wrong as is the significant digits  
> convention. Within the bounds of its own applicability, Mathematica's  
> approximation is reasonable, although it would still be a mistake to apply  
> it to experimental uncertainty analysis given the much broader scope of 
> the latter.

Experimental uncertainty analysis almost always involves
approximation. Avoiding shortcuts in the statistical formulation can
lead to extremely large calculations even when the data set is very
small. See, for example, Loredo and Lamb 1989, bibcode 1989NYASA.571..601L.
This example still required numerical approximation. In
this case, I think the enormous human and computational effort was
justified, but the appropriateness of any particular approach depends
on the problem to be solved.
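The first-order propagation Oleksandr refers to is, in essence, the linearization sigma_f ~ |f'(x)| * sigma_x. The sketch below is not Mathematica's significance arithmetic, just a Python illustration of that idea, contrasted with the zeroth-order "keep the same number of digits" rule (the function name `first_order_sigma` is mine):

```python
def first_order_sigma(f, x, sigma_x, h=1e-6):
    """First-order (linearized) propagated uncertainty:
    sigma_f ~ |f'(x)| * sigma_x, with f' estimated by a
    central finite difference."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return abs(deriv) * sigma_x

# Example: y = x^3 at x = 2.00, implied uncertainty 0.005.
# The significant-digits rule would report 8.00, i.e. +/- 0.005;
# first-order propagation gives |3 x^2| * 0.005 = 0.06, twelve
# times larger.
sigma_y = first_order_sigma(lambda x: x**3, 2.00, 0.005)
print(round(sigma_y, 4))  # ~ 0.06
```

Both are approximations; the first-order form fails in turn when f is strongly nonlinear over the uncertainty interval, which is one reason neither substitutes for a full statistical treatment of the kind cited above.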


