Re: Numerical accuracy/precision - this is a bug or a feature?
- To: mathgroup at smc.vnet.net
- Subject: [mg120205] Re: Numerical accuracy/precision - this is a bug or a feature?
- From: Noqsi <noqsiaerospace at gmail.com>
- Date: Wed, 13 Jul 2011 03:10:29 -0400 (EDT)
- References: <ius5op$2g7$1@smc.vnet.net> <ius7b6$30t$1@smc.vnet.net>
On Jul 12, 4:01 am, "slawek" <sla... at host.pl> wrote:
> User "Noqsi" <noqsiaerosp... at gmail.com> wrote in the newsgroup
> message iv6gf9$ru... at smc.vnet.net...
>
> > On Jul 7, 5:40 am, "slawek" <sla... at host.pl> wrote:
>
> >> The convention that 2.0 is less accurate than 2.00 is applied ONLY in
> >> Mathematica (the computer program).
>
> > Not true. This is a long-standing convention in experimental science.
>
> There is no such convention.

You mean you're not familiar with it. But it exists.

> There is always possible that 2.00 +- 0.53 .

That's a different convention. It also exists.

> Nobody should believe that 2.00 is more exact than 2.0 or even 2 . (If so,
> then 2 Pi have got 10% std. dev. ;)

There are different traditions here. Those who use "2.00" to indicate that
further digits are unknown generally do not find "2 Pi" confusing.

The clash of conventions in engineering is more troublesome, though, since
there the units change by factors of 1000 and fractions are often avoided.
So, does "200 mm" mean "0.2 m" or "0.200 m"? Still, this rarely causes
serious confusion.
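
Since the thread is about Mathematica specifically, here is a minimal sketch
of how these conventions map onto its precision model (assuming a standard
interactive session; the exact output formatting may vary between versions):

  (* Backtick notation attaches an explicit precision, so 2.00`3 is an
     arbitrary-precision number known to 3 significant digits: *)
  In[1]:= Precision[2.00`3]
  Out[1]= 3.

  (* A plain machine number such as 2.0 carries no significance
     information; it is simply MachinePrecision: *)
  In[2]:= Precision[2.0]
  Out[2]= MachinePrecision

  (* An exact integer stays exact, so "2 Pi" acquires no 10% std. dev.: *)
  In[3]:= Precision[2]
  Out[3]= Infinity

In other words, the significant-figures reading of "2.00" only enters when
you ask for it (via the backtick or arbitrary-precision input); bare 2.0 and
2.00 are the same machine number.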