MathGroup Archive 2002


RE: Re: re: Accuracy and Precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg37203] RE: [mg37177] Re: re: Accuracy and Precision
  • From: "DrBob" <drbob at bigfoot.com>
  • Date: Wed, 16 Oct 2002 14:26:29 -0400 (EDT)
  • Reply-to: <drbob at bigfoot.com>
  • Sender: owner-wri-mathgroup at wolfram.com

You're using SetPrecision when infinite precision is a meaningful option
-- when there's no doubt about the coefficients and powers in the
series.  Bignums clearly make the computation faster in that case.

However, if the coefficients and powers of your example series were not
perfectly known, what then?  If they begin life as machine numbers,
adding arbitrary digits serves no purpose.  Yes, plots may get smoother
as more digits are added, but they would not converge to a "correct"
result -- merely to a precise one.

(In the chemistry industry where my wife works, the difference between
accuracy and precision is well known.  Precision means getting the same
answer over and over --- whether it's right or not.  Accuracy means
getting the right answer --- whether it's precise or not.  It's low
variance versus small bias.)

Modify your example like this:

ser = N@Normal[Series[Cos[#], {#, 0, 200}]];
Timing[pts = With[{ss = ser},
    Table[SetPrecision[{#, ss}, 80] &@x, {x, 50., 70., .1}]];]
ListPlot[pts, PlotJoined -> True, PlotRange -> All];
MaxMemoryUsed[]

Once the series coefficients have lost precision, you can't get it back
again.  Furthermore, in using SetPrecision, there's a danger that one
could THINK he has regained it.
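
A minimal sketch of that danger (Pi here is just a stand-in for a quantity
whose true value happens to be known; the numbers are mine, not from the
thread):

    exact = N[Pi, 40];                  (* 40 correct digits *)
    machine = N[Pi];                    (* machine precision, ~16 correct digits *)
    padded = SetPrecision[machine, 40]; (* now displays 40 digits... *)
    exact - padded                      (* ...but the error is ~10^-16, not 10^-40 *)

SetPrecision pads the machine value with binary zeros, so Precision[padded]
reports 40 even though nothing beyond the original 16 digits is meaningful.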

Bobby

-----Original Message-----
From: Allan Hayes [mailto:hay at haystack.demon.co.uk] 
To: mathgroup at smc.vnet.net
Subject: [mg37203] [mg37177] Re: re: Accuracy and Precision


"Mark Coleman" <mark at markscoleman.com> wrote in message
news:aobg22$hrn$1 at smc.vnet.net...
> Greetings,
>
> I have read with great interest this lively debate on numerical
prcesion
and
> accuracy. As I work in the fields of finance and economics, where we
feel
> ourselves blessed if we get three digits of accuracy, I'm curious as
to
what
> scientific endeavors require 50+ digits of precision? As I recall
there
are
> some areas, such as high energy physics and some elements of
astronomy,
that
> might require so many digits in some circumstances. Are there others?
>
> Thanks
>
> -Mark


Mark,

There may be occasions when the outcome of a "real" process is so
sensitive to changes in input that unless we know very precisely what the
input is, we can know very little about the outcome - chaotic processes
are of this kind. The difficulty is real, and no amount of computer power
or clever programming will do much about it.
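
(Not from Allan's post, but a standard illustration of this sensitivity is
the logistic map: two starting values that agree to ten digits end up in
completely different places after a few dozen iterations, so the outcome
can only be known as well as the input is known.)

    orbit[x0_] := NestList[4 # (1 - #) &, x0, 60];
    Last[orbit[N[1/3, 40]]] - Last[orbit[N[1/3 + 10^-10, 40]]]
    (* the difference is of order 1, not of order 10^-10 *)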

Another situation is when the process is not so sensitive, but calculating
with our formula or programme introduces and accumulates significant
errors.

Here is a very artificial example of the latter (I time the computation and
find the maximum memory used in the session as we go through the example):

    ser=Normal[Series[Cos[#],{#,0,200}]];

    MaxMemoryUsed[]

        1714248

Calculating with machine numbers does not show much of a pattern (I have
deleted the graphics - please evaluate the code):


    pts= With[{ss=ser},Table[ {#,ss}&[x],
          {x,50.,70., .1}]];//Timing
    ListPlot[pts, PlotJoined->True];
    MaxMemoryUsed[]

        {5.11 Second,Null}

        1723840

Using bigfloat inputs with precision 20 shows some pattern:

    pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,20]],
          {x,50.,70., .1}]];//Timing
    ListPlot[pts, PlotJoined->True, PlotRange->All];
    MaxMemoryUsed[]

        {17.52 Second,Null}

        1759664


Precision 40 does very well:

    pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,40]],
          {x,50.,70., .1}]];//Timing
    ListPlot[pts, PlotJoined->True, PlotRange->All];
    MaxMemoryUsed[]

        {19.38 Second,Null}

        1797072

Now we might think the correct outcomes are showing up, and use an
interpolating function for further, and faster, calculation.

    f=Interpolation[pts]

        InterpolatingFunction[{{50.000000,70.00000}},<>]

    pts= Table[ f[x],{x,50, 70, .1}];//Timing
    ListPlot[pts, PlotJoined->True, PlotRange->All];
    MaxMemoryUsed[]

        {0.33 Second,Null}


As a matter of interest, this is what happens if we substitute exact
numbers (rationals and integers) for the reals -- the computation takes an
excessively long time and quite a bit more memory.

    pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,Infinity]],
          {x,50.,70., .1}]];//Timing
    ListPlot[pts, PlotJoined->True, PlotRange->All];
    MaxMemoryUsed[]

        {992.28 Second,Null}

        2413808

This also shows that we may in fact want to replace exact inputs with
bigfloats.
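
A small sketch of that suggestion (the single test point 617/10 is my own
choice, not from the post): evaluate the exact series once at an exact
rational and once at a 50-digit bigfloat version of the same input.

    ser = Normal[Series[Cos[#], {#, 0, 200}]];   (* as above *)
    With[{ss = ser}, Timing[exact = ss &[617/10];]]       (* exact input: noticeably slow *)
    With[{ss = ser}, Timing[big = ss &[N[617/10, 50]];]]  (* bigfloat input: fast *)
    Precision[big]   (* digits surviving the cancellation in the alternating sum *)

The bigfloat run gives the same leading digits in a fraction of the time;
roughly half of the 50 input digits survive the cancellation among the
large alternating terms.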


I should be interested to hear of other examples, really "real" ones in
particular. I imagine that there are many situations where trends and
shapes are more important than specific values.

--
Allan

---------------------
Allan Hayes
Mathematica Training and Consulting
Leicester UK
www.haystack.demon.co.uk
hay at haystack.demon.co.uk
Voice: +44 (0)116 271 4198
Fax: +44 (0)870 164 0565

