MathGroup Archive 2002

RE: Re: re: Accuracy and Precision

  • To: mathgroup at smc.vnet.net
  • Subject: [mg37208] RE: [mg37177] Re: re: Accuracy and Precision
  • From: "DrBob" <drbob at bigfoot.com>
  • Date: Wed, 16 Oct 2002 14:27:02 -0400 (EDT)
  • Reply-to: <drbob at bigfoot.com>
  • Sender: owner-wri-mathgroup at wolfram.com

If we have inaccurately known parameters, I think Interval arithmetic
does a far better job of assessing the situation.  As for "impossible"
demands on memory and time, the computation that took 992.3 seconds for
you took 32.8 seconds for me.  Anyway, it can be done faster AND more
accurately without bignums:

(* ser is the exact order-200 series for Cos from your example *)
Timing[pts = With[{ss = ser}, Table[({#1, ss} &)[x], {x, 50, 70, 1/10}]];]
ListPlot[pts, PlotJoined -> True, PlotRange -> All]; 
MaxMemoryUsed[]

{10.640999999999998*Second,   Null}
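
For the Interval point, here is a minimal sketch of what I mean; the
parameter value and the +/- 0.05 tolerance are invented, purely for
illustration.  Each operation returns an Interval that bounds every
possible result:

x0 = Interval[{57.95, 58.05}];   (* invented: parameter known only to +/- 0.05 *)
Cos[x0]
x0^2 - 2*x0    (* each occurrence of x0 is bounded independently *)

(Because repeated occurrences are bounded independently, the result can
be wider than strictly necessary, but it never understates the
uncertainty.)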

In any case, we spent far more time writing code and evaluating results
than waiting on execution.

If anything, your examples suggest only that machine precision AND
bignum computations are suspect.  The results may or may not be worth
the pixels they take up on my screen, and unless I compute in some
alternative way instead -- or use progressively more digits in bignums
until things settle down -- I can only guess at their reliability.
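
On the "progressively more digits until things settle down" idea, here
is a minimal sketch of such a check, assuming the expression is exactly
known (otherwise there are no extra digits to be had).  The helper name
and the tolerance are mine, not anything built in:

stableN[expr_, tol_: 10^-10] :=
  Module[{prec = 20, prev, curr},
    prev = N[expr, prec];
    curr = N[expr, 2*prec];
    (* keep doubling the working precision until two successive results agree *)
    While[Abs[curr - prev] > tol,
      prec *= 2; prev = curr; curr = N[expr, 2*prec]];
    curr]

stableN[Normal[Series[Cos[x], {x, 0, 200}]] /. x -> 60]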

For an application such as your example, I think the best solution is to
use infinite precision for a limited number of points, and then
Interpolation.  It's safer than using SetPrecision because it doesn't
involve guessing how many digits of precision to use, and it's far
faster because it doesn't involve testing higher and higher levels of
precision.  The choice of points for exact computation may be tricky,
but there are adaptive algorithms for that.  Here's an interesting way
to proceed, for instance:

(* exact order-200 series, evaluated exactly at a coarse grid of rationals *)
ser = Normal[Series[Cos[x], {x, 0, 200}]];
Timing[pts = Table[{x, ser}, {x, 50, 70, 1/2}];]
f = Interpolation[pts];
(* let Plot choose sample points from the interpolant, pull out their x
   coordinates, then evaluate the exact series at rationalized versions of them *)
Timing[plot1 = Plot[f[x], {x, 50, 70}, PlotPoints -> 30, PlotDivision -> 3];]
Cases[plot1, Line[a__] -> a, Infinity][[1, All, 1]];
Timing[newPts =
      Union[pts, ({x, ser} /. x -> #) & /@ (Rationalize[#, 1/100] & /@ %)];]
g = Interpolation[newPts, InterpolationOrder -> 5];
plot1 = Plot[Cos[x] - g[x], {x, 50, 70}, PlotRange -> All];

{1.703000000000003*Second,   Null}
{0.1560000000000059*Second,   Null}
{4.968999999999994*Second,   Null}
{0.546999999999997*Second,   Null}

Length[pts]
Length[newPts]

41
124

I used only a few points for the first plot and it already looked good.
Just to be sure, I used Plot to select more points, and used infinite
precision computation again for those points.  The final Plot shows
error limited to about 10^-6.  Increasing InterpolationOrder decreases
errors significantly, too, at fairly small cost.
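
For instance, here is a quick way to check that, reusing newPts from
above; g9 is just a name I made up for the higher-order interpolant:

g9 = Interpolation[newPts, InterpolationOrder -> 9];
(* maximum absolute error of each interpolant against Cos on a fine grid *)
{Max[Table[Abs[Cos[x] - g[x]], {x, 50., 70., 0.01}]],
 Max[Table[Abs[Cos[x] - g9[x]], {x, 50., 70., 0.01}]]}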

Bobby

-----Original Message-----
From: Allan Hayes [mailto:hay at haystack.demon.co.uk] 
To: mathgroup at smc.vnet.net
Subject: [mg37208] Re: [mg37177] Re: re: Accuracy and Precision


Bobby,

You rightly point out that care should be exercised when using (high
precision) bigfloats, but this should not obscure the proper use of them.
I have suggested some uses that are valid subject to circumstances
(raising precision) or essential (converting exact numbers to bigfloats
to avoid impossible demands on memory and time) - Daniel Lichtblau gave
others.

> However, if the coefficients and powers of your example series were not
> perfectly known, what then?

We need to distinguish between having an exact value which is imperfectly
known and not having an exact value - having a range of values.

If I may speculate a little more on "real" uses:
- If we have inaccurately known parameters that do have definite values,
we may still want to calculate accurately over the possible ranges of the
parameters; and if the definite values give distinctive outcomes, then
testing with high accuracy inputs is a way of getting a more accurate
determination of the real value - rather like using an inverse function.
- If parameters do not have a definite value then we are into statistics;
however, we might still need to know the outcomes of inputting accurate
values to get an idea of the behaviour of the process.


Allan

---------------------
Allan Hayes
Mathematica Training and Consulting
Leicester UK
www.haystack.demon.co.uk
hay at haystack.demon.co.uk
Voice: +44 (0)116 271 4198
Fax: +44 (0)870 164 0565


----- Original Message -----
From: "DrBob" <drbob at bigfoot.com>
To: mathgroup at smc.vnet.net
Subject: [mg37208] RE: [mg37177] Re: re: Accuracy and Precision


> You're using SetPrecision when infinite precision is a meaningful option
> -- when there's no doubt about the coefficients and powers in the
> series.  Bignums clearly make the computation faster in that case.
>
> However, if the coefficients and powers of your example series were not
> perfectly known, what then?  If they begin life as machine numbers,
> adding arbitrary digits serves no purpose.  Yes, plots may get smoother
> as more digits are added, but they would not converge to a "correct"
> result -- merely to a precise one.
>
> (In the chemistry industry where my wife works, the difference between
> accuracy and precision is well known.  Precision means getting the same
> answer over and over --- whether it's right or not.  Accuracy means
> getting the right answer --- whether it's precise or not.  It's low
> variance versus small bias.)
>
> Modify your example like this:
>
> ser = N@Normal[Series[Cos[#], {#, 0, 200}]];
> Timing[pts = With[{ss =
>       ser}, Table[SetPrecision[{#, ss}, 80] &@x, {x, 50., 70., .1}]];]
> ListPlot[pts, PlotJoined -> True, PlotRange -> All];
> MaxMemoryUsed[]
>
> Once the series coefficients have lost precision, you can't get it back
> again.  Furthermore, in using SetPrecision, there's a danger that one
> again.  Furthermore, in using SetPrecision, there's a danger that one
> could THINK he has regained it.
>
> Bobby
>
> -----Original Message-----
> From: Allan Hayes [mailto:hay at haystack.demon.co.uk]
> Sent: Tuesday, October 15, 2002 3:18 AM
> To: mathgroup at smc.vnet.net
> Subject: [mg37208] [mg37177] Re: re: Accuracy and Precision
>
>
> "Mark Coleman" <mark at markscoleman.com> wrote in message
> news:aobg22$hrn$1 at smc.vnet.net...
> > Greetings,
> >
> > I have read with great interest this lively debate on numerical
> > precision and accuracy. As I work in the fields of finance and
> > economics, where we feel ourselves blessed if we get three digits of
> > accuracy, I'm curious as to what scientific endeavors require 50+
> > digits of precision? As I recall there are some areas, such as high
> > energy physics and some elements of astronomy, that might require so
> > many digits in some circumstances. Are there others?
> >
> > Thanks
> >
> > -Mark
>
>
> Mark,
>
> There may be occasions when the outcome of a "real" process is so
> sensitive to changes in input that unless we know very precisely what
> the input is then we can know very little about the outcome - chaotic
> processes are of this kind. The difficulty is real and no amount of
> computer power or clever programming will do much about it.
>
> Another situation is when the process is not so sensitive but
> calculating with our formula or programme introduces or accumulates
> significant errors.
>
> Here is a very artificial example of the latter (I time the computation
> and find the maximum memory used in the session as we go through the
> example):
>
>     ser=Normal[Series[Cos[#],{#,0,200}]];
>
>     MaxMemoryUsed[]
>
>         1714248
>
> Calculating with machine numbers does not show much of a pattern (I
> have deleted the graphics - please evaluate the code):
>
>
>     pts= With[{ss=ser},Table[ {#,ss}&[x],
>           {x,50.,70., .1}]];//Timing
>     ListPlot[pts, PlotJoined->True];
>     MaxMemoryUsed[]
>
>         {5.11 Second,Null}
>
>         1723840
>
> Using bigfloat inputs with precision 20 shows some pattern:
>
>     pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,20]],
>           {x,50.,70., .1}]];//Timing
>     ListPlot[pts, PlotJoined->True,PlotRange\[Rule]All];
>     MaxMemoryUsed[]
>
>         {17.52 Second,Null}
>
>         1759664
>
>
> Precision 40 does very well:
>
>     pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,40]],
>           {x,50.,70., .1}]];//Timing
>     ListPlot[pts, PlotJoined->True,PlotRange\[Rule]All];
>     MaxMemoryUsed[]
>
>         {19.38 Second,Null}
>
>         1797072
>
> Now we might think the correct outcomes are showing up, and use an
> interpolating function for further, and faster, calculation.
>
>     f=Interpolation[pts]
>
>         InterpolatingFunction[{{50.000000,70.00000}},<>]
>
>     pts= Table[ f[x],{x,50, 70, .1}];//Timing
>     ListPlot[pts, PlotJoined->True,PlotRange\[Rule]All];
>     MaxMemoryUsed[]
>
>         {0.33 Second,Null}
>
>
> As a matter of interest, this is what happens if we substitute exact
> numbers (rationals and integers) for reals -- the computation takes an
> excessively long time and quite a bit more memory.
>
>     pts= With[{ss=ser},Table[ {#,ss}&[SetPrecision[x,Infinity]],
>           {x,50.,70., .1}]];//Timing
>     ListPlot[pts, PlotJoined->True,PlotRange\[Rule]All];
>     MaxMemoryUsed[]
>
>         {992.28 Second,Null}
>
>         2413808
>
> This also shows that we may in fact want to replace exact inputs with
> bigfloats.
>
>
> I should be interested to hear of other examples, really "real" ones in
> particular. I imagine that there are many situations where trends and
> shapes are more important than specific values.
>
> --
> Allan
>
> ---------------------
> Allan Hayes
> Mathematica Training and Consulting
> Leicester UK
> www.haystack.demon.co.uk
> hay at haystack.demon.co.uk
> Voice: +44 (0)116 271 4198
> Fax: +44 (0)870 164 0565
>
>
> >
> > <snip>
> >





