MathGroup Archive 2004

Setting the acceptable level of accuracy when comparing two algorithms' results?

  • To: mathgroup at smc.vnet.net
  • Subject: [mg49838] Setting the acceptable level of accuracy when comparing two algorithms' results?
  • From: tookeey at yahoo.com (Jenet Sinclair)
  • Date: Tue, 3 Aug 2004 01:11:10 -0400 (EDT)
  • Sender: owner-wri-mathgroup at wolfram.com

Hello,

   I use Mathematica as a verification tool for my software's
algorithms.  In theory, if Mathematica and I implement the same
algorithm and give each the same input data, then the results should
be the same.  My question is: what level of accuracy verifies that
they are "the same"?  An error margin of plus or minus ten to the
negative sixth?  Ten to the negative ninth?  Is there any industry
standard regarding this?  Are there known limitations of Mathematica
or of computer processors that can tell me what is actual error
(a possible bug in my algorithm) and what is normal numerical error?
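To make the question concrete, here is a sketch of the kind of test I
have in mind (the function name closeQ, the sample data, and the
default tolerance of ten to the negative ninth are just placeholders,
not a recommendation):

    (* True if a and b agree to within a mixed absolute/relative tolerance *)
    closeQ[a_?NumericQ, b_?NumericQ, tol_:10^-9] :=
      Abs[a - b] <= tol*Max[1, Abs[a], Abs[b]]

    (* hypothetical outputs from my code and from Mathematica *)
    mine      = {1.0000000001, 2.5, 3.141592654};
    reference = {1.0,          2.5, N[Pi]};

    And @@ MapThread[closeQ, {mine, reference}]
    (* --> True *)

The Max[1, ...] term makes the threshold behave as an absolute
tolerance near zero and a relative one for large magnitudes, so a
single setting works across scales.  For what it's worth,
machine-precision arithmetic carries a unit roundoff of
$MachineEpsilon (about 2.2*10^-16), so demanding agreement tighter
than that from machine-precision results would be meaningless.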

Thanks for pondering my query!
Eric

