MathGroup Archive 2004


Setting the acceptable level of accuracy when comparing two algorithms' results?


   I use Mathematica as a verification tool for my software's
algorithms.  Theoretically, if Mathematica and I implemented the same
algorithm and I gave each the same input data, then the results should
be the same.  My question is: what level of accuracy verifies they are
"the same"?  An error margin of plus or minus ten to the negative
sixth?  Ten to the negative ninth?  Is there any industry standard
regarding this?  Are there known limitations of Mathematica or of
computer processors that can tell me what is actual error
(possible bugs in my algorithm) and what is normal rounding error?
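One common way to phrase "the same" for floating-point results is a combined relative/absolute tolerance test (the same idea as Python's `math.isclose`). A minimal sketch, assuming double-precision results; the function name `close_enough` and the particular tolerance values are illustrative, not a standard:

```python
def close_enough(a, b, rel_tol=1e-9, abs_tol=1e-12):
    # Relative tolerance scales with the magnitude of the values, so it
    # works for both large and small results; the absolute tolerance
    # handles comparisons near zero, where relative error is undefined
    # or blows up.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Doubles carry roughly 15-16 significant decimal digits, so a relative
# tolerance near 1e-9 leaves headroom for benign rounding differences
# while still flagging most genuine algorithmic discrepancies.
print(close_enough(1.0e6, 1.0e6 + 1e-4))  # True: tiny relative error
print(close_enough(1e-15, 0.0))           # True: absolute tolerance near zero
print(close_enough(0.1 + 0.2, 0.3))       # True: ordinary rounding noise
print(close_enough(1.0, 1.001))           # False: likely a real difference
```

A fixed threshold like 1e-6 or 1e-9 only makes sense relative to the magnitude and conditioning of the quantities being compared, which is why a scaled test like this is usually preferred over a bare absolute difference.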

Thanks for pondering my query!
