Setting the acceptable level of accuracy when comparing two algorithms results?
Hello,

I use Mathematica as a verification tool for my software's algorithms. In theory, if Mathematica and I implement the same algorithm and I give each the same input data, the results should be the same. My question is: what level of accuracy verifies that they are "the same"? An error margin of plus or minus ten to the negative sixth? Ten to the negative ninth? Is there any industry standard regarding this? Are there known limitations of Mathematica, or of computer floating-point hardware, that can tell me what is actual error (a possible bug in my algorithm) and what is ordinary rounding error?

Thanks for pondering my query!

Eric
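A common practice for this kind of verification (not an official standard, just a sketch) is to compare results with a combined relative and absolute tolerance rather than a single fixed margin, since a fixed 1e-6 cutoff behaves very differently for results near 1 than for results near 1e9. The function name `results_match` and the tolerance values below are illustrative choices, not anything prescribed by Mathematica; the comparison rule is the same one used by Python's `math.isclose`:

```python
def results_match(a, b, rel_tol=1e-9, abs_tol=1e-12):
    """Return True if a and b agree to within a relative tolerance
    (scaled by the larger magnitude) or, for values near zero,
    an absolute tolerance. Tolerances here are illustrative."""
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# Two implementations of the same sum, accumulated in different
# orders, typically differ only by rounding error at the level of
# machine epsilon (~1e-16 relative for IEEE 754 doubles).
xs = [0.001 * i for i in range(1000)]
forward = sum(xs)
backward = sum(reversed(xs))
print(results_match(forward, backward))

# A genuine discrepancy far above rounding level is flagged:
print(results_match(1.0, 1.0001))
```

For IEEE 754 double precision, differences near 1e-16 relative error are expected from reordered arithmetic alone, so a relative tolerance somewhere around 1e-9 to 1e-12 is a reasonable first screen; disagreements much larger than that usually point to an algorithmic difference rather than rounding.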