On Thu, Aug 12, 2010 at 12:30 PM, John H Palmieri
<[email protected]> wrote:
> How can I tell if numerical noise is actually noise, or if it is
> indicative of a bug?  For example, with one or two OS/processor
> combinations, I get this (from chmm.pyx):
>
>    sage: m.viterbi([0,1,10,10,1])
> Expected:
>    ([0, 0, 1, 1, 0], -9.0604285688230899)
> Got:
>    ([0, 0, 1, 1, 0], -9.0604285688230917)
>
> Can I tell how accurate this is actually supposed to be?  I can
> certainly just change the doctest to
>
>    ([0, 0, 1, 1, 0], -9.0604285688230...)
>
> but if the code is actually supposed to be accurate to a few more
> decimal places, this is concealing a bug.  Sometimes I can look at the
> code and easily figure out its supposed accuracy, but more frequently
> I can't.  So what should be done in cases like this?
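
The two reported values differ by a single unit in the last place
(ulp), i.e. they are adjacent doubles, so the discrepancy is the
smallest one that double precision can even express at this
magnitude. Here is a throwaway plain-Python check (the helper names
are mine, nothing Sage-specific):

    import struct

    def ulp_distance(a, b):
        """Distance between two finite doubles, counted in
        adjacent representable values (ulps)."""
        def ordinal(x):
            # Reinterpret the IEEE 754 bits as an integer, flipping
            # negatives so consecutive doubles get consecutive ordinals.
            n = struct.unpack('<Q', struct.pack('<d', x))[0]
            return n if n < 1 << 63 else (1 << 63) - n
        return abs(ordinal(a) - ordinal(b))

    print(ulp_distance(-9.0604285688230899, -9.0604285688230917))  # 1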

I'm the upstream author of 100% of this code, and I know precisely
what it does: a sequence of floating-point operations and calls to
the math library, which *of course* can be machine dependent.

Put in dots.
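
Here is a self-contained way to convince yourself that the dotted
form accepts both machine-dependent outputs, using the ELLIPSIS flag
of Python's doctest module (Sage's doctester enables ellipsis
matching by default, as far as I know):

    import doctest

    checker = doctest.OutputChecker()
    want = "([0, 0, 1, 1, 0], -9.0604285688230...)\n"
    for got in ("([0, 0, 1, 1, 0], -9.0604285688230899)\n",
                "([0, 0, 1, 1, 0], -9.0604285688230917)\n"):
        # Under ELLIPSIS, "..." in the expected output matches any text.
        print(checker.check_output(want, got, doctest.ELLIPSIS))

    # Prints True twice: both results pass the same dotted doctest.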

 -- William
