On Thu, Feb 16, 2012 at 5:20 PM, Pauli Virtanen <[email protected]> wrote:
> Hi,
>
> 16.02.2012 18:00, Nathaniel Smith wrote:
> [clip]
>> I agree, but the behavior is still surprising -- people reasonably
>> expect something like svd to be deterministic. So there's probably a
>> doc bug for alerting people that their reasonable expectation is, in
>> fact, wrong :-).
>
> The problem here is that these warnings should in principle appear in
> the documentation of every numerical algorithm that contains branches
> chosen on the basis of floating-point data. For example, optimization
> algorithms --- they terminate once a tolerance is satisfied, so the
> results can contain similar quasi-random error much larger than the
> rounding error, tol > |err| >> eps.
>
> Floating point sucks, it's full of gotchas for all ages :(
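[The quoted point about tolerance-based stopping can be sketched in a few lines of Python -- an illustrative example, not code from the thread. Bisection converges linearly, so the final error is set by the stopping tolerance, not by rounding:]

```python
import math

def bisect_sqrt(x, tol):
    """Approximate sqrt(x) by bisection on [1, x], stopping when the
    bracket is narrower than tol."""
    a, b = 1.0, x
    while b - a > tol:
        m = 0.5 * (a + b)
        if m * m < x:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

eps = 2.0 ** -52                      # double-precision machine epsilon
approx = bisect_sqrt(2.0, tol=1e-6)
err = abs(approx - math.sqrt(2.0))

# The remaining error is on the order of the stopping tolerance,
# far above rounding error: tol > |err| >> eps, as described above.
print(f"err = {err:.2e}, eps = {eps:.2e}")
```

[Here err comes out around 1e-7, some nine orders of magnitude above eps -- the "quasi-random error much larger than the rounding error" Pauli describes.]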
Yes, and maybe I'm just projecting my own particular naivete... I'm very
familiar with numerical stability and rounding as issues, and of course
optimization-based algorithms have the issue you raise. I'm still
surprised to learn that on a single machine, with bit-identical inputs,
using a mature low-level routine like svd, you can get *qualitatively*
different results depending on memory alignment. (I wouldn't expect
dense SVD to use a fixed-tolerance optimization routine either!)

-- Nathaniel

_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion
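[One reason bit-identical SVD factors are not something the routine can promise, sketched in numpy -- an illustrative example, not code from the thread: even with distinct singular values, the factorization is only unique up to a per-column sign choice, so two equally valid code paths can legitimately return different factor matrices for the same input.]

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Flipping the sign of a matched U-column / Vt-row pair yields another
# equally valid SVD of the same matrix: the factors themselves are not
# uniquely determined, only the product U @ diag(s) @ Vt is.
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1

assert np.allclose(U @ np.diag(s) @ Vt, A)
assert np.allclose(U2 @ np.diag(s) @ Vt2, A)
```

[Which of the sign choices a branchy low-level kernel happens to make can then depend on execution details such as the code path selected at runtime -- which is where data alignment can enter the picture.]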
