I just came across a real head-scratcher that took me a bit to figure out. I don't know if it counts as a bug or not.
I have an array with dtype "f4" and a separate numpy float64 scalar. Some elements of this array get assigned this scalar value. (I know, I would be better off with a mask, but bear with me; this is just demonstration code to isolate the core problem from a much more complicated program...)

    import numpy as np
    a = np.empty((500, 500), dtype='f4')
    a[:] = np.random.random(a.shape)
    bad_val = 10 * a.max()
    b = np.where(a > 0.8, bad_val, a)

Now, the following seems to always evaluate to False, as expected:

>>> np.any(b > bad_val)

but, if I am (un)lucky enough, this will sometimes evaluate to True:

>>> any([(c > bad_val) for c in b.flat])

It looks to me as if, in the first comparison, bad_val is cast down to float32 (or maybe b is cast up to float64?), while in the second the opposite happens. This can lead to some unexpected behavior. Is there some sort of difference between the type-casting of numpy scalars and numpy arrays? I would expect both to behave the same.

Ben Root
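
P.S. Here is a deterministic sketch of what I suspect is going on. The constant is one I picked by hand rather than anything from the real program: float32 spacing near 1.0 is 2**-23, so 1 + 3*2**-25 lies just above the rounding midpoint and rounds up when stored as 'f4'.

    import numpy as np

    # Hand-picked float64 value that rounds UP when converted to float32,
    # so float32(bad_val) > bad_val when both are compared in float64.
    bad_val = np.float64(1 + 3 * 2**-25)

    b = np.zeros(4, dtype='f4')
    b[:] = bad_val               # assignment stores the rounded float32 value

    # Scalar vs. scalar: promoted to float64, so the overshoot is visible.
    print(np.float32(bad_val) > bad_val)     # True

    # Array vs. float64 scalar: with value-based casting the comparison
    # runs in float32, hiding the overshoot, so this prints False.
    print(np.any(b > bad_val))

    # Iterating over b.flat yields float32 scalars; scalar-vs-scalar
    # comparison is again done in float64, so this prints True.
    print(any(c > bad_val for c in b.flat))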
