Hi,
On Sat, Nov 5, 2011 at 6:24 PM, Charles R Harris
<[email protected]> wrote:
>
>
> On Fri, Nov 4, 2011 at 5:21 PM, Matthew Brett <[email protected]>
> wrote:
>>
>> Hi,
>>
>> I noticed this:
>>
>> (Intel Mac):
>>
>> In [2]: np.int32(np.float32(2**31))
>> Out[2]: -2147483648
>>
>> (PPC):
>>
>> In [3]: np.int32(np.float32(2**31))
>> Out[3]: 2147483647
>>
>> I assume what is happening is that the casting is handed off to the C
>> library, and that the behavior of the C library differs between these
>> platforms? Should we expect or hope that this behavior would be the
>> same across platforms?
>
> Heh. I think the conversion is basically undefined because 2**31 won't fit
> in int32. The Intel example just takes the bottom 32 bits of 2**31 expressed
> as a binary integer, the PPC throws up its hands and returns the maximum
> value supported by int32. Numpy supports casts from unsigned to signed 32
> bit numbers by using the same bits, as does C, and that would comport with
> the Intel example. It would probably be useful to have a Numpy convention
> for this so that the behavior was consistent across platforms. Maybe for
> float types we should raise an error if the value is out of bounds.
Just to see what happens:
#include <stdio.h>
#include <math.h>

int main(void) {
    double x;
    int y;
    x = pow(2, 31);
    y = (int)x;  /* out of int range -> undefined behavior in C */
    printf("%zu, %d\n", sizeof(int), y);
    return 0;
}
Intel, gcc:
4, -2147483648
PPC, gcc:
4, 2147483647
I think that's what you predicted. Is it strange that the same
compiler gives different results?
It would be good if the behavior were the same across platforms - the
unexpected negative overflow caught me out at least. An error sounds
sensible to me. Would it cost lots of cycles?
Cheers,
Matthew
_______________________________________________
NumPy-Discussion mailing list
[email protected]
http://mail.scipy.org/mailman/listinfo/numpy-discussion