2013/8/24 Peter Prettenhofer <peter.prettenho...@gmail.com>:
> can anybody help me understand why the output of the following code snippet
> is different depending on whether it runs on a 32-bit or a 64-bit
> architecture::
>
>     x = np.empty((10 ** 6,), dtype=np.float64)
>     x.fill(1e-9)
>     hash(x.mean())
>
> on 64bit I get: 2475364768
> on 32bit I get: -1839780448
>
> I expected that, given that you set an explicit dtype, the result would be
> the same on either architecture.

Is that Python's built-in hash function? hash() isn't meant to be portable:
for floats it is computed in the platform's native integer width, so 32-bit
and 64-bit builds give different values (and for strings it may even change
between runs of the same interpreter).
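
If you need a digest of the result that is stable across builds, hashing the
raw IEEE-754 bytes is one option. A minimal sketch (the md5/struct choice is
just for illustration, and of course it only matches across platforms if the
computed mean itself comes out bit-identical, which ties into the summation
issue below)::

    import hashlib
    import struct

    import numpy as np

    x = np.empty((10 ** 6,), dtype=np.float64)
    x.fill(1e-9)

    # Hash the IEEE-754 byte pattern of the mean instead of relying on
    # Python's hash(); the digest depends only on the 64-bit value,
    # not on the interpreter's word size.
    digest = hashlib.md5(struct.pack('<d', x.mean())).hexdigest()
    print(digest)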

> Even worse, when I do the following::
>
>     np.sum(x)
>
> I again get different results, but when I do::
>
>     sum(x)
>
> I get equal results... could it be that the temporary variables that numpy
> uses in these routines are platform-dependent, or am I missing something
> here?

How unequal? IIRC, 32-bit x86 builds typically go through the x87 FPU, which
keeps 80-bit extended-precision intermediates, while x86-64 builds use SSE2
and round every intermediate to 64 bits (the 128-bit SSE registers just hold
two packed doubles), so the summation can round differently on the two
architectures.
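
A quick way to see how much the accumulation precision matters is to compare
against an exactly rounded sum; a rough sketch using the standard library's
math.fsum, nothing numpy-specific about the idea::

    import math

    import numpy as np

    x = np.empty((10 ** 6,), dtype=np.float64)
    x.fill(1e-9)

    # np.sum accumulates in whatever precision its reduction loop uses;
    # math.fsum returns the correctly rounded result on every platform,
    # so the difference shows the rounding error of the plain accumulation.
    print(repr(np.sum(x)))
    print(repr(math.fsum(x)))
    print(np.sum(x) - math.fsum(x))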

-- 
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam
