You can't expect sum to always give exactly the same result at this
precision. Depending on how the sum is performed (first to last or last
to first, for instance), you will get this kind of discrepancy.
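Here is a minimal demonstration of that order/grouping sensitivity (not numpy's actual kernels, just the plain Python picture):

```python
import math

# One million copies of 1e-9; the exact sum is 1e-3 (up to the
# representation error of 1e-9 itself as a binary double).
x = [1e-9] * 10**6

naive = sum(x)        # strict left-to-right, rounding after every add
exact = math.fsum(x)  # correctly rounded sum (Shewchuk's algorithm)

print('%.16f' % naive)
print('%.16f' % exact)
# The two differ in the last digits: a million tiny roundings accumulate
# in the naive loop, and any change in summation order or grouping
# shifts the result within that error band.
```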
On 25 Aug 2013 at 08:47, "Peter Prettenhofer" <peter.prettenho...@gmail.com>
wrote:

>
>
>
> 2013/8/24 Lars Buitinck <l.j.buiti...@uva.nl>
>
>> 2013/8/24 Peter Prettenhofer <peter.prettenho...@gmail.com>:
>> > can anybody help me understand why the output of the following code
>> > snippet is different depending on 32-bit or 64-bit architecture::
>> >
>> >     x = np.empty((10 ** 6,), dtype=np.float64)
>> >     x.fill(1e-9)
>> >     hash(x.mean())
>> >
>> > on 64bit I get: 2475364768
>> > on 32bit I get: -1839780448
>> >
>> > I expected that, given the explicit dtype, the result would be equal
>> > on any architecture.
>>
>> Is that Python's built-in hash function? That isn't even guaranteed to
>> be consistent between runs of the same Python interpreter on the same
>> machine (for strings, hash randomization can change it every run).
>>
>
> yes - I used the builtin hash to show it's different (if you print the
> floating-point value you can see the difference as well).
> If hash were not consistent, then pickling dicts would not be a good
> idea... ?
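(For a fingerprint that is stable across architectures and interpreter runs, one option is to hash the raw IEEE-754 bytes of the value instead of relying on the interpreter's word-size-dependent hash(). A sketch, with `float_fingerprint` as a made-up helper name:)

```python
import hashlib
import struct

def float_fingerprint(value):
    """Platform-independent digest of a float: hash its 8 IEEE-754
    bytes in a fixed (little-endian) byte order, so the result does
    not depend on the interpreter's word size or hash seed."""
    return hashlib.sha1(struct.pack('<d', value)).hexdigest()

print(float_fingerprint(0.001))  # same hex string on 32-bit and 64-bit builds
```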
>
>>
>> > Even worse when I do the following::
>> >
>> >     np.sum(x)
>> >
>> > I also get different results, but when I do::
>> >
>> >     sum(x)
>> >
>> > I get equal results... could it be that the temporary variables that
>> > numpy uses in these routines are platform-dependent, or am I missing
>> > something here?
>>
>> How unequal? IIRC, 32-bit x86 builds use the x87 FPU, which carries
>> intermediate results in 80-bit extended precision, while x86-64
>> compilers default to SSE2 and round every operation to a 64-bit double.
>>
>
> on 32bit machine:
>
> In [6]: print('%.16f' % sum(x))
> 0.0010000000000082
>
> In [7]: print('%.16f' % np.sum(x))
> 0.0010000000000000
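That matches the two functions using different summation strategies: Python's sum() is a strict sequential loop, while newer numpy releases use pairwise (cascade) summation for np.sum. A minimal sketch of the pairwise idea (not numpy's actual C kernel) shows how regrouping the additions changes the rounded result:

```python
# Minimal sketch of pairwise (cascade) summation, the strategy newer
# NumPy versions use for np.sum -- shown only to illustrate that the
# *grouping* of additions changes the rounded result.
def pairwise_sum(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    n = hi - lo
    if n <= 8:  # small blocks: plain left-to-right accumulation
        s = 0.0
        for i in range(lo, hi):
            s += a[i]
        return s
    mid = lo + n // 2
    return pairwise_sum(a, lo, mid) + pairwise_sum(a, mid, hi)

x = [1e-9] * 10**6
print('%.16f' % sum(x))           # sequential: error grows like O(n)
print('%.16f' % pairwise_sum(x))  # pairwise: error grows like O(log n)
```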
>
>
>
>>  --
>> Lars Buitinck
>> Scientific programmer, ILPS
>> University of Amsterdam
>>
>>
>> ------------------------------------------------------------------------------
>> Introducing Performance Central, a new site from SourceForge and
>> AppDynamics. Performance Central is your source for news, insights,
>> analysis and resources for efficient Application Performance Management.
>> Visit us today!
>>
>> http://pubads.g.doubleclick.net/gampad/clk?id=48897511&iu=/4140/ostg.clktrk
>> _______________________________________________
>> Scikit-learn-general mailing list
>> Scikit-learn-general@lists.sourceforge.net
>> https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
>>
>
>
>
> --
> Peter Prettenhofer
>
>
