To elaborate on that point: knowing that numpy accumulates in a simple
first-to-last sweep, and does not implicitly upcast, the original problem
can be solved in several ways: by specifying a higher precision to sum with,
or by a nested summation, like X.mean(0).mean(0) == 1.0. I personally like
this
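A minimal sketch of both remedies (assuming a reasonably recent NumPy; `np.cumsum` is used here only because it still accumulates strictly first-to-last, so it reproduces the naive-sweep stall even where `sum`/`mean` have smarter reductions):

```python
import numpy as np

n = 2**24 + 8                      # a few elements past float32's 2**24 integer ceiling
x = np.ones(n, dtype=np.float32)

# A strictly first-to-last float32 accumulation stalls at 2**24,
# because 2**24 + 1 is not representable in float32; np.cumsum
# still accumulates naively, so it shows the effect:
print(x.cumsum()[-1])              # 16777216.0, not 16777224.0

# Remedy 1: request a wider accumulator explicitly.
print(x.mean(dtype=np.float64))    # 1.0

# Remedy 2: nested/blocked summation keeps every partial sum small.
X = np.ones((4096, 4096), dtype=np.float32)
print(X.mean(0).mean(0))           # 1.0
```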
The dtype returned by np.where looks right (int64):
>>> import platform
>>> platform.architecture()
('64bit', 'WindowsPE')
>>> import numpy as np
>>> np.__version__
'1.9.0b1'
>>> a = np.zeros(10)
>>> np.where(a == 0)
(array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64),)
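For what it's worth, that dtype comes from `np.intp` (the pointer-sized integer used for index arrays) rather than from the platform's default int, which is why it is int64 even on 64-bit Windows, where the default C long is 32-bit. A quick check:

```python
import numpy as np

a = np.zeros(10)
idx = np.where(a == 0)[0]

# Index arrays use np.intp (pointer-sized), not the default int,
# so a 64-bit build yields int64 even where C long is 32-bit.
print(idx.dtype == np.intp)        # True
```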
--
Olivier
How does the build trigger? If it's just a matter of clicking on something
when released, I think we can handle that :)
On Saturday, May 17, 2014 7:22:00 AM UTC-4, Jeff wrote:
Hi,
I'm pleased to announce the availability of the first release candidate of
Pandas 0.14.0.
Please try this RC
At 01:22 AM 7/25/2014, you wrote:
Actually, the maximum precision I am not so
sure of, as I personally prefer to make an
informed decision about the precision used, and get
an error on a platform that does not support
the specified precision, rather than obtain
subtly or horribly broken results
On Fri, Jul 25, 2014 at 3:11 PM, RayS r...@blue-cove.com wrote:
At 07:22 AM 7/25/2014, you wrote:
We were talking about this in the office, as we
realized it does affect a couple of lines dealing
with large arrays, including complex64.
As I expect Python modules to work uniformly
cross-platform unless documented otherwise, to me
that includes 32- vs 64-bit
Arguably, the whole of floating-point numbers and their related shenanigans is
not very pythonic in the first place. The accuracy of the output WILL depend on
the input, to some degree or another. At the risk of repeating myself: explicit
is better than implicit.
On 7/25/2014 1:40 PM, Eelco Hoogendoorn wrote:
At the risk of repeating myself: explicit is better than implicit
This sounds like an argument for renaming the `mean` function `naivemean`
rather than `mean`. Whatever numpy names `mean`, shouldn't it
implement an algorithm that produces the
Hi,
On Fri, Jul 25, 2014 at 9:52 AM, Jeff jeffreb...@gmail.com wrote:
How does the build trigger? If it's just a matter of clicking on something
when released, I think we can handle that :)
The two options are:
* I add you and whoever else does releases to my repo, and you can
trigger builds
On Fri, Jul 25, 2014 at 5:56 PM, RayS r...@blue-cove.com wrote:
The important point was that it would be best if all of the methods affected
by summing 32 bit floats with 32 bit accumulators had the same Notes as
numpy.mean(). We went through a lot of code yesterday, assuming that any
numpy or
It need not be exactly representable as such; take the mean of [1, 1+eps]
for instance. Granted, there are at most two numbers in the range of the
original dtype which are closest to the true mean; but I'm not sure that
computing them exactly is a tractable problem for arbitrary input.
I'm not sure
At 11:29 AM 7/25/2014, you wrote:
On Fri, Jul 25, 2014 at 4:25 PM, RayS r...@blue-cove.com wrote:
Ray: I'm not working with Hubble data, but yeah, these are all issues I've run
into with my terabytes of microscopy data as well. Given that such raw data
comes as uint16, it's best to do your calculations as much as possible in good
old ints. What you compute is what you get, no obscure
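A sketch of that all-integers approach (the frame here is synthetic, standing in for a raw uint16 image):

```python
import numpy as np

# Synthetic stand-in for a raw 16-bit camera frame.
frame = np.arange(2**16, dtype=np.uint16).reshape(256, 256)

# Integer reductions upcast the accumulator past uint16 automatically,
# so the sum is exact -- what you compute is what you get:
total = int(frame.sum())
print(total)                       # 2147450880, the exact sum of 0..65535

# Convert to float only once, at the very end:
mean = total / frame.size
```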
On 25.07.2014 23:51, Eelco Hoogendoorn wrote:
At 02:36 PM 7/25/2014, you wrote:
But that doesn't excuse users from being aware of the problems. I
think the docstring and the description of the dtype argument are pretty clear.
Most of the docs for the affected functions do not have a Note with
the same warning as mean()
- Ray