True, I suppose there is no harm in accumulating at maximum precision and
storing the result in the original dtype unless otherwise specified, although
I wonder whether the current nditer supports such behavior.
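For concreteness, a minimal sketch of that behavior using the existing public
API (the array contents here are just illustrative):

    import numpy as np

    a = np.full(10**7, 0.1, dtype=np.float32)

    # Accumulate in float64, then store the result back in the input dtype.
    acc = a.mean(dtype=np.float64)   # reduction carried out in float64
    result = a.dtype.type(acc)       # cast back to float32
    print(result, result.dtype)      # ~0.1, float32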

-----Original Message-----
From: "Alan G Isaac" <alan.is...@gmail.com>
Sent: 24-7-2014 18:09
To: "Discussion of Numerical Python" <numpy-discussion@scipy.org>
Subject: Re: [Numpy-discussion] numpy.mean still broken for large float32 arrays

On 7/24/2014 5:59 AM, Eelco Hoogendoorn wrote to Thomas:
> np.mean isn't broken; your understanding of floating point numbers is.


This comment seems to conflate separate issues:
the desirable return type, and the computational algorithm.
It is certainly possible to compute a mean of float32
doing reduction in float64 and still return a float32.
There is nothing implicit in the name `mean` that says
we have to just add everything up and divide by the count.
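For illustration, a small sketch of exactly that. Note that the failure mode
is version-dependent: with straight left-to-right float32 accumulation the
running sum stalls at 2**24, while the pairwise summation in newer NumPy
releases largely avoids this for contiguous reductions:

    import numpy as np

    a = np.ones(10**8, dtype=np.float32)

    # Naive float32 accumulation stalls once the partial sum reaches
    # 2**24, because 16777216.0 + 1.0 == 16777216.0 in float32.
    print(a.mean())                      # may print 0.16777216 instead of 1.0

    # Reduce in float64 but still return a float32:
    m = np.float32(a.mean(dtype=np.float64))
    print(m, m.dtype)                    # 1.0 float32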

My own view is that computing `mean` as a running mean would
improve its behavior enough to justify the speed loss.
Naturally, similar issues arise for `var`, `std`, etc.
See http://www.johndcook.com/standard_deviation.html
for some discussion and references.
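For reference, a minimal pure-Python sketch of the running (Welford)
recurrence described on that page; the function name is mine, not an
existing NumPy API:

    def running_mean_var(xs):
        # Welford's one-pass, numerically stable recurrence.
        n, mean, m2 = 0, 0.0, 0.0
        for x in xs:
            n += 1
            delta = x - mean
            mean += delta / n            # update the running mean
            m2 += delta * (x - mean)     # accumulate squared deviations
        return mean, (m2 / n if n else float('nan'))   # population variance

    print(running_mean_var([1.0, 2.0, 3.0]))   # (2.0, 0.666...)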

Alan Isaac
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
