On Wed, Oct 2, 2013 at 7:51 PM,  <josef.p...@gmail.com> wrote:
> On Wed, Oct 2, 2013 at 2:49 PM,  <josef.p...@gmail.com> wrote:
>> On Wed, Oct 2, 2013 at 2:05 PM, Stéfan van der Walt <ste...@sun.ac.za> wrote:
>>> On 2 Oct 2013 19:14, "Benjamin Root" <ben.r...@ou.edu> wrote:
>>>>
>>>> And it is logically consistent, I think.  a[nanargmax(a)] == nanmax(a)
>>>> (ignoring the silly detail that you can't do an equality on nans).
>>>
>>> Why do you call this a silly detail? It seems to me a fundamental flaw to
>>> this approach.
>>
>> a nan is a nan is a NaN
>>
>>>>> np.testing.assert_equal([0, np.nan], [0, np.nan])
>>>>>
>
> and the functions have "nan" in their names
> nan in - NaN out

This makes no sense :-). The nan in the names means "pretend the nans
aren't there", not "please scatter nans in the output"!

These are just vectorized operations that can fail in some cases and
not in others; there's nothing special about the fact that the function's
definition also happens to involve nan.
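
For concreteness, a minimal sketch of the distinction (output formatting
may vary between numpy versions, and the all-NaN corner case is exactly
what's under discussion, so it's left out):

>>> import numpy as np
>>> a = np.array([1.0, np.nan, 3.0])
>>> np.max(a)        # the plain function lets the nan propagate
nan
>>> np.nanmax(a)     # the nan* function pretends the nan isn't there
3.0
>>> np.nanargmax(a)  # likewise: index of the largest non-nan value
2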

> what about nanmean, nansum, ...?

They do the same thing as mean([]), sum([]), etc., which are
well-defined (nan and 0, respectively).
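
A rough sketch of that equivalence, assuming the behavior described
above (some versions also emit a RuntimeWarning for the empty/all-nan
cases, and older releases differed):

>>> import numpy as np
>>> np.mean([])                   # empty mean is nan
nan
>>> np.sum([])                    # empty sum is 0
0.0
>>> np.nanmean([np.nan, np.nan])  # all-nan input: same answer as mean([])
nan
>>> np.nansum([np.nan, np.nan])   # all-nan input: same answer as sum([])
0.0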

-n
