Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Benjamin Root
On Tue, Feb 23, 2016 at 3:30 PM, Nathaniel Smith  wrote:

> What should this do?
>
> np.zeros((12, 0)).reshape((10, -1, 2))
>


It should error out, as I already covered: ignoring the zeros, 12 is not a
multiple of 10*2 = 20.

Ben Root


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Nathaniel Smith
On Tue, Feb 23, 2016 at 12:23 PM, Benjamin Root  wrote:
>
> On Tue, Feb 23, 2016 at 3:14 PM, Nathaniel Smith  wrote:
>>
>> Sure, it's totally ambiguous. These are all legal:
>
>
>
> I would argue that except for the first reshape, all of those should be an
> error, and that the current algorithm is buggy.

Reshape doesn't care about axes at all; all it cares about is that the
number of elements stays the same. E.g. this is also totally legal:

np.zeros((12, 5)).reshape((10, 3, 2))

And so are the equivalents

np.zeros((12, 5)).reshape((-1, 3, 2))
np.zeros((12, 5)).reshape((10, -1, 2))
np.zeros((12, 5)).reshape((10, 3, -1))

> This isn't a heuristic. It isn't guessing. It is making the semantics
> consistent. The fact that I can do:
> a.shape = (-1, 5, 64)
> or
> a.shape = (0, 5, 64)
>
> but not
> a.shape = (0, 5, -1)
>
> is totally inconsistent.

It's certainly annoying and unpleasant, but it follows inevitably from
the most natural way of defining the -1 semantics, so I'm not sure I'd
say "inconsistent" :-)

What should this do?

np.zeros((12, 0)).reshape((10, -1, 2))

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Benjamin Root
On Tue, Feb 23, 2016 at 3:14 PM, Nathaniel Smith  wrote:

> Sure, it's totally ambiguous. These are all legal:



I would argue that except for the first reshape, all of those should be an
error, and that the current algorithm is buggy.

This isn't a heuristic. It isn't guessing. It is making the semantics
consistent. The fact that I can do:
a.shape = (-1, 5, 64)
or
a.shape = (0, 5, 64)

but not
a.shape = (0, 5, -1)

is totally inconsistent.

Ben Root


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Nathaniel Smith
On Tue, Feb 23, 2016 at 8:45 AM, Benjamin Root  wrote:
> but, it isn't really ambiguous, is it? The -1 can only refer to a single
> dimension, and if you ignore the zeros in the original and new shape, the -1
> is easily solvable, right?

Sure, it's totally ambiguous. These are all legal:

In [1]: a = np.zeros((0, 5, 64))

In [2]: a.shape = (0, 5 * 64)

In [3]: a.shape = (0, 5 * 65)

In [4]: a.shape = (0, 5, 102)

In [5]: a.shape = (0, 102, 64)

Generally, the -1 gets replaced by prod(old_shape) //
prod(specified_entries_in_new_shape). If the specified new shape has a
0 in it, then this is a divide-by-zero. In this case it happens
because it's the solution to the equation
  prod((0, 5, 64)) == prod((0, 5, x))
for which there is no unique solution for 'x'.
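
A minimal sketch of that rule (an illustration only, not NumPy's actual C
implementation; the helper name is made up):

import numpy as np

def resolve_minus_one(old_shape, new_shape):
    # -1 becomes prod(old_shape) // prod(specified entries); a 0 among
    # the specified entries turns this into a division by zero.
    known = int(np.prod([d for d in new_shape if d != -1]))
    total = int(np.prod(old_shape))
    if known == 0 or total % known:
        raise ValueError("total size of new array must be unchanged")
    return tuple(total // known if d == -1 else d for d in new_shape)

print(resolve_minus_one((12, 5), (10, -1, 2)))  # (10, 3, 2)
resolve_minus_one((0, 5, 64), (0, 5, -1))       # raises: known product is 0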

Your proposed solution feels very heuristic-y to me, and heuristics
make me very nervous :-/

If what you really want to say is "flatten axes 1 and 2 together",
then maybe there should be some API that lets you directly specify
*that*? As a bonus you might be able to avoid awkward tuple
manipulations to compute the new shape.
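
For instance, a sketch of what such an API could look like (a hypothetical
helper, not an existing NumPy function):

import numpy as np

def flatten_axes(a, start, stop):
    # Collapse axes start..stop (inclusive) into a single axis by taking
    # the product of their lengths explicitly, so no -1 inference is
    # needed and empty arrays work too.
    merged = int(np.prod(a.shape[start:stop + 1]))
    return a.reshape(a.shape[:start] + (merged,) + a.shape[stop + 1:])

a = np.zeros((0, 5, 64))
print(flatten_axes(a, 1, 2).shape)  # (0, 320)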

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Sebastian Berg
On Di, 2016-02-23 at 21:06 +0100, Sebastian Berg wrote:
> On Di, 2016-02-23 at 14:57 -0500, Benjamin Root wrote:
> > I'd be more than happy to write up the patch. I don't think it
> > would
> > be quite like make zeros be ones, but it would be along those
> > lines.
> > One case I need to wrap my head around is to make sure that a 0
> > would
> > happen if the following was true:
> > 
> > > > > a = np.ones((0, 5*64))
> > > > > a.shape = (-1, 5, 64)
> > 
> > EDIT: Just tried the above, and it works as expected (zero in the
> > first dim)!
> > 
> > Just tried out a couple of other combos:
> > > > > a.shape = (-1,)
> > > > > a.shape
> > (0,)
> > > > > a.shape = (-1, 5, 64)
> > > > > a.shape
> > (0, 5, 64)
> > 
> 
> Seems right to me at first sight :). (I don't like shape assignments,
> though; who cares about one extra view.) Well, maybe substitute 1 for 0
> (i.e. ignore 0s) when solving for -1, and if the -1 resolves to 1 but the
> original shape contained a 0, convert that 1 back to 0. But it is starting
> to sound a bit tricky, though I think it might be straightforward (i.e. no
> real traps, and when it works it is always what you expect).
> The main question is whether you can design cases where the conversion
> back to 0 hides bugs by not failing when it should, and whether that is a
> tradeoff we are willing to accept.
> 

Another thought: maybe you can figure out the -1 correctly if there is no
*other* 0 involved. If there is any other 0, I could imagine problems.
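
For example (an illustration of the concern: once a 0 appears elsewhere,
every candidate value for -1 gives a "valid" size-0 result):

import numpy as np

a = np.zeros((2, 0))
print(a.reshape((0, 2)).shape)  # (0, 2): allowed, total size is 0
print(a.reshape((0, 5)).shape)  # (0, 5): equally allowed, so -1 is ambiguous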

> - Sebastian
> 
> 
> > 
> > This is looking more and more like a bug to me.
> > 
> > Ben Root
> > 
> > 
> > On Tue, Feb 23, 2016 at 1:58 PM, Sebastian Berg <
> > sebast...@sipsolutions.net> wrote:
> > > On Di, 2016-02-23 at 11:45 -0500, Benjamin Root wrote:
> > > > but, it isn't really ambiguous, is it? The -1 can only refer to
> > > > a
> > > > single dimension, and if you ignore the zeros in the original
> > > > and
> > > new
> > > > shape, the -1 is easily solvable, right?
> > > 
> > > I think if there is a simple logic (like using 1 for all zeros in
> > > both the input and output shape for the -1 calculation), maybe we
> > > could do it. I would like someone to think carefully about whether
> > > it would also allow some unexpected generalizations. And at least I
> > > am getting a BrainOutOfResourcesError right now trying to figure
> > > that out :).
> > > 
> > > - Sebastian
> > > 
> > > 
> > > > Ben Root
> > > > 
> > > > On Tue, Feb 23, 2016 at 11:41 AM, Warren Weckesser <
> > > > warren.weckes...@gmail.com> wrote:
> > > > > 
> > > > > 
> > > > > On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root <
> > > > > ben.v.r...@gmail.com> wrote:
> > > > > > Not exactly sure if this should be a bug or not. This came
> > > > > > up
> > > in
> > > > > > a fairly general function of mine to process satellite
> > > > > > data.
> > > > > > Unexpectedly, one of the satellite files had no scans in
> > > > > > it,
> > > > > > triggering an exception when I tried to reshape the data
> > > > > > from
> > > it.
> > > > > > 
> > > > > > > > > import numpy as np
> > > > > > > > > a = np.zeros((0, 5*64))
> > > > > > > > > a.shape
> > > > > > (0, 320)
> > > > > > > > > a.shape = (0, 5, 64)
> > > > > > > > > a.shape
> > > > > > (0, 5, 64)
> > > > > > > > > a.shape = (0, 5*64)
> > > > > > > > > a.shape = (0, 5, -1)
> > > > > > Traceback (most recent call last):
> > > > > >   File "<stdin>", line 1, in <module>
> > > > > > ValueError: total size of new array must be unchanged
> > > > > > 
> > > > > > So, if I know all of the dimensions, I can reshape just
> > > > > > fine.
> > > But
> > > > > > if I wanted to use the nifty -1 semantic, it completely
> > > > > > falls
> > > > > > apart. I can see arguments going either way for whether
> > > > > > this
> > > is a
> > > > > > bug or not.
> > > > > > 
> > > > > 
> > > > > When you try `a.shape = (0, 5, -1)`, the size of the third
> > > > > dimension is ambiguous.  From the Zen of Python:  "In the
> > > > > face
> > > of
> > > > > ambiguity, refuse the temptation to guess."
> > > > > 
> > > > > Warren
> > > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > Thoughts?
> > > > > > 
> > > > > > Ben Root

Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Sebastian Berg
On Di, 2016-02-23 at 14:57 -0500, Benjamin Root wrote:
> I'd be more than happy to write up the patch. I don't think it would
> be quite like make zeros be ones, but it would be along those lines.
> One case I need to wrap my head around is to make sure that a 0 would
> happen if the following was true:
> 
> >>> a = np.ones((0, 5*64))
> >>> a.shape = (-1, 5, 64)
> 
> EDIT: Just tried the above, and it works as expected (zero in the
> first dim)!
> 
> Just tried out a couple of other combos:
> >>> a.shape = (-1,)
> >>> a.shape
> (0,)
> >>> a.shape = (-1, 5, 64)
> >>> a.shape
> (0, 5, 64)
> 

Seems right to me at first sight :). (I don't like shape assignments,
though; who cares about one extra view.) Well, maybe substitute 1 for 0
(i.e. ignore 0s) when solving for -1, and if the -1 resolves to 1 but the
original shape contained a 0, convert that 1 back to 0. But it is starting
to sound a bit tricky, though I think it might be straightforward (i.e. no
real traps, and when it works it is always what you expect).
The main question is whether you can design cases where the conversion
back to 0 hides bugs by not failing when it should, and whether that is a
tradeoff we are willing to accept.

- Sebastian


> 
> This is looking more and more like a bug to me.
> 
> Ben Root
> 
> 
> On Tue, Feb 23, 2016 at 1:58 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
> > On Di, 2016-02-23 at 11:45 -0500, Benjamin Root wrote:
> > > but, it isn't really ambiguous, is it? The -1 can only refer to a
> > > single dimension, and if you ignore the zeros in the original and
> > new
> > > shape, the -1 is easily solvable, right?
> > 
> > I think if there is a simple logic (like using 1 for all zeros in both
> > the input and output shape for the -1 calculation), maybe we could do
> > it. I would like someone to think carefully about whether it would also
> > allow some unexpected generalizations. And at least I am getting a
> > BrainOutOfResourcesError right now trying to figure that out :).
> > 
> > - Sebastian
> > 
> > 
> > > Ben Root
> > >
> > > On Tue, Feb 23, 2016 at 11:41 AM, Warren Weckesser <
> > > warren.weckes...@gmail.com> wrote:
> > > >
> > > >
> > > > On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root <
> > > > ben.v.r...@gmail.com> wrote:
> > > > > Not exactly sure if this should be a bug or not. This came up
> > in
> > > > > a fairly general function of mine to process satellite data.
> > > > > Unexpectedly, one of the satellite files had no scans in it,
> > > > > triggering an exception when I tried to reshape the data from
> > it.
> > > > >
> > > > > >>> import numpy as np
> > > > > >>> a = np.zeros((0, 5*64))
> > > > > >>> a.shape
> > > > > (0, 320)
> > > > > >>> a.shape = (0, 5, 64)
> > > > > >>> a.shape
> > > > > (0, 5, 64)
> > > > > >>> a.shape = (0, 5*64)
> > > > > >>> a.shape = (0, 5, -1)
> > > > > Traceback (most recent call last):
> > > > >   File "<stdin>", line 1, in <module>
> > > > > ValueError: total size of new array must be unchanged
> > > > >
> > > > > So, if I know all of the dimensions, I can reshape just fine.
> > But
> > > > > if I wanted to use the nifty -1 semantic, it completely falls
> > > > > apart. I can see arguments going either way for whether this
> > is a
> > > > > bug or not.
> > > > >
> > > >
> > > > When you try `a.shape = (0, 5, -1)`, the size of the third
> > > > dimension is ambiguous.  From the Zen of Python:  "In the face
> > of
> > > > ambiguity, refuse the temptation to guess."
> > > >
> > > > Warren
> > > >
> > > >
> > > >
> > > > >
> > > > > Thoughts?
> > > > >
> > > > > Ben Root



Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Benjamin Root
I'd be more than happy to write up the patch. I don't think it would be
quite like making zeros be ones, but it would be along those lines. One
case I need to wrap my head around is making sure that a 0 would still
result in the first dimension for the following:

>>> a = np.ones((0, 5*64))
>>> a.shape = (-1, 5, 64)

EDIT: Just tried the above, and it works as expected (zero in the first
dim)!

Just tried out a couple of other combos:
>>> a.shape = (-1,)
>>> a.shape
(0,)
>>> a.shape = (-1, 5, 64)
>>> a.shape
(0, 5, 64)


This is looking more and more like a bug to me.

Ben Root


On Tue, Feb 23, 2016 at 1:58 PM, Sebastian Berg 
wrote:

> On Di, 2016-02-23 at 11:45 -0500, Benjamin Root wrote:
> > but, it isn't really ambiguous, is it? The -1 can only refer to a
> > single dimension, and if you ignore the zeros in the original and new
> > shape, the -1 is easily solvable, right?
>
> I think if there is a simple logic (like using 1 for all zeros in both
> the input and output shape for the -1 calculation), maybe we could do it.
> I would like someone to think carefully about whether it would also allow
> some unexpected generalizations. And at least I am getting a
> BrainOutOfResourcesError right now trying to figure that out :).
>
> - Sebastian
>
>
> > Ben Root
> >
> > On Tue, Feb 23, 2016 at 11:41 AM, Warren Weckesser <
> > warren.weckes...@gmail.com> wrote:
> > >
> > >
> > > On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root <
> > > ben.v.r...@gmail.com> wrote:
> > > > Not exactly sure if this should be a bug or not. This came up in
> > > > a fairly general function of mine to process satellite data.
> > > > Unexpectedly, one of the satellite files had no scans in it,
> > > > triggering an exception when I tried to reshape the data from it.
> > > >
> > > > >>> import numpy as np
> > > > >>> a = np.zeros((0, 5*64))
> > > > >>> a.shape
> > > > (0, 320)
> > > > >>> a.shape = (0, 5, 64)
> > > > >>> a.shape
> > > > (0, 5, 64)
> > > > >>> a.shape = (0, 5*64)
> > > > >>> a.shape = (0, 5, -1)
> > > > Traceback (most recent call last):
> > > >   File "<stdin>", line 1, in <module>
> > > > ValueError: total size of new array must be unchanged
> > > >
> > > > So, if I know all of the dimensions, I can reshape just fine. But
> > > > if I wanted to use the nifty -1 semantic, it completely falls
> > > > apart. I can see arguments going either way for whether this is a
> > > > bug or not.
> > > >
> > >
> > > When you try `a.shape = (0, 5, -1)`, the size of the third
> > > dimension is ambiguous.  From the Zen of Python:  "In the face of
> > > ambiguity, refuse the temptation to guess."
> > >
> > > Warren
> > >
> > >
> > >
> > > >
> > > > Thoughts?
> > > >
> > > > Ben Root


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Sebastian Berg
On Di, 2016-02-23 at 11:45 -0500, Benjamin Root wrote:
> but, it isn't really ambiguous, is it? The -1 can only refer to a
> single dimension, and if you ignore the zeros in the original and new
> shape, the -1 is easily solvable, right?

I think if there is a simple logic (like using 1 for all zeros in both
the input and output shape for the -1 calculation), maybe we could do it.
I would like someone to think carefully about whether it would also allow
some unexpected generalizations. And at least I am getting a
BrainOutOfResourcesError right now trying to figure that out :).
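
A sketch of that logic (an illustration of the proposal only, not current
NumPy behavior; the helper name is made up):

import numpy as np

def reshape_ignoring_zeros(a, new_shape):
    # Solve for -1 using only the nonzero entries of both shapes (Ben's
    # rule); if the array is empty and no explicit 0 remains in the new
    # shape, convert the solved entry back to 0 (Sebastian's refinement).
    old = int(np.prod([d for d in a.shape if d != 0]))
    known = int(np.prod([d for d in new_shape if d not in (0, -1)]))
    shape = list(new_shape)
    if -1 in shape:
        if old % known:
            raise ValueError("cannot resolve -1 unambiguously")
        i = shape.index(-1)
        shape[i] = old // known
        if a.size == 0 and 0 not in shape:
            shape[i] = 0
    return a.reshape(shape)

a = np.zeros((0, 5 * 64))
print(reshape_ignoring_zeros(a, (0, 5, -1)).shape)      # (0, 5, 64)
print(reshape_ignoring_zeros(a, (-1, 5, 64)).shape)     # (0, 5, 64)
reshape_ignoring_zeros(np.zeros((12, 0)), (10, -1, 2))  # raises: 12 % 20 != 0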

- Sebastian


> Ben Root
> 
> On Tue, Feb 23, 2016 at 11:41 AM, Warren Weckesser <
> warren.weckes...@gmail.com> wrote:
> > 
> > 
> > On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root <
> > ben.v.r...@gmail.com> wrote:
> > > Not exactly sure if this should be a bug or not. This came up in
> > > a fairly general function of mine to process satellite data.
> > > Unexpectedly, one of the satellite files had no scans in it,
> > > triggering an exception when I tried to reshape the data from it.
> > > 
> > > >>> import numpy as np
> > > >>> a = np.zeros((0, 5*64))
> > > >>> a.shape
> > > (0, 320)
> > > >>> a.shape = (0, 5, 64)
> > > >>> a.shape
> > > (0, 5, 64)
> > > >>> a.shape = (0, 5*64)
> > > >>> a.shape = (0, 5, -1)
> > > Traceback (most recent call last):
> > >   File "<stdin>", line 1, in <module>
> > > ValueError: total size of new array must be unchanged
> > > 
> > > So, if I know all of the dimensions, I can reshape just fine. But
> > > if I wanted to use the nifty -1 semantic, it completely falls
> > > apart. I can see arguments going either way for whether this is a
> > > bug or not.
> > > 
> > 
> > When you try `a.shape = (0, 5, -1)`, the size of the third
> > dimension is ambiguous.  From the Zen of Python:  "In the face of
> > ambiguity, refuse the temptation to guess."
> > 
> > Warren
> > 
> > 
> > 
> > > 
> > > Thoughts?
> > > 
> > > Ben Root


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Benjamin Root
but, it isn't really ambiguous, is it? The -1 can only refer to a single
dimension, and if you ignore the zeros in the original and new shape, the
-1 is easily solvable, right?

Ben Root

On Tue, Feb 23, 2016 at 11:41 AM, Warren Weckesser <
warren.weckes...@gmail.com> wrote:

>
>
> On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root 
> wrote:
>
>> Not exactly sure if this should be a bug or not. This came up in a fairly
>> general function of mine to process satellite data. Unexpectedly, one of
>> the satellite files had no scans in it, triggering an exception when I
>> tried to reshape the data from it.
>>
>> >>> import numpy as np
>> >>> a = np.zeros((0, 5*64))
>> >>> a.shape
>> (0, 320)
>> >>> a.shape = (0, 5, 64)
>> >>> a.shape
>> (0, 5, 64)
>> >>> a.shape = (0, 5*64)
>> >>> a.shape = (0, 5, -1)
>> Traceback (most recent call last):
>>   File "<stdin>", line 1, in <module>
>> ValueError: total size of new array must be unchanged
>>
>> So, if I know all of the dimensions, I can reshape just fine. But if I
>> wanted to use the nifty -1 semantic, it completely falls apart. I can see
>> arguments going either way for whether this is a bug or not.
>>
>
>
> When you try `a.shape = (0, 5, -1)`, the size of the third dimension is
> ambiguous.  From the Zen of Python:  "In the face of ambiguity, refuse the
> temptation to guess."
>
> Warren
>
>
>
>
>> Thoughts?
>>
>> Ben Root


Re: [Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Warren Weckesser
On Tue, Feb 23, 2016 at 11:32 AM, Benjamin Root 
wrote:

> Not exactly sure if this should be a bug or not. This came up in a fairly
> general function of mine to process satellite data. Unexpectedly, one of
> the satellite files had no scans in it, triggering an exception when I
> tried to reshape the data from it.
>
> >>> import numpy as np
> >>> a = np.zeros((0, 5*64))
> >>> a.shape
> (0, 320)
> >>> a.shape = (0, 5, 64)
> >>> a.shape
> (0, 5, 64)
> >>> a.shape = (0, 5*64)
> >>> a.shape = (0, 5, -1)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> ValueError: total size of new array must be unchanged
>
> So, if I know all of the dimensions, I can reshape just fine. But if I
> wanted to use the nifty -1 semantic, it completely falls apart. I can see
> arguments going either way for whether this is a bug or not.
>


When you try `a.shape = (0, 5, -1)`, the size of the third dimension is
ambiguous.  From the Zen of Python:  "In the face of ambiguity, refuse the
temptation to guess."

Warren




> Thoughts?
>
> Ben Root


[Numpy-discussion] reshaping empty array bug?

2016-02-23 Thread Benjamin Root
Not exactly sure if this should be a bug or not. This came up in a fairly
general function of mine to process satellite data. Unexpectedly, one of
the satellite files had no scans in it, triggering an exception when I
tried to reshape the data from it.

>>> import numpy as np
>>> a = np.zeros((0, 5*64))
>>> a.shape
(0, 320)
>>> a.shape = (0, 5, 64)
>>> a.shape
(0, 5, 64)
>>> a.shape = (0, 5*64)
>>> a.shape = (0, 5, -1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: total size of new array must be unchanged

So, if I know all of the dimensions, I can reshape just fine. But if I
wanted to use the nifty -1 semantic, it completely falls apart. I can see
arguments going either way for whether this is a bug or not.
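
In the meantime, a workaround sketch is to compute the trailing dimension
explicitly instead of using -1:

import numpy as np

a = np.zeros((0, 5 * 64))
a = a.reshape((a.shape[0], 5, a.shape[1] // 5))  # works even with 0 scans
print(a.shape)  # (0, 5, 64)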

Thoughts?

Ben Root


Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Sebastian Berg
On Di, 2014-09-16 at 16:51 -0400, Nathaniel Smith wrote:
 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same behavior of
  einsum:
 
  >>> np.einsum('i,i', [1,1,1], [1])
  3
  >>> np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped shapes
  [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
  https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940
 
 "intentional" and "sensible" are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.
 
  and the code makes it very explicit that input argument dimensions with the
  same label are broadcast to a common shape, see here:
 
  https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't feel
  wrong to me.
 
 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

Agreed; the only argument against changing it right away would be fear of
breaking user code that abuses the kind of thing Josef mentioned.

- Sebastian

 
  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing something
  that has been working like that for so long seems like a risky thing. And I
  cannot come with a convincing example of why it would be harmful either.
 
 gufuncs are very new.
 
 -n
 





Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Sebastian Berg
On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
snip
 
 
 It would also be nice if the order could be made part of the signature
 as DGEMM and friends like one of the argument axis to be contiguous,
 but I don't see a clean way to do that. The gufuncs do have an order
 parameter which should probably default to 'C'  if the arrays/vectors
 are stacked. I think the default is currently 'K'. Hmm, we could make
 'K' refer to the last one or two dimensions in the inputs. OTOH, that
 isn't needed for types not handled by BLAS. Or it could be handled in
 the inner loops.
 

This is a different discussion, right? It would be nice to have an order
flag for the core dimensions. The gufunc itself should not care at all
about the outer ones.
All the orders for the core dimensions would be nice probably, including
no contiguity being enforced (or actually, maybe we can define 'K' to
mean that in this context). To be honest, if 'K' means that, it seems
like a decent default.

- Sebastian

 
 snip
 
 
 Chuck
 





Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Charles R Harris
On Wed, Sep 17, 2014 at 6:48 AM, Sebastian Berg sebast...@sipsolutions.net
wrote:

 On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
 snip
 
 
  It would also be nice if the order could be made part of the signature
  as DGEMM and friends like one of the argument axis to be contiguous,
  but I don't see a clean way to do that. The gufuncs do have an order
  parameter which should probably default to 'C'  if the arrays/vectors
  are stacked. I think the default is currently 'K'. Hmm, we could make
  'K' refer to the last one or two dimensions in the inputs. OTOH, that
  isn't needed for types not handled by BLAS. Or it could be handled in
  the inner loops.
 

 This is a different discussion, right? It would be nice to have an order
 flag for the core dimensions. The gufunc itself should not care at all
 about the outer ones.


Right. It is possible to check all these things in the loop, but the loop
code grows..
.

 All the orders for the core dimensions would be nice probably, including
 no contiguity being enforced (or actually, maybe we can define 'K' to
 mean that in this context). To be honest, if 'K' means that, it seems
 like a decent default.


With regards to the main topic, we could extend the signature notation,
using `[...]` instead of `(...)' for the new behavior.

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Charles R Harris
On Wed, Sep 17, 2014 at 6:57 AM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Sep 17, 2014 at 6:48 AM, Sebastian Berg 
 sebast...@sipsolutions.net wrote:

 On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
 snip
 
 
  It would also be nice if the order could be made part of the signature
  as DGEMM and friends like one of the argument axis to be contiguous,
  but I don't see a clean way to do that. The gufuncs do have an order
  parameter which should probably default to 'C'  if the arrays/vectors
  are stacked. I think the default is currently 'K'. Hmm, we could make
  'K' refer to the last one or two dimensions in the inputs. OTOH, that
  isn't needed for types not handled by BLAS. Or it could be handled in
  the inner loops.
 

 This is a different discussion, right? It would be nice to have an order
 flag for the core dimensions. The gufunc itself should not care at all
 about the outer ones.


 Right. It is possible to check all these things in the loop, but the loop
 code grows..
 .

 All the orders for the core dimensions would be nice probably, including
 no contiguity being enforced (or actually, maybe we can define 'K' to
 mean that in this context). To be honest, if 'K' means that, it seems
 like a decent default.


 With regards to the main topic, we could extend the signature notation,
 using `[...]` instead of `(...)' for the new behavior.


Or we could add a new function,  PyUFunc_StrictGeneralizedFunction, with
the new behavior. that might be the safe way to go.

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Jaime Fernández del Río
On Wed, Sep 17, 2014 at 1:27 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Sep 17, 2014 at 6:57 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Sep 17, 2014 at 6:48 AM, Sebastian Berg 
 sebast...@sipsolutions.net wrote:

 On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
 snip
 
 
  It would also be nice if the order could be made part of the signature
  as DGEMM and friends like one of the argument axis to be contiguous,
  but I don't see a clean way to do that. The gufuncs do have an order
  parameter which should probably default to 'C'  if the arrays/vectors
  are stacked. I think the default is currently 'K'. Hmm, we could make
  'K' refer to the last one or two dimensions in the inputs. OTOH, that
  isn't needed for types not handled by BLAS. Or it could be handled in
  the inner loops.
 

 This is a different discussion, right? It would be nice to have an order
 flag for the core dimensions. The gufunc itself should not care at all
 about the outer ones.


 Right. It is possible to check all these things in the loop, but the loop
 code grows..
 .

 All the orders for the core dimensions would be nice probably, including
 no contiguity being enforced (or actually, maybe we can define 'K' to
 mean that in this context). To be honest, if 'K' means that, it seems
 like a decent default.


 With regards to the main topic, we could extend the signature notation,
 using `[...]` instead of `(...)' for the new behavior.


 Or we could add a new function,  PyUFunc_StrictGeneralizedFunction, with
 the new behavior. that might be the safe way to go.


That sounds good to me. The current flow is that 'ufunc_generic_call',
which is the function in the tp_call slot of the PyUFunc object, calls
'PyUFunc_GenericFunction', which will call 'PyUFunc_GeneralizedFunction' if
the 'core_enabled' member variable is set to 1. We could have a new
'PyUFunc_StrictFromFuncAndDataAndSignature' that sets the 'core_enabled'
variable to e.g. 2, and then dispatch on this value in
'PyUFunc_GenericFunction' to the new 'PyUFunc_StrictGeneralizedFunction'.

This will also give us a better sandbox to experiment with all the other
enhancements we have been talking about: frozen dimensions, optional
dimensions, computed dimensions...

I am guessing we still want to deprecate the old behavior in the next
release and remove it entirely in a couple more, right?

Jaime


 Chuck





-- 
(\__/)
( O.o)
(  ) This is Conejo. Copy Conejo into your signature and help him with his
plans for world domination.


Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Charles R Harris
On Wed, Sep 17, 2014 at 3:01 PM, Jaime Fernández del Río 
jaime.f...@gmail.com wrote:

 On Wed, Sep 17, 2014 at 1:27 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Sep 17, 2014 at 6:57 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Sep 17, 2014 at 6:48 AM, Sebastian Berg 
 sebast...@sipsolutions.net wrote:

 On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
 snip
 
 
  It would also be nice if the order could be made part of the signature
  as DGEMM and friends like one of the argument axis to be contiguous,
  but I don't see a clean way to do that. The gufuncs do have an order
  parameter which should probably default to 'C'  if the arrays/vectors
  are stacked. I think the default is currently 'K'. Hmm, we could make
  'K' refer to the last one or two dimensions in the inputs. OTOH, that
  isn't needed for types not handled by BLAS. Or it could be handled in
  the inner loops.
 

 This is a different discussion, right? It would be nice to have an order
 flag for the core dimensions. The gufunc itself should not care at all
 about the outer ones.


 Right. It is possible to check all these things in the loop, but the
 loop code grows..
 .

 All the orders for the core dimensions would be nice probably, including
 no contiguity being enforced (or actually, maybe we can define 'K' to
 mean that in this context). To be honest, if 'K' means that, it seems
 like a decent default.


 With regards to the main topic, we could extend the signature notation,
 using `[...]` instead of `(...)' for the new behavior.


 Or we could add a new function,  PyUFunc_StrictGeneralizedFunction, with
 the new behavior. that might be the safe way to go.


 That sounds good to me, the current flow is that 'ufunc_generic_call',
 which is the function in the tp_call slot of the PyUFunc object, calls
 'PyUFunc_GenericFunction', which will call 'PyUFunc_GeneralizedFunction' if
 the 'core_enabled' member variable is set to 1. We could have a new
 'PyUFunc_StrictFromFuncAndDataAndSignature' that sets the 'core_enabled'
 variable to e.g. 2, and then dispatch on this value in
 'PyUFunc_GenericFunction' to the new 'PyUFunc_StrictGeneralizedFunction'.

 This will also give us a better sandbox to experiment with all the other
 enhancements we have been talking about: frozen dimensions, optional
 dimensions, computed dimensions...


That sounds good; it is cleaner than the other solutions. The new
constructor will need to be in the interface and the interface version
updated.


 I am guessing we still want to deprecate the old behavior in the next
 release and remove it entirely in a couple more, right?


Don't know. It is in the interface, so we might want to just deprecate it
and leave it lying around. Could maybe add an argument to the new constructor
that sets the `core_enabled` value so we don't need to keep adding new
functions to the api. If so, should probably be an enum in the include file
so valid values get passed.

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-17 Thread Charles R Harris
On Wed, Sep 17, 2014 at 3:29 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Wed, Sep 17, 2014 at 3:01 PM, Jaime Fernández del Río 
 jaime.f...@gmail.com wrote:

 On Wed, Sep 17, 2014 at 1:27 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Sep 17, 2014 at 6:57 AM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Wed, Sep 17, 2014 at 6:48 AM, Sebastian Berg 
 sebast...@sipsolutions.net wrote:

 On Mi, 2014-09-17 at 06:33 -0600, Charles R Harris wrote:
 
 
 snip
 
 
  It would also be nice if the order could be made part of the
 signature
  as DGEMM and friends like one of the argument axis to be contiguous,
  but I don't see a clean way to do that. The gufuncs do have an order
  parameter which should probably default to 'C'  if the arrays/vectors
  are stacked. I think the default is currently 'K'. Hmm, we could make
  'K' refer to the last one or two dimensions in the inputs. OTOH, that
  isn't needed for types not handled by BLAS. Or it could be handled in
  the inner loops.
 

 This is a different discussion, right? It would be nice to have an
 order
 flag for the core dimensions. The gufunc itself should not care at all
 about the outer ones.


 Right. It is possible to check all these things in the loop, but the
 loop code grows..
 .

 All the orders for the core dimensions would be nice probably,
 including
 no contiguity being enforced (or actually, maybe we can define 'K' to
 mean that in this context). To be honest, if 'K' means that, it seems
 like a decent default.


 With regards to the main topic, we could extend the signature notation,
 using `[...]` instead of `(...)' for the new behavior.


 Or we could add a new function,  PyUFunc_StrictGeneralizedFunction, with
 the new behavior. that might be the safe way to go.


 That sounds good to me, the current flow is that 'ufunc_generic_call',
 which is the function in the tp_call slot of the PyUFunc object, calls
 'PyUFunc_GenericFunction', which will call 'PyUFunc_GeneralizedFunction' if
 the 'core_enabled' member variable is set to 1. We could have a new
 'PyUFunc_StrictFromFuncAndDataAndSignature' that sets the 'core_enabled'
 variable to e.g. 2, and then dispatch on this value in
 'PyUFunc_GenericFunction' to the new 'PyUFunc_StrictGeneralizedFunction'.

 This will also give us a better sandbox to experiment with all the other
 enhancements we have been talking about: frozen dimensions, optional
 dimensions, computed dimensions...


 That sounds good, it is cleaner than the other solutions. The new
 constructor will need to be in the interface and the interface version
 updated.


 I am guessing we still want to deprecate the old behavior in the next
 release and remove it entirely in a couple more, right?


 Don't know. It is in the interface, so might want to just deprecate it and
 leave it laying around. Could maybe add an argument to the new constructor
 that sets the `core_enabled` value so we don't need to keep adding new
 functions to the api. If so, should probably be an enum in the include file
 so valid values get passed.


And then Ufunc in the code_generator could be modified to take both utype
(core_enabled value) and signature and use the new constructor.

Chuck


[Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
Hi All,

It turns out that gufuncs will broadcast the last dimension if it is one.
For instance, inner1d has signature `(n), (n) -> ()`, yet

In [27]: inner1d([1,1,1], [1])
Out[27]: 3

In [28]: inner1d([1,1,1], [1,1])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-28-e53e62e35349> in <module>()
----> 1 inner1d([1,1,1], [1,1])

ValueError: inner1d: Operand 1 has a mismatch in its core dimension 0, with
gufunc signature (i),(i)->() (size 2 is different from 3)


I'd think this is a bug, as the dimensions should match. Note that scalar 1
will be promoted to [1] in this case.
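
To reproduce (a sketch; it assumes inner1d comes from
numpy.core.umath_tests, where the test gufuncs live, and shows np.dot for
contrast):

import numpy as np
from numpy.core.umath_tests import inner1d

print(inner1d([1, 1, 1], [1]))  # 3: the size-1 core dimension is broadcast
try:
    np.dot([1, 1, 1], [1])      # np.dot rejects the same size mismatch
except ValueError as e:
    print(e)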

Thoughts?

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
On Tue, Sep 16, 2014 at 1:27 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:

 Hi All,

 It turns out that gufuncs will broadcast the last dimension if it is one.
 For instance, inner1d has signature `(n), (n) -> ()`, yet

 In [27]: inner1d([1,1,1], [1])
 Out[27]: 3

 In [28]: inner1d([1,1,1], [1,1])
 ---------------------------------------------------------------------------
 ValueError                                Traceback (most recent call last)
 <ipython-input-28-e53e62e35349> in <module>()
 ----> 1 inner1d([1,1,1], [1,1])

 ValueError: inner1d: Operand 1 has a mismatch in its core dimension 0,
 with gufunc signature (i),(i)->() (size 2 is different from 3)


 I'd think this is a bug, as the dimensions should match. Note that scalar
 1 will be promoted to [1] in this case.

 Thoughts?


This also holds for matrix_multiply

In [33]: matrix_multiply(eye(3), [[1]])
Out[33]:
array([[ 1.],
       [ 1.],
       [ 1.]])

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Nathaniel Smith
On Tue, Sep 16, 2014 at 3:27 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
 Hi All,

 It turns out that gufuncs will broadcast the last dimension if it is one.
 For instance, inner1d has signature `(n), (n) -> ()`, yet

 In [27]: inner1d([1,1,1], [1])
 Out[27]: 3

Yes, this looks totally wrong to me too... broadcasting is a feature
of auto-vectorizing a core operation over a set of dimensions, it
shouldn't be applied to the dimensions of the core operation itself
like this.

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread josef.pktd
On Tue, Sep 16, 2014 at 3:42 PM, Nathaniel Smith n...@pobox.com wrote:
 On Tue, Sep 16, 2014 at 3:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 Hi All,

 It turns out that gufuncs will broadcast the last dimension if it is one.
 For instance, inner1d has signature `(n), (n) -> ()`, yet

 In [27]: inner1d([1,1,1], [1])
 Out[27]: 3

 Yes, this looks totally wrong to me too... broadcasting is a feature
 of auto-vectorizing a core operation over a set of dimensions, it
 shouldn't be applied to the dimensions of the core operation itself
 like this.

Are these functions doing any numerical shortcuts in this case?

If yes, this would be convenient.

inner1d(x, weights)   with weights is either (n, ) or ()

if weights == 1:
    return x.sum()
else:
    return inner1d(x, weights)

Josef


 -n

 --
 Nathaniel J. Smith
 Postdoctoral researcher - Informatics - University of Edinburgh
 http://vorpus.org


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
On Tue, Sep 16, 2014 at 1:55 PM, josef.p...@gmail.com wrote:

 On Tue, Sep 16, 2014 at 3:42 PM, Nathaniel Smith n...@pobox.com wrote:
  On Tue, Sep 16, 2014 at 3:27 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
  Hi All,
 
  It turns out that gufuncs will broadcast the last dimension if it is
 one.
  For instance, inner1d has signature `(n), (n) - ()`, yet
 
  In [27]: inner1d([1,1,1], [1])
  Out[27]: 3
 
  Yes, this looks totally wrong to me too... broadcasting is a feature
  of auto-vectorizing a core operation over a set of dimensions, it
  shouldn't be applied to the dimensions of the core operation itself
  like this.

 Are these functions doing any numerical shortcuts in this case?

 If yes, this would be convenient.

 inner1d(x, weights)   with weights is either (n, ) or ()

 if weights == 1:
 return x.sum()
 else:
 return inner1d(x, weights)


That depends on the inner loop ;) Currently inner1d's inner loop
multiplies and adds, so it is not as efficient as a plain sum in the
scalar case. However, it is probably faster than an if statement.

In [4]: timeit inner1d(a, 1)
1 loops, best of 3: 56.4 µs per loop

In [5]: timeit a.sum()
1 loops, best of 3: 48.3 µs per loop

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Nathaniel Smith
On Tue, Sep 16, 2014 at 3:55 PM,  josef.p...@gmail.com wrote:
 On Tue, Sep 16, 2014 at 3:42 PM, Nathaniel Smith n...@pobox.com wrote:
 On Tue, Sep 16, 2014 at 3:27 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:
 Hi All,

 It turns out that gufuncs will broadcast the last dimension if it is one.
 For instance, inner1d has signature `(n), (n) -> ()`, yet

 In [27]: inner1d([1,1,1], [1])
 Out[27]: 3

 Yes, this looks totally wrong to me too... broadcasting is a feature
 of auto-vectorizing a core operation over a set of dimensions, it
 shouldn't be applied to the dimensions of the core operation itself
 like this.

 Are these functions doing any numerical shortcuts in this case?

 If yes, this would be convenient.

 inner1d(x, weights)   with weights is either (n, ) or ()

 if weights == 1:
 return x.sum()
 else:
 return inner1d(x, weights)

Yes, if this is the behaviour you want then I think you should write
this if statement :-). This case isn't general enough to build
directly into inner1d IMHO.
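
A runnable 1-D version of that if statement (a sketch; the wrapper name is
made up, and inner1d is assumed to come from numpy.core.umath_tests):

import numpy as np
from numpy.core.umath_tests import inner1d

def weighted_total(x, weights):
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    # A scalar (or single-element) weight short-circuits to a plain sum
    # instead of relying on a size-1 core dimension being broadcast.
    if w.size == 1:
        return x.sum() * w.item()
    return inner1d(x, w)

print(weighted_total([1, 2, 3], 1))          # 6.0
print(weighted_total([1, 2, 3], [2, 2, 2]))  # 12.0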

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Jaime Fernández del Río
On Tue, Sep 16, 2014 at 12:27 PM, Charles R Harris 
charlesr.har...@gmail.com wrote:

 Hi All,

 It turns out that gufuncs will broadcast the last dimension if it is one.
 For instance, inner1d has signature `(n), (n) -> ()`, yet

 In [27]: inner1d([1,1,1], [1])
 Out[27]: 3

 In [28]: inner1d([1,1,1], [1,1])
 ---------------------------------------------------------------------------
 ValueError                                Traceback (most recent call last)
 <ipython-input-28-e53e62e35349> in <module>()
 ----> 1 inner1d([1,1,1], [1,1])

 ValueError: inner1d: Operand 1 has a mismatch in its core dimension 0,
 with gufunc signature (i),(i)->() (size 2 is different from 3)


 I'd think this is a bug, as the dimensions should match. Note that scalar
 1 will be promoted to [1] in this case.

 Thoughts?


If it is a bug, it is an extended one, because it is the same behavior as
einsum's:

>>> np.einsum('i,i', [1,1,1], [1])
3
>>> np.einsum('i,i', [1,1,1], [1,1])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: operands could not be broadcast together with remapped shapes
[original->remapped]: (3,)->(3,) (2,)->(2,)

And I think it is a conscious design decision; there is a comment about
broadcasting missing core dimensions here:


https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

and the code makes it very explicit that input argument dimensions with the
same label are broadcast to a common shape, see here:


https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956

I kind of expect numpy to broadcast whenever possible, so this doesn't feel
wrong to me.

That said, it is hard to come up with convincing examples of how this
behavior would be useful in any practical context. But changing something
that has been working like that for so long seems like a risky thing. And I
cannot come up with a convincing example of why it would be harmful either.

Jaime

-- 
(\__/)
( O.o)
(  ) This is Conejo. Copy Conejo into your signature and help him with his
plans for world domination.


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Nathaniel Smith
On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
 If it is a bug, it is an extended one, because it is the same behavior of
 einsum:

 >>> np.einsum('i,i', [1,1,1], [1])
 3
 >>> np.einsum('i,i', [1,1,1], [1,1])
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 ValueError: operands could not be broadcast together with remapped shapes
 [original->remapped]: (3,)->(3,) (2,)->(2,)

 And I think it is a conscious design decision, there is a comment about
 broadcasting missing core dimensions here:

 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

"intentional" and "sensible" are not always the same thing :-). That
said, it isn't totally obvious to me what the correct behaviour for
einsum is in this case.

 and the code makes it very explicit that input argument dimensions with the
 same label are broadcast to a common shape, see here:

 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956

 I kind of expect numpy to broadcast whenever possible, so this doesn't feel
 wrong to me.

The case Chuck is talking about is like if we allowed matrix
multiplication between an array with shape (n, 1) with an array with
(k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
wrong to me, will certainly hide many bugs, and is definitely not how
it works right now (for np.dot, anyway; apparently it does work that
way for the brand-new gufunc np.linalg.matrix_multiply, but this must
be an accident).
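
A small illustration of the analogy (elementwise broadcasting accepts the
size-1 axis, but np.dot refuses the mismatched inner dimensions):

import numpy as np

a = np.ones((3, 1))
b = np.ones((4, 5))
print((a * np.ones((3, 4))).shape)  # (3, 4): elementwise broadcast is fine
try:
    np.dot(a, b)                    # matrix product: 1 != 4, so this raises
except ValueError as e:
    print(e)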

 That said, it is hard to come up with convincing examples of how this
 behavior would be useful in any practical context. But changing something
 that has been working like that for so long seems like a risky thing. And I
 cannot come with a convincing example of why it would be harmful either.

gufuncs are very new.

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same behavior of
  einsum:
 
  >>> np.einsum('i,i', [1,1,1], [1])
  3
  >>> np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped shapes
  [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 "intentional" and "sensible" are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions with
 the
  same label are broadcast to a common shape, see here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't
 feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing something
  that has been working like that for so long seems like a risky thing.
 And I
  cannot come with a convincing example of why it would be harmful either.

 gufuncs are very new.


Or at least newly used. They've been sitting around for years with little
use and less testing. This is probably (easily?) fixable as the shape of
the operands is available.

In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
Out[22]: [(3,), (3, 3)]

In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
Out[23]: [(1, 3), (3, 3)]

Chuck


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Jaime Fernández del Río
On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris charlesr.har...@gmail.com
 wrote:



 On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same behavior
 of
  einsum:
 
  >>> np.einsum('i,i', [1,1,1], [1])
  3
  >>> np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped shapes
  [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 "intentional" and "sensible" are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions with
 the
  same label are broadcast to a common shape, see here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't
 feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing
 something
  that has been working like that for so long seems like a risky thing.
 And I
  cannot come with a convincing example of why it would be harmful either.

 gufuncs are very new.


 Or at least newly used. They've been sitting around for years with little
 use and less testing. This is probably (easily?) fixable as the shape of
 the operands is available.

 In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
 Out[22]: [(3,), (3, 3)]

 In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
 Out[23]: [(1, 3), (3, 3)]


If we agree that it is broken, which I still am not fully sure of, then
yes, it is very easy to fix. I have been looking into that code quite a bit
lately, so I could patch something up pretty quick.

Are we OK with the appending of size 1 dimensions to complete the core
dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]]) work,
or should it complain about the first argument having fewer dimensions than
the core dimensions in the signature?

Lastly, there is an interesting side effect of the way this broadcasting is
handled: if a gufunc specifies a core dimension in an output argument only,
and an `out` kwarg is not passed in, then the output array will have that
core dimension set to be of size 1, e.g. if the signature of `f` is
'(),()-(a)', then f(1, 2).shape is (1,). This has always felt funny to me,
and I think that an unspecified dimension in an output array should either
be specified by a passed out array, or raise an error about an unspecified
core dimension or something like that. Does this sound right?

Jaime

-- 
(\__/)
( O.o)
(  ) This is Bunny. Copy Bunny into your signature and help him with his
plans for world domination.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Eric Moore
On Tuesday, September 16, 2014, Jaime Fernández del Río 
jaime.f...@gmail.com wrote:

 On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris 
 charlesr.har...@gmail.com
 javascript:_e(%7B%7D,'cvml','charlesr.har...@gmail.com'); wrote:



 On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com
 javascript:_e(%7B%7D,'cvml','n...@pobox.com'); wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com
 javascript:_e(%7B%7D,'cvml','jaime.f...@gmail.com'); wrote:
  If it is a bug, it is an extended one, because it is the same behavior
 of
  einsum:
 
  np.einsum('i,i', [1,1,1], [1])
  3
  np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped
 shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 intentional and sensible are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions
 with the
  same label are broadcast to a common shape, see here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't
 feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing
 something
  that has been working like that for so long seems like a risky thing.
 And I
  cannot come with a convincing example of why it would be harmful
 either.

 gufuncs are very new.


 Or at least newly used. They've been sitting around for years with little
 use and less testing. This is probably (easily?) fixable as the shape of
 the operands is available.

 In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
 Out[22]: [(3,), (3, 3)]

 In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
 Out[23]: [(1, 3), (3, 3)]


 If we agree that it is broken, which I still am not fully sure of, then
 yes, it is very easy to fix. I have been looking into that code quite a bit
 lately, so I could patch something up pretty quick.

 Are we OK with the appending of size 1 dimensions to complete the core
 dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]]) work,
 or should it complain about the first argument having less dimensions than
 the core dimensions in the signature?

 Lastly, there is an interesting side effect of the way this broadcasting
 is handled: if a gufunc specifies a core dimension in an output argument
 only, and an `out` kwarg is not passed in, then the output array will have
 that core dimension set to be of size 1, e.g. if the signature of `f` is
 '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to me,
 and I think that an unspecified dimension in an output array should either
 be specified by a passed out array, or raise an error about an unspecified
 core dimension or something like that. Does this sound right?

 Jaime

 --
 (\__/)
 ( O.o)
 (  ) This is Bunny. Copy Bunny into your signature and help him with his
 plans for world domination.


Given this and the earlier discussion about improvements to this code, I
wonder if it wouldn't be worth implementing the logic in Python first. This
way there is something to test against, and something to play with while all
of the cases are sorted out.
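
Along those lines, a toy sketch of what strict core-dimension matching could
look like in Python (a hypothetical helper, not NumPy's code -- each label
binds to exactly one size, and mismatches raise instead of broadcasting):

    def resolve_core_dims(shapes, core_sigs):
        # Bind each labelled core dimension to a single size;
        # raise on mismatch instead of broadcasting it away.
        sizes = {}
        for shape, core in zip(shapes, core_sigs):
            if len(shape) < len(core):
                raise ValueError("operand %r has fewer dimensions than "
                                 "its core signature %r" % (shape, core))
            core_shape = shape[len(shape) - len(core):]
            for label, size in zip(core, core_shape):
                if sizes.setdefault(label, size) != size:
                    raise ValueError("core dimension %r mismatch: %d vs %d"
                                     % (label, sizes[label], size))
        return sizes

    # matrix multiplication, signature '(m,n),(n,p)->(m,p)':
    resolve_core_dims([(3, 4), (4, 5)], [('m', 'n'), ('n', 'p')])
    # -> {'m': 3, 'n': 4, 'p': 5}

    # the einsum('i,i', [1,1,1], [1]) case from this thread:
    resolve_core_dims([(3,), (1,)], [('i',), ('i',)])
    # -> ValueError: core dimension 'i' mismatch: 3 vs 1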
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
On Tue, Sep 16, 2014 at 4:56 PM, Jaime Fernández del Río 
jaime.f...@gmail.com wrote:

 On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same behavior
 of
  einsum:
 
  np.einsum('i,i', [1,1,1], [1])
  3
  np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped
 shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 intentional and sensible are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions
 with the
  same label are broadcast to a common shape, see here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't
 feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing
 something
  that has been working like that for so long seems like a risky thing.
 And I
  cannot come with a convincing example of why it would be harmful
 either.

 gufuncs are very new.


 Or at least newly used. They've been sitting around for years with little
 use and less testing. This is probably (easily?) fixable as the shape of
 the operands is available.

 In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
 Out[22]: [(3,), (3, 3)]

 In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
 Out[23]: [(1, 3), (3, 3)]


 If we agree that it is broken, which I still am not fully sure of, then
 yes, it is very easy to fix. I have been looking into that code quite a bit
 lately, so I could patch something up pretty quick.


That would be nice... I've been starting to look through the code and
didn't relish it.


 Are we OK with the appending of size 1 dimensions to complete the core
 dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]]) work,
 or should it complain about the first argument having less dimensions than
 the core dimensions in the signature?


Yes, I think we need to keep that part. It is even essential ;)


 Lastly, there is an interesting side effect of the way this broadcasting
 is handled: if a gufunc specifies a core dimension in an output argument
 only, and an `out` kwarg is not passed in, then the output array will have
 that core dimension set to be of size 1, e.g. if the signature of `f` is
 '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to me,
 and I think that an unspecified dimension in an output array should either
 be specified by a passed out array, or raise an error about an unspecified
 core dimension or something like that. Does this sound right?


Uh, I need to get my head around that before commenting.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Charles R Harris
On Tue, Sep 16, 2014 at 5:03 PM, Eric Moore e...@redtetrahedron.org wrote:



 On Tuesday, September 16, 2014, Jaime Fernández del Río 
 jaime.f...@gmail.com wrote:

 On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris 
 charlesr.har...@gmail.com wrote:



 On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same
 behavior of
  einsum:
 
  np.einsum('i,i', [1,1,1], [1])
  3
  np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped
 shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment
 about
  broadcasting missing core dimensions here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 intentional and sensible are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions
 with the
  same label are broadcast to a common shape, see here:
 
 
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this
 doesn't feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing
 something
  that has been working like that for so long seems like a risky thing.
 And I
  cannot come with a convincing example of why it would be harmful
 either.

 gufuncs are very new.


 Or at least newly used. They've been sitting around for years with
 little use and less testing. This is probably (easily?) fixable as the
 shape of the operands is available.

 In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
 Out[22]: [(3,), (3, 3)]

 In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
 Out[23]: [(1, 3), (3, 3)]


 If we agree that it is broken, which I still am not fully sure of, then
 yes, it is very easy to fix. I have been looking into that code quite a bit
 lately, so I could patch something up pretty quick.

 Are we OK with the appending of size 1 dimensions to complete the core
 dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]]) work,
 or should it complain about the first argument having less dimensions than
 the core dimensions in the signature?

 Lastly, there is an interesting side effect of the way this broadcasting
 is handled: if a gufunc specifies a core dimension in an output argument
 only, and an `out` kwarg is not passed in, then the output array will have
 that core dimension set to be of size 1, e.g. if the signature of `f` is
 '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to me,
 and I think that an unspecified dimension in an output array should either
 be specified by a passed out array, or raise an error about an unspecified
 core dimension or something like that. Does this sound right?

 Jaime

 --
 (\__/)
 ( O.o)
 (  ) This is Bunny. Copy Bunny into your signature and help him with his
 plans for world domination.


 Given this and the earlier discussion about improvements to this code, I
 wonder if it wouldn't be worth implementing the logic in Python first. This
 way there is something to test against, and something to play with while all
 of the cases are sorted out.


I've got a couple of generalized functions whose tests turned this up.
Speaking of which, they are tentatively named

mulvecvec
mulvecmat
mulmatvec
mulmatmat

and work on stacked matrices and vectors. I can see using 'dot' instead of
'mul', and any other suggestions would be welcome. I've also made it easier
to specify generalized functions in the code generator, but given the
multiple loops, I haven't settled on a good way of using generic loops.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Nathaniel Smith
On Tue, Sep 16, 2014 at 6:56 PM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
 On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris
 charlesr.har...@gmail.com wrote:

 On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  If it is a bug, it is an extended one, because it is the same behavior
  of
  einsum:
 
  np.einsum('i,i', [1,1,1], [1])
  3
  np.einsum('i,i', [1,1,1], [1,1])
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ValueError: operands could not be broadcast together with remapped
  shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
 
  And I think it is a conscious design decision, there is a comment about
  broadcasting missing core dimensions here:
 
 
  https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940

 intentional and sensible are not always the same thing :-). That
 said, it isn't totally obvious to me what the correct behaviour for
 einsum is in this case.

  and the code makes it very explicit that input argument dimensions with
  the
  same label are broadcast to a common shape, see here:
 
 
  https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
 
  I kind of expect numpy to broadcast whenever possible, so this doesn't
  feel
  wrong to me.

 The case Chuck is talking about is like if we allowed matrix
 multiplication between an array with shape (n, 1) with an array with
 (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
 wrong to me, will certainly hide many bugs, and is definitely not how
 it works right now (for np.dot, anyway; apparently it does work that
 way for the brand-new gufunc np.linalg.matrix_multiply, but this must
 be an accident).

  That said, it is hard to come up with convincing examples of how this
  behavior would be useful in any practical context. But changing
  something
  that has been working like that for so long seems like a risky thing.
  And I
  cannot come with a convincing example of why it would be harmful
  either.

 gufuncs are very new.


 Or at least newly used. They've been sitting around for years with little
 use and less testing. This is probably (easily?) fixable as the shape of the
 operands is available.

 In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
 Out[22]: [(3,), (3, 3)]

 In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
 Out[23]: [(1, 3), (3, 3)]


 If we agree that it is broken, which I still am not fully sure of, then yes,
 it is very easy to fix. I have been looking into that code quite a bit
 lately, so I could patch something up pretty quick.

 Are we OK with the appending of size 1 dimensions to complete the core
 dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]]) work, or
 should it complain about the first argument having less dimensions than the
 core dimensions in the signature?

I think that by default, gufuncs should definitely *not* allow this.

Example case 1: qr can be applied equally well to a (1, n) array or an
(n, 1) array, but with different results. If the user passes in an
(n,) array, then how do we know which one they wanted?

Example case 2: matrix multiplication, as you know :-), is a case
where I do think we should allow for a bit more cleverness with the
core dimensions... but the appropriate cleverness is much more subtle
than just prepend size 1 dimensions until things fit. Instead, for
the first argument you need to prepend, for the second argument you
need to append, and then you need to remove the corresponding
dimensions from the output. Specific cases:

# Your version gives:
matmul([1, 1, 1], [[1], [1], [1]]).shape == (1, 1)
# But this should be (1,) (try it with np.dot)

# Your version gives:
matmul([[1, 1, 1]], [1, 1, 1]) -> error, (1, 3) and (1, 3) are not conformable
# But this should work (second argument should be treated as (3, 1), not (1, 3))

So the default should be to be strict about core dimensions, unless
explicitly requested otherwise by the person defining the gufunc.
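
(For a modern reader: np.matmul, added later with PEP 465 semantics,
implements exactly this prepend/append-and-remove rule -- a quick check,
assuming a current NumPy:

    import numpy as np

    # 1-d first operand: (3,) is promoted to (1, 3) and the prepended
    # axis is removed from the result:
    np.matmul([1, 1, 1], [[1], [1], [1]]).shape   # (1,)

    # 1-d second operand: (3,) is promoted to (3, 1) and the appended
    # axis is removed from the result:
    np.matmul([[1, 1, 1]], [1, 1, 1]).shape       # (1,)
)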

 Lastly, there is an interesting side effect of the way this broadcasting is
 handled: if a gufunc specifies a core dimension in an output argument only,
 and an `out` kwarg is not passed in, then the output array will have that
 core dimension set to be of size 1, e.g. if the signature of `f` is
 '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to me,
 and I think that an unspecified dimension in an output array should either
 be specified by a passed out array, or raise an error about an unspecified
 core dimension or something like that. Does this sound right?

Does this have any use cases? My vote is that we simply disallow this
until we have concrete uses and can decide how to do it properly. That
way there won't be any backcompat concerns to deal with later.

-n

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics 

Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Jaime Fernández del Río
On Tue, Sep 16, 2014 at 4:32 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 6:56 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
 
  On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:
 
  On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
  jaime.f...@gmail.com wrote:
   If it is a bug, it is an extended one, because it is the same
 behavior
   of
   einsum:
  
   np.einsum('i,i', [1,1,1], [1])
   3
   np.einsum('i,i', [1,1,1], [1,1])
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   ValueError: operands could not be broadcast together with remapped
  shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
  
   And I think it is a conscious design decision, there is a comment
 about
   broadcasting missing core dimensions here:
  
  
  
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940
 
  intentional and sensible are not always the same thing :-). That
  said, it isn't totally obvious to me what the correct behaviour for
  einsum is in this case.
 
   and the code makes it very explicit that input argument dimensions
 with
   the
   same label are broadcast to a common shape, see here:
  
  
  
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
  
   I kind of expect numpy to broadcast whenever possible, so this
 doesn't
   feel
   wrong to me.
 
  The case Chuck is talking about is like if we allowed matrix
  multiplication between an array with shape (n, 1) with an array with
  (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
  wrong to me, will certainly hide many bugs, and is definitely not how
  it works right now (for np.dot, anyway; apparently it does work that
  way for the brand-new gufunc np.linalg.matrix_multiply, but this must
  be an accident).
 
   That said, it is hard to come up with convincing examples of how this
   behavior would be useful in any practical context. But changing
   something
   that has been working like that for so long seems like a risky thing.
   And I
   cannot come with a convincing example of why it would be harmful
   either.
 
  gufuncs are very new.
 
 
  Or at least newly used. They've been sitting around for years with
 little
  use and less testing. This is probably (easily?) fixable as the shape
 of the
  operands is available.
 
  In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
  Out[22]: [(3,), (3, 3)]
 
  In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
  Out[23]: [(1, 3), (3, 3)]
 
 
  If we agree that it is broken, which I still am not fully sure of, then
 yes,
  it is very easy to fix. I have been looking into that code quite a bit
  lately, so I could patch something up pretty quick.
 
  Are we OK with the appending of size 1 dimensions to complete the core
  dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]])
 work, or
  should it complain about the first argument having less dimensions than
 the
  core dimensions in the signature?

 I think that by default, gufuncs should definitely *not* allow this.


Too late! ;-)

I just put together some working code and sent a PR implementing the
behavior that Charles asked for:

https://github.com/numpy/numpy/pull/5077

Should we keep the discussion here, or take it over there?

Jaime



 Example case 1: qr can be applied equally well to a (1, n) array or an
 (n, 1) array, but with different results. If the user passes in an
 (n,) array, then how do we know which one they wanted?

 Example case 2: matrix multiplication, as you know :-), is a case
 where I do think we should allow for a bit more cleverness with the
 core dimensions... but the appropriate cleverness is much more subtle
 than just prepend size 1 dimensions until things fit. Instead, for
 the first argument you need to prepend, for the second argument you
 need to append, and then you need to remove the corresponding
 dimensions from the output. Specific cases:

 # Your version gives:
 matmul([1, 1, 1], [[1], [1], [1]]).shape == (1, 1)
 # But this should be (1,) (try it with np.dot)

 # Your version gives:
 matmul([[1, 1, 1]], [1, 1, 1]) -> error, (1, 3) and (1, 3) are not
 conformable
 # But this should work (second argument should be treated as (3, 1), not
 (1, 3))

 So the default should be to be strict about core dimensions, unless
 explicitly requested otherwise by the person defining the gufunc.

  Lastly, there is an interesting side effect of the way this broadcasting
 is
  handled: if a gufunc specifies a core dimension in an output argument
 only,
  and an `out` kwarg is not passed in, then the output array will have that
  core dimension set to be of size 1, e.g. if the signature of `f` is
  '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to
 me,
  and I think that an unspecified dimension in an output 

Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Nathaniel Smith
On Tue, Sep 16, 2014 at 8:31 PM, Jaime Fernández del Río
jaime.f...@gmail.com wrote:
 On Tue, Sep 16, 2014 at 4:32 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 6:56 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  Are we OK with the appending of size 1 dimensions to complete the core
  dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]])
  work, or
  should it complain about the first argument having less dimensions than
  the
  core dimensions in the signature?

 I think that by default, gufuncs should definitely *not* allow this.

 Too late! ;-)

 I just put together some working code and sent a PR implementing the
 behavior that Charles asked for:

 https://github.com/numpy/numpy/pull/5077

 Should we keep the discussion here, or take it over there?

I guess the default is, design discussions here where people can chime
in, code finickiness over there to avoid boring people? So that would
suggest keeping the discussion here until we've resolved the
high-level debate about what the behaviour should even be. But it
isn't a huge issue either way...

-- 
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug?

2014-09-16 Thread Jaime Fernández del Río
On Tue, Sep 16, 2014 at 4:32 PM, Nathaniel Smith n...@pobox.com wrote:

 On Tue, Sep 16, 2014 at 6:56 PM, Jaime Fernández del Río
 jaime.f...@gmail.com wrote:
  On Tue, Sep 16, 2014 at 3:26 PM, Charles R Harris
  charlesr.har...@gmail.com wrote:
 
  On Tue, Sep 16, 2014 at 2:51 PM, Nathaniel Smith n...@pobox.com wrote:
 
  On Tue, Sep 16, 2014 at 4:31 PM, Jaime Fernández del Río
  jaime.f...@gmail.com wrote:
   If it is a bug, it is an extended one, because it is the same
 behavior
   of
   einsum:
  
   np.einsum('i,i', [1,1,1], [1])
   3
   np.einsum('i,i', [1,1,1], [1,1])
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
   ValueError: operands could not be broadcast together with remapped
   shapes [original->remapped]: (3,)->(3,) (2,)->(2,)
  
   And I think it is a conscious design decision, there is a comment
 about
   broadcasting missing core dimensions here:
  
  
  
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1940
 
  intentional and sensible are not always the same thing :-). That
  said, it isn't totally obvious to me what the correct behaviour for
  einsum is in this case.
 
   and the code makes it very explicit that input argument dimensions
 with
   the
   same label are broadcast to a common shape, see here:
  
  
  
 https://github.com/numpy/numpy/blob/master/numpy/core/src/umath/ufunc_object.c#L1956
  
   I kind of expect numpy to broadcast whenever possible, so this
 doesn't
   feel
   wrong to me.
 
  The case Chuck is talking about is like if we allowed matrix
  multiplication between an array with shape (n, 1) with an array with
  (k, m), because (n, 1) can be broadcast to (n, k). This feels VERY
  wrong to me, will certainly hide many bugs, and is definitely not how
  it works right now (for np.dot, anyway; apparently it does work that
  way for the brand-new gufunc np.linalg.matrix_multiply, but this must
  be an accident).
 
   That said, it is hard to come up with convincing examples of how this
   behavior would be useful in any practical context. But changing
   something
   that has been working like that for so long seems like a risky thing.
   And I
   cannot come with a convincing example of why it would be harmful
   either.
 
  gufuncs are very new.
 
 
  Or at least newly used. They've been sitting around for years with
 little
  use and less testing. This is probably (easily?) fixable as the shape
 of the
  operands is available.
 
  In [22]: [d.shape for d in nditer([[1,1,1], [[1,1,1]]*3]).operands]
  Out[22]: [(3,), (3, 3)]
 
  In [23]: [d.shape for d in nditer([[[1,1,1]], [[1,1,1]]*3]).operands]
  Out[23]: [(1, 3), (3, 3)]
 
 
  If we agree that it is broken, which I still am not fully sure of, then
 yes,
  it is very easy to fix. I have been looking into that code quite a bit
  lately, so I could patch something up pretty quick.
 
  Are we OK with the appending of size 1 dimensions to complete the core
  dimensions? That is, should matrix_multiply([1,1,1], [[1],[1],[1]])
 work, or
  should it complain about the first argument having less dimensions than
 the
  core dimensions in the signature?

 I think that by default, gufuncs should definitely *not* allow this.

 Example case 1: qr can be applied equally well to a (1, n) array or an
 (n, 1) array, but with different results. If the user passes in an
 (n,) array, then how do we know which one they wanted?

 Example case 2: matrix multiplication, as you know :-), is a case
 where I do think we should allow for a bit more cleverness with the
 core dimensions... but the appropriate cleverness is much more subtle
 than just prepend size 1 dimensions until things fit. Instead, for
 the first argument you need to prepend, for the second argument you
 need to append, and then you need to remove the corresponding
 dimensions from the output. Specific cases:

 # Your version gives:
 matmul([1, 1, 1], [[1], [1], [1]]).shape == (1, 1)
 # But this should be (1,) (try it with np.dot)

 # Your version gives:
 matmul([[1, 1, 1]], [1, 1, 1]) -> error, (1, 3) and (1, 3) are not
 conformable
 # But this should work (second argument should be treated as (3, 1), not
 (1, 3))

 So the default should be to be strict about core dimensions, unless
 explicitly requested otherwise by the person defining the gufunc.


#5077 now implements this behavior, which I agree is a sensible thing to
do. And it doesn't seem very likely that anyone (numpy tests aside!) is
expecting the old behavior to hold.



  Lastly, there is an interesting side effect of the way this broadcasting
 is
  handled: if a gufunc specifies a core dimension in an output argument
 only,
  and an `out` kwarg is not passed in, then the output array will have that
  core dimension set to be of size 1, e.g. if the signature of `f` is
  '(),()->(a)', then f(1, 2).shape is (1,). This has always felt funny to
 me,
  and I think that an unspecified dimension in an output array should
 either
  be specified by a passed 

Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Matthieu Brucher
Hi,

It's to be expected. You are overwriting one of your input vectors while it
is still being used.
So not a numpy bug ;)

Matthieu


2013/5/23 Pierre Haessig pierre.haes...@crans.org

 Hi Nicolas,

 On 23/05/2013 15:45, Nicolas Rougier wrote:
  if I use either a or b as output, results are wrong (and nothing in the
 dot documentation prevents me from doing this):
 
  a = np.array([[1, 2], [3, 4]])
  b = np.array([[1, 2], [3, 4]])
  np.dot(a,b,out=a)
 
  -> array([[ 6, 20],
            [15, 46]])
 
 
  Can anyone confirm this behavior ? (tested using numpy 1.7.1)
 I just reproduced the same weird results with numpy 1.6.2

 best,
 Pierre


 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nathaniel Smith
On Thu, May 23, 2013 at 3:19 PM, Matthieu Brucher
matthieu.bruc...@gmail.com wrote:
 Hi,

 It's to be expected. You are overwriting one of your input vectors while it
 is still being used.
 So not a numpy bug ;)

Sure, that's clearly what's going on, but numpy shouldn't let you
silently shoot yourself in the foot like that. Re-using input as
output is a very common operation, and usually supported fine.
Probably we should silently make a copy of any input(s) that overlap
with the output? For high-dimensional dot, buffering temporary
subspaces would still be more memory efficient than anything users
could reasonably accomplish by hand.
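
A minimal sketch of that copy-on-overlap idea (mine, not NumPy's
implementation; np.shares_memory postdates this thread):

    import numpy as np

    def dot_copy_on_overlap(a, b, out=None):
        # Copy any input that shares memory with `out`, so the aliased
        # buffer is never read after it starts being overwritten.
        a = np.asarray(a)
        b = np.asarray(b)
        if out is not None:
            if np.shares_memory(a, out):
                a = a.copy()
            if np.shares_memory(b, out):
                b = b.copy()
        return np.dot(a, b, out=out)

    a = np.array([[1, 2], [3, 4]])
    dot_copy_on_overlap(a, a, out=a)   # array([[ 7, 10], [15, 22]]) -- correct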

-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nicolas Rougier

 
 Sure, that's clearly what's going on, but numpy shouldn't let you
 silently shoot yourself in the foot like that. Re-using input as
 output is a very common operation, and usually supported fine.
 Probably we should silently make a copy of any input(s) that overlap
 with the output? For high-dimensional dot, buffering temporary
 subspaces would still be more memory efficient than anything users
 could reasonably accomplish by hand.



Also, from a user point of view it is difficult to sort out which functions
currently allow 'out=a' or 'out=b', since nothing in the 'dot' documentation
warned me about such a problem.


Nicolas



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Matthieu Brucher
From my point of view, you should never use an output argument equal to an
input argument. It can impede a lot of optimizations.

Matthieu


2013/5/23 Nicolas Rougier nicolas.roug...@inria.fr


 
  Sure, that's clearly what's going on, but numpy shouldn't let you
  silently shoot yourself in the foot like that. Re-using input as
  output is a very common operation, and usually supported fine.
  Probably we should silently make a copy of any input(s) that overlap
  with the output? For high-dimensional dot, buffering temporary
  subspaces would still be more memory efficient than anything users
  could reasonably accomplish by hand.



 Also, from a user point of view it is difficult to sort out which
 functions currently allow 'out=a' or 'out=b', since nothing in the 'dot'
 documentation warned me about such a problem.


 Nicolas



 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion




-- 
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher
Music band: http://liliejay.com/
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nathaniel Smith
On Thu, May 23, 2013 at 3:57 PM, Matthieu Brucher
matthieu.bruc...@gmail.com wrote:
 From my point of view, you should never use an output argument equal to an
 input argument. It can impede a lot of optimizations.

This is a fine philosophy in some cases, but a non-starter in others.
Python doesn't have optimizations in the first place, and in-place
operations are often critical for managing memory usage. '+=' is an
important operator, and in numpy it's just 'np.add(a, b, out=a)' under
the hood.
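
For instance (my illustration), element-wise ufuncs handle an aliased output
routinely:

    import numpy as np

    a = np.arange(4.0)
    b = np.ones(4)
    a += b                # in-place add ...
    np.add(a, b, out=a)   # ... is this aliased ufunc call under the hood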

On Thu, May 23, 2013 at 3:50 PM, Nicolas Rougier
nicolas.roug...@inria.fr wrote:
 Also, from a user point of view it is difficult to sort out which functions
 currently allow 'out=a' or 'out=b', since nothing in the 'dot' documentation
 warned me about such a problem.

That's because AFAIK all functions allow out=a and out=b, except for
those which contain bugs :-).

Can you file a bug in the bug tracker so this won't get lost?

-n
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] np.dot and 'out' bug

2013-05-23 Thread Nicolas Rougier


 Can you file a bug in the bug tracker so this won't get lost?

Done.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-11 Thread Mark Bakker
Shall I file a bug report? Or is this fairly easy to fix?
Mark



 On Fri, Aug 10, 2012 at 11:41 AM, josef.p...@gmail.com wrote:

 
 
  On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant tra...@continuum.io
 wrote:
 
 
  On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote:
 
  
  
   On 10. aug. 2012, at 09:54, Mark Bakker wrote:
  
   I am giving this a second try. Can anybody help me out?
  
   I think there is a problem with assigning a 1D complex array of
 length
  one
   to a position in another complex array.
  
   Example:
  
   a = ones(1,'D')
   b = ones(1,'D')
   a[0] = b
  
 
  ---------------------------------------------------------------------------
   TypeError                                 Traceback (most recent call last)
   <ipython-input-37-0c4fc6d780e3> in <module>()
   ----> 1 a[0] = b

   TypeError: can't convert complex to float
  
   This works correctly when a and b are real arrays:
  
   a = ones(1)
   b = ones(1)
   a[0] = b
  
   Bug or feature?
  
   The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy
 1.6.1.
  
   Looks like a bug to me - or at least very surprising behavior.
 
  This is definitely an inconsistency. The error seems more correct
  (though the error message needs improvement).
 
  Can someone try this on NumPy 1.5 and see if this inconsistency existed
  there as well.
 
 
   np.__version__
  '1.5.1'
 
   a = np.ones(1,'D')
   b = np.ones(1,'D')
   a[0] = b
  Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
 
  TypeError: can't convert complex to float
   a = np.ones(1)
   b = np.ones(1)
   a[0] = b
 

 and

  a = np.ones(1,'D')
  b = 2*np.ones(1)
  a[0] = b
  a
 array([ 2.+0.j])
  c = 3*np.ones(1, int)
  a[0] = c
  a
 array([ 3.+0.j])



 
  Josef
 
 
  Thanks,
 
  -Travis
 
  
   Paul
   ___
   NumPy-Discussion mailing list
   NumPy-Discussion@scipy.org
   http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion
 
 
 

 --

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


 End of NumPy-Discussion Digest, Vol 71, Issue 18
 

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Mark Bakker
I am giving this a second try. Can anybody help me out?


 I think there is a problem with assigning a 1D complex array of length one
 to a position in another complex array.

 Example:

 a = ones(1,'D')
 b = ones(1,'D')
 a[0] = b
 ---------------------------------------------------------------------------
 TypeError                                 Traceback (most recent call last)
 <ipython-input-37-0c4fc6d780e3> in <module>()
 ----> 1 a[0] = b

 TypeError: can't convert complex to float

 This works correctly when a and b are real arrays:

 a = ones(1)
 b = ones(1)
 a[0] = b

 Bug or feature?

 Thanks,

 Mark

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Dave Hirschfeld
Mark Bakker markbak at gmail.com writes:

 
 I think there is a problem with assigning a 1D complex array of length one
 to a position in another complex array.
 Example:
 a = ones(1,'D')
 b = ones(1,'D')
 a[0] = b
 ---------------------------------------------------------------------------
 TypeError                                 Traceback (most recent call last)
 <ipython-input-37-0c4fc6d780e3> in <module>()
 ----> 1 a[0] = b
 TypeError: can't convert complex to float
 This works correctly when a and b are real arrays:
 a = ones(1)
 b = ones(1)
 a[0] = b
 Bug or feature?
 Thanks,
 Mark
 

I can't help unfortunately, but I can confirm that I also see the problem
on Win32 Python 2.7.3, numpy 1.6.2.

As a workaround it appears that slicing works:


In [15]: sys.version
Out[15]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'

In [16]: sys.version
Out[16]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'

In [17]: np.__version__
Out[17]: '1.6.2'

In [18]: a = ones(1,'D')

In [19]: b = 2*ones(1,'D')

In [20]: a[0] = b
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-20-0c4fc6d780e3> in <module>()
----> 1 a[0] = b

TypeError: can't convert complex to float

In [21]: a[0:1] = b

In [22]: 

-Dave

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Fabrice Silva
On Friday 10 August 2012, Dave Hirschfeld wrote:
 Mark Bakker markbak at gmail.com writes:
  I think there is a problem with assigning a 1D complex array of length one
  to a position in another complex array.
  Example:
  a = ones(1,'D')
  b = ones(1,'D')
  a[0] = b
  ---------------------------------------------------------------------------
  TypeError                                 Traceback (most recent call last)
  <ipython-input-37-0c4fc6d780e3> in <module>()
  ----> 1 a[0] = b
  TypeError: can't convert complex to float
  
 
 I can't help unfortunately, but I can confirm that I also see the problem
 on Win32 Python 2.7.3, numpy 1.6.2.
 As a workaround it appears that slicing works:

Same on debian (unstable), Python 2.7, numpy 1.6.2
In [5]: a[0] = b
TypeError: can't convert complex to float
In [6]: a[0] = b[0]

Other workarounds : asscalar and squeeze
In [7]: a[0] = np.asscalar(b)
In [8]: a[0] = b.squeeze()
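
Collecting the workarounds from this thread into one place (my summary;
b.item() is equivalent to np.asscalar(b), which was later deprecated in its
favor):

    import numpy as np

    a = np.ones(1, 'D')
    b = 2 * np.ones(1, 'D')

    # equivalent workarounds for the failing "a[0] = b":
    a[0:1] = b        # assign through a length-1 slice
    a[0] = b[0]       # index the source first
    a[0] = b.item()   # extract a Python scalar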


-- 
Fabrice Silva

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Paul Anton Letnes


On 10. aug. 2012, at 09:54, Mark Bakker wrote:

 I am giving this a second try. Can anybody help me out? 
 
 I think there is a problem with assigning a 1D complex array of length one
 to a position in another complex array.
 
 Example:
 
 a = ones(1,'D')
 b = ones(1,'D')
 a[0] = b
 ---------------------------------------------------------------------------
 TypeError                                 Traceback (most recent call last)
 <ipython-input-37-0c4fc6d780e3> in <module>()
 ----> 1 a[0] = b

 TypeError: can't convert complex to float
 
 This works correctly when a and b are real arrays:
 
 a = ones(1)
 b = ones(1)
 a[0] = b
 
 Bug or feature?

The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1.

Looks like a bug to me - or at least very surprising behavior.

Paul
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread Travis Oliphant

On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote:

 
 
 On 10. aug. 2012, at 09:54, Mark Bakker wrote:
 
 I am giving this a second try. Can anybody help me out? 
 
 I think there is a problem with assigning a 1D complex array of length one
 to a position in another complex array.
 
 Example:
 
 a = ones(1,'D')
 b = ones(1,'D')
 a[0] = b
 ---------------------------------------------------------------------------
 TypeError                                 Traceback (most recent call last)
 <ipython-input-37-0c4fc6d780e3> in <module>()
 ----> 1 a[0] = b

 TypeError: can't convert complex to float
 
 This works correctly when a and b are real arrays:
 
 a = ones(1)
 b = ones(1)
 a[0] = b
 
 Bug or feature?
 
 The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1.
 
 Looks like a bug to me - or at least very surprising behavior.

This is definitely an inconsistency. The error seems more correct (though
the error message needs improvement).

Can someone try this on NumPy 1.5 and see if this inconsistency existed there 
as well. 

Thanks,

-Travis

 
 Paul
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread josef . pktd
On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant tra...@continuum.iowrote:


 On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote:

 
 
  On 10. aug. 2012, at 09:54, Mark Bakker wrote:
 
  I am giving this a second try. Can anybody help me out?
 
  I think there is a problem with assigning a 1D complex array of length
 one
  to a position in another complex array.
 
  Example:
 
  a = ones(1,'D')
  b = ones(1,'D')
  a[0] = b
 
 ---------------------------------------------------------------------------
  TypeError                                 Traceback (most recent call last)
  <ipython-input-37-0c4fc6d780e3> in <module>()
  ----> 1 a[0] = b

  TypeError: can't convert complex to float
 
  This works correctly when a and b are real arrays:
 
  a = ones(1)
  b = ones(1)
  a[0] = b
 
  Bug or feature?
 
  The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1.
 
  Looks like a bug to me - or at least very surprising behavior.

 This is definitely an inconsistency. The error seems more correct
 (though the error message needs improvement).

 Can someone try this on NumPy 1.5 and see if this inconsistency existed
 there as well.


 np.__version__
'1.5.1'

 a = np.ones(1,'D')
 b = np.ones(1,'D')
 a[0] = b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't convert complex to float
 a = np.ones(1)
 b = np.ones(1)
 a[0] = b

Josef


 Thanks,

 -Travis

 
  Paul
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Second try: possible bug in assignment to complex array

2012-08-10 Thread josef . pktd
On Fri, Aug 10, 2012 at 11:41 AM, josef.p...@gmail.com wrote:



 On Fri, Aug 10, 2012 at 10:00 AM, Travis Oliphant tra...@continuum.iowrote:


 On Aug 10, 2012, at 5:37 AM, Paul Anton Letnes wrote:

 
 
  On 10. aug. 2012, at 09:54, Mark Bakker wrote:
 
  I am giving this a second try. Can anybody help me out?
 
  I think there is a problem with assigning a 1D complex array of length
 one
  to a position in another complex array.
 
  Example:
 
  a = ones(1,'D')
  b = ones(1,'D')
  a[0] = b
 
 ---------------------------------------------------------------------------
  TypeError                                 Traceback (most recent call last)
  <ipython-input-37-0c4fc6d780e3> in <module>()
  ----> 1 a[0] = b

  TypeError: can't convert complex to float
 
  This works correctly when a and b are real arrays:
 
  a = ones(1)
  b = ones(1)
  a[0] = b
 
  Bug or feature?
 
  The exact same thing happens on OS X 10.7.4, python 2.7.3, numpy 1.6.1.
 
  Looks like a bug to me - or at least very surprising behavior.

 This is definitely an inconsistency. The error seems more correct
 (though the error message needs improvement).

 Can someone try this on NumPy 1.5 and see if this inconsistency existed
 there as well.


  np.__version__
 '1.5.1'

  a = np.ones(1,'D')
  b = np.ones(1,'D')
  a[0] = b
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>

 TypeError: can't convert complex to float
  a = np.ones(1)
  b = np.ones(1)
  a[0] = b


and

 a = np.ones(1,'D')
 b = 2*np.ones(1)
 a[0] = b
 a
array([ 2.+0.j])
 c = 3*np.ones(1, int)
 a[0] = c
 a
array([ 3.+0.j])




 Josef


 Thanks,

 -Travis

 
  Paul
  ___
  NumPy-Discussion mailing list
  NumPy-Discussion@scipy.org
  http://mail.scipy.org/mailman/listinfo/numpy-discussion

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Strange numpy behaviour (bug?)

2012-01-17 Thread Sturla Molden

While playing with a point-in-polygon test, I have discovered a
failure mode that I cannot make sense of.

The algorithm is vectorized for NumPy from a C and Python implementation 
I found on the net (see links below). It is written to process a large 
dataset in chunks. I'm rather happy with it, it can test 100,000 x,y 
points against a non-convex pentagon in just 50 ms.

Anyway, here is something very strange (or at least I think so):

If I use a small chunk size, it sometimes fails. I know I shouldn't
blame it on NumPy, because it is in all likelihood my mistake. But it does
not make any sense, as the parameter should not affect the computation.

Observed behavior:

1. Processing the whole dataset in one big chunk always works.

2. Processing the dataset in big chunks (e.g. 8192 points) always works.

3. Processing the dataset in small chunks (e.g. 32 points) sometimes fails.

4. Processing the dataset element-wise always works.

5. The scalar version behaves like the numpy version: fine for large
chunks, sometimes it fails for small ones. That is, when list comprehensions
are used for chunks. Big list comprehensions always work, small ones
might fail.

It looks like the numerical robustness of the algorithm depends on a
parameter that has nothing to do with the algorithm at all. For example
in (5), we might think that calling a function from a nested loop makes 
it fail, depending on the length of the inner loop. But calling it from 
a single loop works just fine.

???

So I wonder:

Could there be a bug in numpy that only shows up when taking a huge
number of short slices?

I don't know... But try it if you care.

In the function inpolygon, change the call that says __chunk(n,8192) 
to e.g. __chunk(n,32) to see it fail (or at least it does on my 
computer, running Enthought 7.2-1 on Win64).


Regards,
Sturla Molden





def __inpolygon_scalar(x,y,poly):

    # Source code taken from:
    # http://paulbourke.net/geometry/insidepoly
    # http://www.ariel.com.au/a/python-point-int-poly.html

    n = len(poly)
    inside = False
    p1x,p1y = poly[0]
    xinters = 0
    for i in range(n+1):
        p2x,p2y = poly[i % n]
        if y > min(p1y,p2y):
            if y <= max(p1y,p2y):
                if x <= max(p1x,p2x):
                    if p1y != p2y:
                        xinters = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
                    if p1x == p2x or x <= xinters:
                        inside = not inside
        p1x,p1y = p2x,p2y
    return inside


# the rest is (C) Sturla Molden, 2012
# University of Oslo

def __inpolygon_numpy(x,y,poly):
    """ numpy vectorized version """
    n = len(poly)
    inside = np.zeros(x.shape[0], dtype=bool)
    xinters = np.zeros(x.shape[0], dtype=float)
    p1x,p1y = poly[0]
    for i in range(n+1):
        p2x,p2y = poly[i % n]
        mask = (y > min(p1y,p2y)) & (y <= max(p1y,p2y)) & (x <= max(p1x,p2x))
        if p1y != p2y:
            xinters[mask] = (y[mask]-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
        if p1x == p2x:
            inside[mask] = ~inside[mask]
        else:
            mask2 = x[mask] <= xinters[mask]
            idx, = np.where(mask)
            idx2, = np.where(mask2)
            idx = idx[idx2]
            inside[idx] = ~inside[idx]
        p1x,p1y = p2x,p2y
    return inside

def __chunk(n,size):
    x = range(0,n,size)
    if (n%size):
        x.append(n)
    return zip(x[:-1],x[1:])

def inpolygon(x, y, poly):
    """
    point-in-polygon test
    x and y are numpy arrays
    polygon is a list of (x,y) vertex tuples
    """
    if np.isscalar(x) and np.isscalar(y):
        return __inpolygon_scalar(x, y, poly)
    else:
        x = np.asarray(x)
        y = np.asarray(y)
        n = x.shape[0]
        z = np.zeros(n, dtype=bool)
        for i,j in __chunk(n,8192): # COMPARE WITH __chunk(n,32) ???
            if j-i > 1:
                z[i:j] = __inpolygon_numpy(x[i:j], y[i:j], poly)
            else:
                z[i] = __inpolygon_scalar(x[i], y[i], poly)
        return z



if __name__ == "__main__":

    import matplotlib
    import matplotlib.pyplot as plt
    from time import clock

    n = 10
    polygon = [(0.,.1), (1.,.1), (.5,1.), (0.,.75), (.5,.5), (0.,.1)]
    xp = [x for x,y in polygon]
    yp = [y for x,y in polygon]
    x = np.random.rand(n)
    y = np.random.rand(n)
    t0 = clock()
    inside = inpolygon(x,y,polygon)
    t1 = clock()
    print 'elapsed time %.3g ms' % ((t1-t0)*1E3,)
    plt.figure()
    plt.plot(x[~inside],y[~inside],'ob', xp, yp, '-g')
    plt.axis([0,1,0,1])
    plt.show()









___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Strange numpy behaviour (bug?)

2012-01-17 Thread Sturla Molden

Never mind this, it was my own mistake as I expected :-)

def __chunk(n,size):
 x = range(0,n,size)
 x.append(n)
 return zip(x[:-1],x[1:])

makes it a lot better :)
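
That also explains the pattern above: the old __chunk silently dropped its
final chunk whenever size divided n evenly, so the tail of the array was
never tested and kept its initial False value -- nothing numerical about it.
A toy demonstration (my illustration; runs on Python 2 or 3):

    def chunk_old(n, size):
        x = list(range(0, n, size))
        if n % size:
            x.append(n)
        return list(zip(x[:-1], x[1:]))

    def chunk_fixed(n, size):
        x = list(range(0, n, size))
        x.append(n)
        return list(zip(x[:-1], x[1:]))

    chunk_old(64, 32)     # [(0, 32)] -- the last chunk [32, 64) vanishes
    chunk_fixed(64, 32)   # [(0, 32), (32, 64)]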

Sturla



Den 18.01.2012 06:26, skrev Sturla Molden:
 While playing with a point-in-polygon test, I have discovered a
 failure mode that I cannot make sense of.

 The algorithm is vectorized for NumPy from a C and Python implementation
 I found on the net (see links below). It is written to process a large
 dataset in chunks. I'm rather happy with it, it can test 100,000 x,y
 points against a non-convex pentagon in just 50 ms.

 Anyway, here is something very strange (or at least I think so):

 If I use a small chunk size, it sometimes fails. I know I shouldn't
 blame it on NumPy, because it is in all likelihood my mistake. But it does
 not make any sense, as the parameter should not affect the computation.

 Observed behavior:

 1. Processing the whole dataset in one big chunk always works.

 2. Processing the dataset in big chunks (e.g. 8192 points) always works.

 3. Processing the dataset in small chunks (e.g. 32 points) sometimes fails.

 4. Processing the dataset element-wise always works.

 5. The scalar version behaves like the numpy version: fine for large
 chunks, sometimes it fails for small ones. That is, when list comprehensions
 are used for chunks. Big list comprehensions always work, small ones
 might fail.

 It looks like the numerical robustness of the algorithm depends on a
 parameter that has nothing to do with the algorithm at all. For example
 in (5), we might think that calling a function from a nested loop makes
 it fail, depending on the length of the inner loop. But calling it from
 a single loop works just fine.

 ???

 So I wonder:

 Could there be a bug in numpy that only shows up when taking a huge
 number of short slices?

 I don't know... But try it if you care.

 In the function inpolygon, change the call that says __chunk(n,8192)
 to e.g. __chunk(n,32) to see it fail (or at least it does on my
 computer, running Enthought 7.2-1 on Win64).


 Regards,
 Sturla Molden





 def __inpolygon_scalar(x,y,poly):

     # Source code taken from:
     # http://paulbourke.net/geometry/insidepoly
     # http://www.ariel.com.au/a/python-point-int-poly.html

     n = len(poly)
     inside = False
     p1x,p1y = poly[0]
     xinters = 0
     for i in range(n+1):
         p2x,p2y = poly[i % n]
         if y > min(p1y,p2y):
             if y <= max(p1y,p2y):
                 if x <= max(p1x,p2x):
                     if p1y != p2y:
                         xinters = (y-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
                     if p1x == p2x or x <= xinters:
                         inside = not inside
         p1x,p1y = p2x,p2y
     return inside


 # the rest is (C) Sturla Molden, 2012
 # University of Oslo

 def __inpolygon_numpy(x,y,poly):
     """ numpy vectorized version """
     n = len(poly)
     inside = np.zeros(x.shape[0], dtype=bool)
     xinters = np.zeros(x.shape[0], dtype=float)
     p1x,p1y = poly[0]
     for i in range(n+1):
         p2x,p2y = poly[i % n]
         mask = (y > min(p1y,p2y)) & (y <= max(p1y,p2y)) & (x <= max(p1x,p2x))
         if p1y != p2y:
             xinters[mask] = (y[mask]-p1y)*(p2x-p1x)/(p2y-p1y)+p1x
         if p1x == p2x:
             inside[mask] = ~inside[mask]
         else:
             mask2 = x[mask] <= xinters[mask]
             idx, = np.where(mask)
             idx2, = np.where(mask2)
             idx = idx[idx2]
             inside[idx] = ~inside[idx]
         p1x,p1y = p2x,p2y
     return inside

 def __chunk(n,size):
     x = range(0,n,size)
     if (n%size):
         x.append(n)
     return zip(x[:-1],x[1:])

 def inpolygon(x, y, poly):
     """
     point-in-polygon test
     x and y are numpy arrays
     polygon is a list of (x,y) vertex tuples
     """
     if np.isscalar(x) and np.isscalar(y):
         return __inpolygon_scalar(x, y, poly)
     else:
         x = np.asarray(x)
         y = np.asarray(y)
         n = x.shape[0]
         z = np.zeros(n, dtype=bool)
         for i,j in __chunk(n,8192): # COMPARE WITH __chunk(n,32) ???
             if j-i > 1:
                 z[i:j] = __inpolygon_numpy(x[i:j], y[i:j], poly)
             else:
                 z[i] = __inpolygon_scalar(x[i], y[i], poly)
         return z



 if __name__ == "__main__":

     import matplotlib
     import matplotlib.pyplot as plt
     from time import clock

     n = 10
     polygon = [(0.,.1), (1.,.1), (.5,1.), (0.,.75), (.5,.5), (0.,.1)]
     xp = [x for x,y in polygon]
     yp = [y for x,y in polygon]
     x = np.random.rand(n)
     y = np.random.rand(n)
     t0 = clock()
     inside = inpolygon(x,y,polygon)
     t1 = clock()
     print 'elapsed time %.3g ms' % ((t1-t0)*1E3,)
     plt.figure()
     plt.plot(x[~inside],y[~inside],'ob', xp, yp, '-g')
   

[Numpy-discussion] numpy log2 has bug

2011-03-23 Thread Dmitrey
  from numpy import log2, __version__
  log2(2**63)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: log2
  __version__
  '2.0.0.dev-1fe8136'
  (doesn't work with 1.3.0 as well)
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy log2 has bug

2011-03-23 Thread josef . pktd
2011/3/23 Dmitrey tm...@ukr.net:
 from numpy import log2, __version__

 log2(2**63)
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
 AttributeError: log2
 __version__
 '2.0.0.dev-1fe8136'
 (doesn't work with 1.3.0 as well)

 np.array([2**63])
array([9223372036854775808], dtype=object)

 log2(2.**63)
62.993
 log2(2**63)
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    log2(2**63)
AttributeError: log2

integer conversion problem

Josef




 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] numpy log2 has bug

2011-03-23 Thread Robert Kern
On Wed, Mar 23, 2011 at 13:51,  josef.p...@gmail.com wrote:
 2011/3/23 Dmitrey tm...@ukr.net:
  >>> from numpy import log2, __version__
  >>> log2(2**63)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  AttributeError: log2
  >>> __version__
  '2.0.0.dev-1fe8136'
  (doesn't work with 1.3.0 as well)

 >>> np.array([2**63])
 array([9223372036854775808], dtype=object)

 >>> log2(2.**63)
 62.999999999999993
 >>> log2(2**63)
 Traceback (most recent call last):
   File "<pyshell#9>", line 1, in <module>
     log2(2**63)
 AttributeError: log2

 integer conversion problem

Right. numpy cannot safely convert a long object of that size to a
dtype it knows about, so it leaves it as an object array. Most ufuncs
operate on object arrays by looking for a method on each element with
the name of the ufunc. So

  np.log2(np.array([x], dtype=object))

will look for x.log2().
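
A quick way to see that dispatch in action -- a sketch, where Log2Aware
is a made-up class for illustration, not anything that exists in numpy:

>>> import math
>>> import numpy as np
>>> class Log2Aware(object):
...     def __init__(self, value):
...         self.value = value
...     def log2(self):
...         # np.log2 on an object array calls this method per element
...         return math.log(self.value, 2)
...
>>> np.log2(np.array([Log2Aware(2**63)], dtype=object))
array([63.0], dtype=object)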

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-16 Thread Mark Sienkiewicz

 In that case, would you agree that it is a bug for
 assert_array_almost_equal to use repr() to display the arrays, since it
 is printing identical values and saying they are different?  Or is there
 also a reason to do that?

 It should probably use np.array_repr(x, precision=16)


Ok, thanks - I see the issue.  I'll enter a ticket for an enhancement 
request for assert_array_almost_equal

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is this a bug in repr ?

2011-03-15 Thread Mark Sienkiewicz
The usual expectation is that (when possible) repr() returns a value 
that you can eval() to get the original data back.  But,

>>> from numpy import *
>>> a = array( [  16.5069863163822 ] )
>>> b = eval(repr(a))
>>> a-b
array([ -3.6111e-09])
>>> import numpy.testing
>>> numpy.testing.assert_array_almost_equal(a,b,decimal=15)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/stsci/pyssgdev/2.7/numpy/testing/utils.py", line 775, in
assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/stsci/pyssgdev/2.7/numpy/testing/utils.py", line 618, in
assert_array_compare
    raise AssertionError(msg)
AssertionError:
Arrays are not almost equal to 15 decimals

(mismatch 100.0%)
 x: array([ 16.50698632])
 y: array([ 16.50698632])

I noticed this because a bunch of tests failed exactly this way.  Of 
course, the problem is that assert_array_almost_equal does not print 
with the precision that it compared, which in turn happens because it 
just uses repr() to convert the array.

I would expect that repr would print the values at least to the 
resolution that they are stored, so I think this is a bug.

This happens with the current trunk of numpy in python 2.7 on Red Hat 
Enterprise linux in 32 and 64 bits, and on Macintosh Leopard in 32 
bits.  I did not try any other configuration.

Mark

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-15 Thread Charles R Harris
On Tue, Mar 15, 2011 at 10:20 AM, Mark Sienkiewicz sienk...@stsci.edu wrote:

 The usual expectation is that (when possible) repr() returns a value
 that you can eval() to get the original data back.  But,

  >>> from numpy import *
  >>> a = array( [  16.5069863163822 ] )
  >>> b = eval(repr(a))
  >>> a-b
 array([ -3.6111e-09])
  >>> import numpy.testing
  >>> numpy.testing.assert_array_almost_equal(a,b,decimal=15)
 Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/stsci/pyssgdev/2.7/numpy/testing/utils.py", line 775, in
 assert_array_almost_equal
    header=('Arrays are not almost equal to %d decimals' % decimal))
  File "/usr/stsci/pyssgdev/2.7/numpy/testing/utils.py", line 618, in
 assert_array_compare
    raise AssertionError(msg)
 AssertionError:
 Arrays are not almost equal to 15 decimals

 (mismatch 100.0%)
  x: array([ 16.50698632])
  y: array([ 16.50698632])

 I noticed this because a bunch of tests failed exactly this way.  Of
 course, the problem is that assert_array_almost_equal does not print
 with the precision that it compared, which in turn happens because it
 just uses repr() to convert the array.

 I would expect that repr would print the values at least to the
 resolution that they are stored, so I think this is a bug.

 This happens with the current trunk of numpy in python 2.7 on Red Hat
 Enterprise linux in 32 and 64 bits, and on Macintosh Leopard in 32
 bits.  I did not try any other configuration.


Yes, I think it is a bug. IIRC, it also shows up for object arrays.

Chuck
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-15 Thread Robert Kern
On Tue, Mar 15, 2011 at 12:39, Charles R Harris
charlesr.har...@gmail.com wrote:

 Yes, I think it is a bug. IIRC, it also shows up for object arrays.

It's extremely long-standing, documented, intentional behavior dating
back to Numeric.

[~]
|1 import Numeric

[~]
|2 a = Numeric.array( [  16.5069863163822 ] )

[~]
|3 print repr(a)
array([ 16.50698632])


You can disagree with the feature, but it's not a bug.

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-15 Thread Mark Sienkiewicz
Robert Kern wrote:
 On Tue, Mar 15, 2011 at 12:39, Charles R Harris
 charlesr.har...@gmail.com wrote:

   
 Yes, I think it is a bug. IIRC, it also shows up for object arrays.
 

 It's extremely long-standing, documented, intentional behavior dating
 back to Numeric.

 [~]
 |1 import Numeric

 [~]
 |2 a = Numeric.array( [  16.5069863163822 ] )

 [~]
 |3 print repr(a)
 array([ 16.50698632])


 You can disagree with the feature, but it's not a bug.
   

So it is needed to maintain backward compatibility?  (Still?)

In that case, would you agree that it is a bug for 
assert_array_almost_equal to use repr() to display the arrays, since it 
is printing identical values and saying they are different?  Or is there 
also a reason to do that?

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in repr ?

2011-03-15 Thread Robert Kern
On Tue, Mar 15, 2011 at 13:10, Mark Sienkiewicz sienk...@stsci.edu wrote:
 Robert Kern wrote:
 On Tue, Mar 15, 2011 at 12:39, Charles R Harris
 charlesr.har...@gmail.com wrote:


 Yes, I think it is a bug. IIRC, it also shows up for object arrays.


 It's extremely long-standing, documented, intentional behavior dating
 back to Numeric.

 [~]
 |1 import Numeric

 [~]
 |2 a = Numeric.array( [  16.5069863163822 ] )

 [~]
 |3 print repr(a)
 array([ 16.50698632])


 You can disagree with the feature, but it's not a bug.


 So it is needed to maintain backward compatibility?  (Still?)

No, it's an intentional feature, and the reasons for it then still
pertain now. It's a pragmatic compromise to display useful amounts of
information at the interactive prompt without overwhelming the user
with what is usually unimportant detail that would take up excessive
space. We also elide elements using ... when there are too many
elements to display in a reasonable amount of time for the same
reason. You can control these settings using np.set_printoptions().

 In that case, would you agree that it is a bug for
 assert_array_almost_equal to use repr() to display the arrays, since it
 is printing identical values and saying they are different?  Or is there
 also a reason to do that?

It should probably use np.array_repr(x, precision=16)
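
Both are easy to check at the prompt (digit counts below assume numpy's
default print precision of 8):

>>> import numpy as np
>>> a = np.array([16.5069863163822])
>>> repr(a)                            # default repr: 8 digits
'array([ 16.50698632])'
>>> np.array_repr(a, precision=16)     # enough digits to round-trip
'array([ 16.5069863163822])'
>>> np.set_printoptions(precision=16)  # or raise the global default
>>> a
array([ 16.5069863163822])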

-- 
Robert Kern

I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth.
  -- Umberto Eco
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] DataArray usage question + bug?

2010-08-23 Thread Skipper Seabold
I have some new typical data that I'm trying out DataArray with, and
I'm trying to get my head around it again.  Is this the best way to
hold data organized by, say, household and time (week number)?  I
guess what I'm asking is: do I understand the concept of axes and ticks
correctly?  It seems to be what I want, but my earlier understanding
of how I would go about this was a bit different.

import numpy as np
from datarray.datarray import DataArray

ddd = np.empty((52*5,5))

# Household ID
ddd[:,0] = np.repeat([1, 2, 3, 4, 5], 52)  # five (placeholder) household IDs

# Week Number
ddd[:,1] = np.tile(range(1,53), 5)

# Some Categorical Variable
ddd[:,2] = np.tile(range(1,6), 52)

# Some Variable
ddd[:,3] = np.random.uniform(0,5.0, size=52*5)

# Some Other Variable
ddd[:,4] = np.random.uniform(-5,0., size=52*5)

# Create axes labels and ticks
hhold_ax = 'households', np.unique(ddd[:,0]).tolist()
time_ax = 'week', np.unique(ddd[:,1]).tolist()
var_ax = 'variables', ['some_dummy', 'some_var', 'other_var']

darr = DataArray(ddd[:,2:].reshape(5,52,-1), [hhold_ax, time_ax, var_ax])

It might be nice to have some convenience functions that will do the
reshape on the original 2d data if they don't already exist, so
end-users don't have to think about it.
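
Something along these lines, say -- panel_to_3d is purely hypothetical,
not an existing datarray function:

import numpy as np

def panel_to_3d(flat, n_groups, value_cols):
    # Hypothetical convenience: reshape stacked 2d panel rows of
    # (group id, period, values...) into a (groups, periods, variables)
    # block, ready to wrap in a DataArray.
    n_periods = flat.shape[0] // n_groups
    return flat[:, value_cols].reshape(n_groups, n_periods, -1)

# e.g. panel_to_3d(ddd, 5, slice(2, None)) gives the same (5, 52, 3)
# block as ddd[:,2:].reshape(5,52,-1) above.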

As for the bug report: if I don't tolist() the ticks above, there is
an error.  I can file a bug report if it's warranted.

<snip>
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)

/home/skipper/school/RA/<ipython console> in <module>()

/usr/local/lib/python2.6/dist-packages/datarray/datarray.pyc in
__new__(cls, data, labels, dtype, copy)
    604                 axes.append(Axis(label, i, arr, ticks=ticks))
    605
--> 606         _set_axes(arr, axes)
    607
    608         # validate the axes

/usr/local/lib/python2.6/dist-packages/datarray/datarray.pyc in
_set_axes(dest, in_axes)
    475     # Create the containers for various axis-related info
    476     for ax in in_axes:
--> 477         new_ax = ax._copy(parent_arr=dest)
    478         axes.append(new_ax)
    479         if hasattr(ax_holder, ax.name):

/usr/local/lib/python2.6/dist-packages/datarray/datarray.pyc in
_copy(self, **kwargs)
    113         ticks = kwargs.pop('ticks', copy.copy(self.ticks))
    114         ax.ticks = ticks
--> 115         if ticks and len(ticks) != len(self.ticks):
    116             ax._tick_dict = dict( zip(ticks, xrange( len(ticks) )) )
    117         else:

ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()
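
(The failure is the generic ndarray truth-value ambiguity, tripped by the
`if ticks and ...` test in _copy above when ticks is an ndarray rather
than a list -- a minimal reproduction:)

>>> import numpy as np
>>> ticks = np.array([1.0, 2.0, 3.0])
>>> bool(ticks.tolist())   # a list has an unambiguous truth value
True
>>> bool(ticks)            # a multi-element ndarray does not
Traceback (most recent call last):
  ...
ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()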

Skipper
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] DataArray usage question + bug?

2010-08-23 Thread Keith Goodman
On Mon, Aug 23, 2010 at 3:28 PM, Skipper Seabold jsseab...@gmail.com wrote:
 hhold_ax = 'households', np.unique(ddd[:,0]).tolist()
snip
 As for the bug report.  If I don't tolist() the ticks above there is
 an error.  I can file a bug report if it's warranted.

If you add it to the tracker
(http://github.com/fperez/datarray/issues) then I am sure it will get
discussed at the next sprint.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] ma.std(ddof=1) bug?

2010-04-28 Thread Pierre GM
On Apr 23, 2010, at 12:45 PM, josef.p...@gmail.com wrote:
 Is there a reason why ma.std(ddof=1) does not calculate the std if
 there are 2 valid values?


Bug! Good call... Should be fixed in SVN r8370.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug, and if so, who's?

2010-04-25 Thread Travis Oliphant

On Apr 21, 2010, at 10:47 AM, Ken Basye wrote:

 Folks,
   Apologies for asking here, but I ran across this problem yesterday
 and probably need to file a bug.  The problem is I don't know if  
 this is
 a Numpy bug, a Python bug, or both.  Here's an illustration, platform
 information follows.

It's a bug in your implementation of class A.

The __float__ method is supposed to return a Python float.  In A,
you are just returning the 'y' field, which is initialized to whatever
you passed in (which in the last example is an np.float64).  An
np.float64 is a subclass of Python's float, but it is not Python's
float.

So, the explicit conversion is proper.


-Travis


   TIA,
   Ken


 #
 import collections
 import numpy as np

 class A (collections.namedtuple('ANT', ('x', 'y'))):
     def __float__(self):
         return self.y

 # Same as A, but explicitly convert y to a float in __float__()  -  
 this
 works around the assert fail
 class B (collections.namedtuple('BNT', ('x', 'y'))):
     def __float__(self):
         return float(self.y)

 a0 = A(1.0, 2.0)
 f0 = np.float64(a0)
 print f0

 a1 = A(float(1.0), float(2.0))
 f1 = np.float64(a1)
 print f1

 b1 = B(np.float64(1.0), np.float64(2.0))
 f2 = np.float64(b1)
 print f2

 a2 = A(np.float64(1.0), np.float64(2.0))
 # On some platforms, the next line will trigger an
 assert:

 # python: Objects/floatobject.c:1674: float_subtype_new: Assertion
 `((((PyObject*)(tmp))->ob_type) == &PyFloat_Type)' failed.
 f3 = np.float64(a2)
 print f3
 #

 Platform info:

 Python 2.6.5 (r265:79063, Apr 14 2010, 13:32:56)
 [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2

 >>> numpy.__version__
 '1.3.0'

 ~--$ uname -srvmpio
 Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64
 x86_64 GNU/Linux

 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

--
Travis Oliphant
Enthought Inc.
1-512-536-1057
http://www.enthought.com
oliph...@enthought.com





___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] ma.std(ddof=1) bug?

2010-04-25 Thread josef . pktd
Is there a reason why ma.std(ddof=1) does not calculate the std if
there are 2 valid values?

example
nan = np.nan
x1 = np.array([[9.0, 3.0, nan, nan, 9.0, nan],
  [1.0, 1.0, 1.0, nan, nan, nan],
  [2.0, 2.0, 0.01, nan, 1.0, nan],
  [3.0, 9.0, 2.0, nan, nan, nan],
  [4.0, 4.0, 3.0, 9.0, 2.0, nan],
  [5.0, 5.0, 4.0, 4.0, nan, nan]])

>>> print np.ma.fix_invalid(x1).std(0, ddof=0)
[2.58198889747 2.58198889747 1.41138796934 2.5 3.55902608401 --]

>>> print np.ma.fix_invalid(x1).std(0, ddof=1)
[2.82842712475 2.82842712475 1.57797972104 -- 4.35889894354 --] #
invalid column 3

scipy stats (bias=True is default):
>>> print stats.nanstd(x1,0)
[ 2.82842712  2.82842712  1.57797972  3.53553391  4.35889894 NaN]

numpy with valid values:
>>> np.array((9,4.)).std(ddof=1)
3.5355339059327378

Josef
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is this a bug, and if so, who's?

2010-04-21 Thread Ken Basye
Folks,
   Apologies for asking here, but I ran across this problem yesterday 
and probably need to file a bug.  The problem is I don't know if this is 
a Numpy bug, a Python bug, or both.  Here's an illustration, platform 
information follows.
   TIA,
   Ken


#
import collections
import numpy as np

class A (collections.namedtuple('ANT', ('x', 'y'))):
    def __float__(self):
        return self.y

# Same as A, but explicitly convert y to a float in __float__()  - this 
works around the assert fail
class B (collections.namedtuple('BNT', ('x', 'y'))):
    def __float__(self):
        return float(self.y)

a0 = A(1.0, 2.0)
f0 = np.float64(a0)
print f0

a1 = A(float(1.0), float(2.0))
f1 = np.float64(a1)
print f1

b1 = B(np.float64(1.0), np.float64(2.0))
f2 = np.float64(b1)
print f2

a2 = A(np.float64(1.0), np.float64(2.0))
# On some platforms, the next line will trigger an assert:

# python: Objects/floatobject.c:1674: float_subtype_new: Assertion 
`((((PyObject*)(tmp))->ob_type) == &PyFloat_Type)' failed.
f3 = np.float64(a2)
print f3
#

Platform info:

Python 2.6.5 (r265:79063, Apr 14 2010, 13:32:56)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2

>>> numpy.__version__
'1.3.0'

~--$ uname -srvmpio
Linux 2.6.18-164.el5 #1 SMP Thu Sep 3 03:28:30 EDT 2009 x86_64 x86_64 
x86_64 GNU/Linux

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread Bruce Southey

On 03/08/2010 01:30 AM, David Goldsmith wrote:
On Sun, Mar 7, 2010 at 4:41 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:

2010/3/5 Pierre GM pgmdevl...@gmail.com:
 'm'fraid no. I gonna have to investigate that. Please open a
ticket with a self-contained example that reproduces the issue.
 Thx in advance...
 P.

I would like to stress the fact that imo this is maybe not ticket
and not a bug.

The issue arises when calling a.max() or similar of empty arrays
a, i.e., with:

 0 in a.shape
True

Opposed to the .prod() of an empty array, such a .max() or .min()
cannot be defined, because the set is empty.  So it's fully correct to
let such calls fail.  Just the failure is a bit deep in numpy, and
only the traceback gives some hint what went wrong.

I posted something similar also on the matplotlib-users list, sorry
for cross-posting thus.


Any suggestions, then, how to go about figuring out what's happening 
in my code that's causing this feature to manifest itself?


DG


Perhaps providing the code with specific versions of Python, numpy etc. 
would help.


I would guess that aquarius_test.py has not correctly set up the 
necessary inputs (or has invalid inputs) required by matplotlib (which I 
have no knowledge about). Really you have to find whether the _A in cm.py 
used by 'self.norm.autoscale_None(self._A)' is valid. You may be missing 
a valid initialization step, because the TypeError exception in 
autoscale_None ('You must first set_array for mappable') implies 
something needs to be done first.


Bruce





___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread David Goldsmith
On Mon, Mar 8, 2010 at 6:52 AM, Bruce Southey bsout...@gmail.com wrote:

  On 03/08/2010 01:30 AM, David Goldsmith wrote:

 On Sun, Mar 7, 2010 at 4:41 AM, Friedrich Romstedt 
 friedrichromst...@gmail.com wrote:

 I would like to stress the fact that imo this is maybe not ticket and not
 a bug.

 The issue arises when calling a.max() or similar of empty arrays a, i.e.,
 with:

  0 in a.shape
 True

 Opposed to the .prod() of an empty array, such a .max() or .min()
 cannot be defined, because the set is empty.  So it's fully correct to
 let such calls fail.  Just the failure is a bit deep in numpy, and
 only the traceback gives some hint what went wrong.

 I posted something similar also on the matplotlib-users list, sorry
 for cross-posting thus.


 Any suggestions, then, how to go about figuring out what's happening in my
 code that's causing this feature to manifest itself?

 DG

 Perhaps providing the code with specific versions of Python, numpy etc.
 would help.

 I would guess that aquarius_test.py has not correctly setup the necessary
 inputs (or has invalid inputs) required by matplotlib (which I have no
 knowledge about). Really you have to find if the _A in cmp.py used by
 'self.norm.autoscale_None(self._A)' is valid. You may be missing a valid
 initialization step because the TypeError exception in autoscale_None ('You
 must first set_array for mappable') implies something need to be done first.


 Bruce


Python 2.5.4, Numpy 1.4.0, Matplotlib 0.99.0, Windows 32bit Vista Home
Premium SP2

# Code copyright 2010 by David Goldsmith
# Comments and unnecessaries edited for brevity
import numpy as N
import matplotlib as MPL
from matplotlib import pylab
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from matplotlib.figure import Figure
import matplotlib.cm as cm

J = complex(0,1); tol = 1e-6; maxiter = 20;
roots = (-2 + J/2, -1 + J,  -0.5 + J/2, 0.5 + J,  1 + J/2, 2 + J, 2.5 + J/2,
 -2 - J,   -1 - J/2, -0.5 - J,  0.5 - J/2, 1 - J,  2 - J/2, 2.5 - J)

def ffp(z):
w, wp = (0J, 0J)
for root in roots:
z0 = z - root
w += N.sin(1/z0)
wp -= N.cos(1/z0)/(z0*z0)
return (w, wp)

def iter(z):
w, wp = ffp(z)
return z - w/wp

def find_root(z0):#, k, j):
count = 0
z1 = iter(z0)
if N.isnan(z1):
return N.complex64(N.inf)
    while (N.abs(z1 - z0) > tol) and \
          (count < maxiter):
count += 1
z0 = z1
z1 = iter(z0)
    if N.abs(z1 - z0) > tol:
result = 0
else:
result = z1
return N.complex64(result)

w, h, DPI = (3.2, 2.0, 100)
fig = Figure(figsize=(w, h),
 dpi=DPI,
 frameon=False)
ax = fig.add_subplot(1,1,1)
canvas = FigureCanvas(fig)
nx, xmin, xmax = (int(w*DPI), -0.5, 0.5)
ny, ymin, ymax = (int(h*DPI),  0.6, 1.2)

X, xincr = N.linspace(xmin,xmax,nx,retstep=True)
Y, yincr = N.linspace(ymin,ymax,ny,retstep=True)
W = N.zeros((ny,nx), dtype=N.complex64)

for j in N.arange(nx):
if not (j%100): # report progress
print j
for k in N.arange(ny):
x, y = (X[j], Y[k])
z0 = x + J*y
W[k,j] = find_root(z0)#,k,j)

print N.argwhere(N.logical_not(N.isfinite(W.real)))
print N.argwhere(N.logical_not(N.isfinite(W.imag)))
W = W.T
argW = N.angle(W)
print N.argwhere(N.logical_not(N.isfinite(argW)))
cms = ("Blues",)# "Blues_r", "cool", "cool_r",

def all_ticks_off(ax):
ax.xaxis.set_major_locator(pylab.NullLocator())
ax.yaxis.set_major_locator(pylab.NullLocator())

for cmap_name in cms:
all_ticks_off(ax)
ax.hold(True)
for i in range(4):
for j in range(4):
part2plot = argW[j*ny/4:(j+1)*ny/4, i*nx/4:(i+1)*nx/4]
if N.any(N.logical_not(N.isfinite(part2plot))):
print i, j,
print N.argwhere(N.logical_not(N.isfinite(part2plot)))
extent = (i*nx/4, (i+1)*nx/4, (j+1)*ny/4, j*ny/4)
ax.imshow(part2plot, cmap_name, extent = extent)
ax.set_xlim(0, nx)
ax.set_ylim(0, ny)
canvas.print_figure('../../Data-Figures/Zodiac/Aquarius/'+ cmap_name +
'Aquarius_test.png', dpi=DPI)
# End Aquarius_test.png

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread David Goldsmith
On Mon, Mar 8, 2010 at 10:17 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 On Mon, Mar 8, 2010 at 6:52 AM, Bruce Southey bsout...@gmail.com wrote:

  On 03/08/2010 01:30 AM, David Goldsmith wrote:

 On Sun, Mar 7, 2010 at 4:41 AM, Friedrich Romstedt 
 friedrichromst...@gmail.com wrote:

 I would like to stress the fact that imo this is maybe not ticket and not
 a bug.

 The issue arises when calling a.max() or similar of empty arrays a, i.e.,
 with:

  0 in a.shape
 True

 Opposed to the .prod() of an empty array, such a .max() or .min()
 cannot be defined, because the set is empty.  So it's fully correct to
 let such calls fail.  Just the failure is a bit deep in numpy, and
 only the traceback gives some hint what went wrong.

 I posted something similar also on the matplotlib-users list, sorry
 for cross-posting thus.


 Any suggestions, then, how to go about figuring out what's happening in my
 code that's causing this feature to manifest itself?

 DG

 Perhaps providing the code with specific versions of Python, numpy etc.
 would help.

 I would guess that aquarius_test.py has not correctly setup the necessary
 inputs (or has invalid inputs) required by matplotlib (which I have no
 knowledge about). Really you have to find if the _A in cmp.py used by
 'self.norm.autoscale_None(self._A)' is valid. You may be missing a valid
 initialization step because the TypeError exception in autoscale_None ('You
 must first set_array for mappable') implies something need to be done first.


 Bruce


 Python 2.5.4, Numpy 1.4.0, Matplotlib 0.99.0, Windows 32bit Vista Home
 Premium SP2

 # Code copyright 2010 by David Goldsmith
 # Comments and unnecessaries edited for brevity
 import numpy as N
 import matplotlib as MPL
 from matplotlib import pylab
 from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
 from matplotlib.figure import Figure
 import matplotlib.cm as cm

 J = complex(0,1); tol = 1e-6; maxiter = 20;
 roots = (-2 + J/2, -1 + J,  -0.5 + J/2, 0.5 + J,  1 + J/2, 2 + J, 2.5 +
 J/2,
  -2 - J,   -1 - J/2, -0.5 - J,  0.5 - J/2, 1 - J,  2 - J/2, 2.5 -
 J)

 def ffp(z):
 w, wp = (0J, 0J)
 for root in roots:
 z0 = z - root
 w += N.sin(1/z0)
 wp -= N.cos(1/z0)/(z0*z0)
 return (w, wp)

 def iter(z):
 w, wp = ffp(z)
 return z - w/wp

 def find_root(z0):#, k, j):
 count = 0
 z1 = iter(z0)
 if N.isnan(z1):
 return N.complex64(N.inf)
  while (N.abs(z1 - z0) > tol) and \
     (count < maxiter):
 count += 1
 z0 = z1
 z1 = iter(z0)
  if N.abs(z1 - z0) > tol:
 result = 0
 else:
 result = z1
 return N.complex64(result)

 w, h, DPI = (3.2, 2.0, 100)
 fig = Figure(figsize=(w, h),
  dpi=DPI,
  frameon=False)
 ax = fig.add_subplot(1,1,1)
 canvas = FigureCanvas(fig)
 nx, xmin, xmax = (int(w*DPI), -0.5, 0.5)
 ny, ymin, ymax = (int(h*DPI),  0.6, 1.2)

 X, xincr = N.linspace(xmin,xmax,nx,retstep=True)
 Y, yincr = N.linspace(ymin,ymax,ny,retstep=True)
 W = N.zeros((ny,nx), dtype=N.complex64)

 for j in N.arange(nx):
 if not (j%100): # report progress
 print j
 for k in N.arange(ny):
 x, y = (X[j], Y[k])
 z0 = x + J*y
 W[k,j] = find_root(z0)#,k,j)

 print N.argwhere(N.logical_not(N.isfinite(W.real)))
 print N.argwhere(N.logical_not(N.isfinite(W.imag)))
 W = W.T
 argW = N.angle(W)
 print N.argwhere(N.logical_not(N.isfinite(argW)))
  cms = ("Blues",)# "Blues_r", "cool", "cool_r",

 def all_ticks_off(ax):
 ax.xaxis.set_major_locator(pylab.NullLocator())
 ax.yaxis.set_major_locator(pylab.NullLocator())

 for cmap_name in cms:
 all_ticks_off(ax)
 ax.hold(True)
 for i in range(4):
 for j in range(4):
 part2plot = argW[j*ny/4:(j+1)*ny/4, i*nx/4:(i+1)*nx/4]
 if N.any(N.logical_not(N.isfinite(part2plot))):
 print i, j,
 print N.argwhere(N.logical_not(N.isfinite(part2plot)))
 extent = (i*nx/4, (i+1)*nx/4, (j+1)*ny/4, j*ny/4)

 ax.imshow(part2plot, cmap_name, extent = extent)
 ax.set_xlim(0, nx)
 ax.set_ylim(0, ny)
 canvas.print_figure('../../Data-Figures/Zodiac/Aquarius/'+ cmap_name +
 'Aquarius_test.png', dpi=DPI)
 # End Aquarius_test.png

 DG


Oh, and here's fresh output (i.e., I just reran it to confirm that I'm
still having the problem).
0
100
200
300
[[133 319]]
[]
[]
Traceback (most recent call last):
  File
"C:\Users\Fermat\Documents\Fractals\Python\Source\Zodiac\aquarius_test.py",
line 108, in <module>
    ax.imshow(part2plot, cmap_name, extent = extent)
  File "C:\Python254\lib\site-packages\matplotlib\axes.py", line 6261, in
imshow
    im.autoscale_None()
  File "C:\Python254\lib\site-packages\matplotlib\cm.py", line 236, in
autoscale_None
    self.norm.autoscale_None(self._A)
  File "C:\Python254\lib\site-packages\matplotlib\colors.py", line 792, in
autoscale_None
    if self.vmin is None: self.vmin = ma.minimum(A)
  ...
ValueError: zero-size array to ufunc.reduce without identity

Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread Friedrich Romstedt
It's pretty simple, but I was stunned myself how simple.  Have a look
at line 65 of your script you provided:

W = W.T

This means, x <-> y.  But in the for loops, you still act as if W
wasn't transposed.  I added some prints, the positions should be clear
for you:

argW.shape = (320, 200)
i, j = (0, 0)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 0, 80)
part2plot.shape = (50, 80)
i, j = (0, 1)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 0, 80)
part2plot.shape = (50, 80)
i, j = (0, 2)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 0, 80)
part2plot.shape = (50, 80)
i, j = (0, 3)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 0, 80)
part2plot.shape = (50, 80)
i, j = (1, 0)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 80, 160)
part2plot.shape = (50, 80)
i, j = (1, 1)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 80, 160)
part2plot.shape = (50, 80)
i, j = (1, 2)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 80, 160)
part2plot.shape = (50, 80)
i, j = (1, 3)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 80, 160)
part2plot.shape = (50, 80)
i, j = (2, 0)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 160, 240)
part2plot.shape = (50, 40)
i, j = (2, 1)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 160, 240)
part2plot.shape = (50, 40)
i, j = (2, 2)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 160, 240)
part2plot.shape = (50, 40)
i, j = (2, 3)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 160, 240)
part2plot.shape = (50, 40)
i, j = (3, 0)
j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 240, 320)
part2plot.shape = (50, 0)
Traceback (most recent call last):
  File "D:\Home\Friedrich\Entwicklung\2010\David\aquarius.py", line 91, in ?
    ax.imshow(part2plot, extent = extent)
  File "D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\ax
es.py", line 5471, in imshow
    im.autoscale_None()
  File "D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\cm
.py", line 148, in autoscale_None
    self.norm.autoscale_None(self._A)
  File "D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\co
lors.py", line 682, in autoscale_None
    if self.vmin is None: self.vmin = ma.minimum(A)
  File "D:\Programme\Programmierung\python-2.4.1\lib\site-packages\numpy\ma\core
.py", line 3042, in __call__
    return self.reduce(a)
  File "D:\Programme\Programmierung\python-2.4.1\lib\site-packages\numpy\ma\core
.py", line 3057, in reduce
    t = self.ufunc.reduce(target, **kargs)
ValueError: zero-size array to ufunc.reduce without identity

So you simply have to exchange the roles of x and y in your slice
indexing expression, and everything will work out fine, I suspect :-)

Or simply leave out the transposition?  Note that in that case, you may
also have to change the extent's axes to get it properly reflected.
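
(Keeping the transpose, the working slice is the one Bruce also arrives at
below -- part2plot = argW[j*nx/4:(j+1)*nx/4, i*ny/4:(i+1)*ny/4] -- with
the extent tuple adjusted to match.)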

NB: With my version of matplotlib, it didn't accept the colormap, but
when yours does, it doesn't matter.

Friedrich
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread Bruce Southey

On 03/08/2010 12:17 PM, David Goldsmith wrote:
On Mon, Mar 8, 2010 at 6:52 AM, Bruce Southey bsout...@gmail.com wrote:


On 03/08/2010 01:30 AM, David Goldsmith wrote:

On Sun, Mar 7, 2010 at 4:41 AM, Friedrich Romstedt
friedrichromst...@gmail.com wrote:

I would like to stress the fact that imo this is maybe not
ticket and not a bug.

The issue arises when calling a.max() or similar of empty
arrays a, i.e., with:

 0 in a.shape
True

Opposed to the .prod() of an empty array, such a .max() or .min()
cannot be defined, because the set is empty.  So it's fully
correct to
let such calls fail.  Just the failure is a bit deep in
numpy, and
only the traceback gives some hint what went wrong.

I posted something similar also on the matplotlib-users list,
sorry
for cross-posting thus.


Any suggestions, then, how to go about figuring out what's
happening in my code that's causing this feature to manifest
itself?

DG

Perhaps providing the code with specific versions of Python, numpy
etc. would help.

I would guess that aquarius_test.py has not correctly setup the
necessary inputs (or has invalid inputs) required by matplotlib
(which I have no knowledge about). Really you have to find if the
_A in cmp.py used by 'self.norm.autoscale_None(self._A)' is valid.
You may be missing a valid initialization step because the
TypeError exception in autoscale_None ('You must first set_array
for mappable') implies something need to be done first.

Bruce


Python 2.5.4, Numpy 1.4.0, Matplotlib 0.99.0, Windows 32bit Vista Home 
Premium SP2


# Code copyright 2010 by David Goldsmith
# Comments and unnecessaries edited for brevity
import numpy as N
import matplotlib as MPL
from matplotlib import pylab
from matplotlib.backends.backend_agg import FigureCanvasAgg as 
FigureCanvas

from matplotlib.figure import Figure
import matplotlib.cm as cm

J = complex(0,1); tol = 1e-6; maxiter = 20;
roots = (-2 + J/2, -1 + J,  -0.5 + J/2, 0.5 + J,  1 + J/2, 2 + J, 2.5 
+ J/2,
 -2 - J,   -1 - J/2, -0.5 - J,  0.5 - J/2, 1 - J,  2 - J/2, 
2.5 - J)


def ffp(z):
w, wp = (0J, 0J)
for root in roots:
z0 = z - root
w += N.sin(1/z0)
wp -= N.cos(1/z0)/(z0*z0)
return (w, wp)

def iter(z):
w, wp = ffp(z)
return z - w/wp

def find_root(z0):#, k, j):
count = 0
z1 = iter(z0)
if N.isnan(z1):
return N.complex64(N.inf)
    while (N.abs(z1 - z0) > tol) and \
          (count < maxiter):
count += 1
z0 = z1
z1 = iter(z0)
    if N.abs(z1 - z0) > tol:
result = 0
else:
result = z1
return N.complex64(result)

w, h, DPI = (3.2, 2.0, 100)
fig = Figure(figsize=(w, h),
 dpi=DPI,
 frameon=False)
ax = fig.add_subplot(1,1,1)
canvas = FigureCanvas(fig)
nx, xmin, xmax = (int(w*DPI), -0.5, 0.5)
ny, ymin, ymax = (int(h*DPI),  0.6, 1.2)

X, xincr = N.linspace(xmin,xmax,nx,retstep=True)
Y, yincr = N.linspace(ymin,ymax,ny,retstep=True)
W = N.zeros((ny,nx), dtype=N.complex64)

for j in N.arange(nx):
if not (j%100): # report progress
print j
for k in N.arange(ny):
x, y = (X[j], Y[k])
z0 = x + J*y
W[k,j] = find_root(z0)#,k,j)

print N.argwhere(N.logical_not(N.isfinite(W.real)))
print N.argwhere(N.logical_not(N.isfinite(W.imag)))
W = W.T
argW = N.angle(W)
print N.argwhere(N.logical_not(N.isfinite(argW)))
cms = ("Blues",)# "Blues_r", "cool", "cool_r",

def all_ticks_off(ax):
ax.xaxis.set_major_locator(pylab.NullLocator())
ax.yaxis.set_major_locator(pylab.NullLocator())

for cmap_name in cms:
all_ticks_off(ax)
ax.hold(True)
for i in range(4):
for j in range(4):
part2plot = argW[j*ny/4:(j+1)*ny/4, i*nx/4:(i+1)*nx/4]
if N.any(N.logical_not(N.isfinite(part2plot))):
print i, j,
print N.argwhere(N.logical_not(N.isfinite(part2plot)))
extent = (i*nx/4, (i+1)*nx/4, (j+1)*ny/4, j*ny/4)
ax.imshow(part2plot, cmap_name, extent = extent)
ax.set_xlim(0, nx)
ax.set_ylim(0, ny)
canvas.print_figure('../../Data-Figures/Zodiac/Aquarius/'+ cmap_name +
'Aquarius_test.png', dpi=DPI)
# End Aquarius_test.png

DG


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
   

Hmm,
Appears that you have mixed up your indices when creating part2plot. If you 
use this line instead it works:

part2plot = argW[j*nx/4:(j+1)*nx/4, i*ny/4:(i+1)*ny/4]


I found that by looking at the shape of the part2plot array that is a 
component of the argW array.


The shape of argW is (320, 200). So in your loops to find part2plot you 
eventually exceed 200: the index on the second axis grows past 200, the 
slice comes back empty, and everything crashes (when i=0 or 1 the shape 
of part2plot is (50, 80); when i=2 it is (50, 40); when i=3 it is 
(50, 0), which crashes).
Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread Friedrich Romstedt
2010/3/8 Bruce Southey bsout...@gmail.com:
 Hmm,
 Appears that you have mixed up your indices when creating part2plot. If you
 use this line instead it works:
 part2plot = argW[j*nx/4:(j+1)*nx/4, i*ny/4:(i+1)*ny/4]


 I found that by looking the shape of the part2plot array that is component
 of the argW array.

 The shape of argW is (320, 200). So in your loops to find part2plot you
 eventually exceed 200 and eventually the index to the second axis is greater
 than 200 causing everything to crash:
 When i=0 or 1 then the shape of part2plot is (50, 80)
 when i=2 then the shape of part2plot is (50, 40)
 when i=3 then the shape of part2plot is (50,  0) # crash

Nice that we both found this out at the same instant of time
(with 5 min precision) :-)

Friedrich
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-08 Thread David Goldsmith
How embarrassing! :O  Well, as they say, 'nother set of eyes...

Thanks!

DG

On Mon, Mar 8, 2010 at 11:25 AM, Friedrich Romstedt 
friedrichromst...@gmail.com wrote:

 It's pretty simple, but I was stunned myself how simple.  Have a look
 at line 65 of your script you provided:

 W = W.T

 This means, x <-> y.  But in the for loops, you still act as if W
 wasn't transposed.  I added some prints, the positions should be clear
 for you:

 argW.shape = (320, 200)
 i, j = (0, 0)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 0, 80)
 part2plot.shape = (50, 80)
 i, j = (0, 1)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 0, 80)
 part2plot.shape = (50, 80)
 i, j = (0, 2)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 0, 80)
 part2plot.shape = (50, 80)
 i, j = (0, 3)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 0, 80)
 part2plot.shape = (50, 80)
 i, j = (1, 0)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 80, 160)
 part2plot.shape = (50, 80)
 i, j = (1, 1)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 80, 160)
 part2plot.shape = (50, 80)
 i, j = (1, 2)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 80, 160)
 part2plot.shape = (50, 80)
 i, j = (1, 3)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 80, 160)
 part2plot.shape = (50, 80)
 i, j = (2, 0)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 160, 240)
 part2plot.shape = (50, 40)
 i, j = (2, 1)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (50, 100, 160, 240)
 part2plot.shape = (50, 40)
 i, j = (2, 2)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (100, 150, 160, 240)
 part2plot.shape = (50, 40)
 i, j = (2, 3)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (150, 200, 160, 240)
 part2plot.shape = (50, 40)
 i, j = (3, 0)
 j*ny/4, (j+1)*ny/4, i*nx/4, (i+1)*nx/4 = (0, 50, 240, 320)
 part2plot.shape = (50, 0)
 Traceback (most recent call last):
   File D:\Home\Friedrich\Entwicklung\2010\David\aquarius.py, line 91, in
 ?
ax.imshow(part2plot, extent = extent)
  File
 D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\ax
 es.py, line 5471, in imshow
im.autoscale_None()
  File
 D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\cm
 .py, line 148, in autoscale_None
 self.norm.autoscale_None(self._A)
   File
 D:\Programme\Programmierung\python-2.4.1\lib\site-packages\matplotlib\co
 lors.py, line 682, in autoscale_None
 if self.vmin is None: self.vmin = ma.minimum(A)
   File
 D:\Programme\Programmierung\python-2.4.1\lib\site-packages\numpy\ma\core
 .py, line 3042, in __call__
return self.reduce(a)
  File
 D:\Programme\Programmierung\python-2.4.1\lib\site-packages\numpy\ma\core
 .py, line 3057, in reduce
 t = self.ufunc.reduce(target, **kargs)
 ValueError: zero-size array to ufunc.reduce without identity

 So you simply have to exchange the role of x and y in your slice
 indicing expression, and everything will work out fine, I suspect :-)

 Or simpy leave out the transposition?  Note that in the other case,
 you also may have to consider to change to extent's axes to get it
 properly reflected.

 NB: With my version of matplotlib, it didn't accept the colormap, but
 when yours does, it doesn't matter.

 Friedrich
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-07 Thread Friedrich Romstedt
2010/3/5 Pierre GM pgmdevl...@gmail.com:
 'm'fraid no. I gonna have to investigate that. Please open a ticket with a 
 self-contained example that reproduces the issue.
 Thx in advance...
 P.

I would like to stress the fact that imo this is maybe not a ticket and not a bug.

The issue arises when calling a.max() or similar on empty arrays a, i.e., with:

>>> 0 in a.shape
True

As opposed to the .prod() of an empty array, such a .max() or .min()
cannot be defined, because the set is empty.  So it's fully correct to
let such calls fail.  Just the failure is a bit deep in numpy, and
only the traceback gives some hint what went wrong.
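
Concretely (a quick session; the exact ValueError wording varies across
numpy versions):

>>> import numpy as np
>>> a = np.zeros((0, 3))
>>> 0 in a.shape
True
>>> a.prod()    # the empty product has an identity (1), so this is fine
1.0
>>> a.max()     # the empty max has no identity, so this must fail
Traceback (most recent call last):
  ...
ValueError: zero-size array to ufunc.reduce without identity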

I posted something similar also on the matplotlib-users list, sorry
for cross-posting thus.

fwiw,
Friedrich
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-07 Thread David Goldsmith
On Sun, Mar 7, 2010 at 4:41 AM, Friedrich Romstedt 
friedrichromst...@gmail.com wrote:

 2010/3/5 Pierre GM pgmdevl...@gmail.com:
  'm'fraid no. I gonna have to investigate that. Please open a ticket with
 a self-contained example that reproduces the issue.
  Thx in advance...
  P.

 I would like to stress the fact that imo this is maybe not ticket and not a
 bug.

 The issue arises when calling a.max() or similar of empty arrays a, i.e.,
 with:

  0 in a.shape
 True

 Opposed to the .prod() of an empty array, such a .max() or .min()
 cannot be defined, because the set is empty.  So it's fully correct to
 let such calls fail.  Just the failure is a bit deep in numpy, and
 only the traceback gives some hint what went wrong.

 I posted something similar also on the matplotlib-users list, sorry
 for cross-posting thus.


Any suggestions, then, how to go about figuring out what's happening in my
code that's causing this feature to manifest itself?

DG




 fwiw,
 Friedrich
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread David Goldsmith
Hi!  Sorry for the cross-post, but my own investigation has led me to
suspect that mine is actually a numpy problem, not a matplotlib problem.
I'm getting the following traceback from a call to matplotlib.imshow:

Traceback (most recent call last):
  File
"C:\Users\Fermat\Documents\Fractals\Python\Source\Zodiac\aquarius_test.py",
line 108, in <module>
    ax.imshow(part2plot, cmap_name, extent = extent)
  File "C:\Python254\lib\site-packages\matplotlib\axes.py", line 6261, in
imshow
    im.autoscale_None()
  File "C:\Python254\lib\site-packages\matplotlib\cm.py", line 236, in
autoscale_None
    self.norm.autoscale_None(self._A)
  File "C:\Python254\lib\site-packages\matplotlib\colors.py", line 792, in
autoscale_None
    if self.vmin is None: self.vmin = ma.minimum(A)
  File "C:\Python254\Lib\site-packages\numpy\ma\core.py", line , in
__call__
    return self.reduce(a)
  File "C:\Python254\Lib\site-packages\numpy\ma\core.py", line 5570, in
reduce
    t = self.ufunc.reduce(target, **kargs)
ValueError: zero-size array to ufunc.reduce without identity
Script terminated.

Based on examination of the code, the last self is an instance of
ma._extrema_operation (or one of its subclasses) - is there a reason why
this class is unable to deal with a zero-size array to ufunc.reduce without
identity, (i.e., was it thought that it would - or should - never get one)
or was this merely an oversight?  Either way, there's other instances on the
lists of this error cropping up, so this circumstance should probably be
handled more robustly.  In the meantime, workaround?

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread Pierre GM
On Mar 5, 2010, at 4:38 AM, David Goldsmith wrote:
 Hi!  Sorry for the cross-post, but my own investigation has led me to suspect 
 that mine is actually a numpy problem, not a matplotlib problem.  I'm getting 
 the following traceback from a call to matplotlib.imshow:
 ...
 Based on examination of the code, the last self is an instance of 
 ma._extrema_operation (or one of its subclasses) - is there a reason why this 
 class is unable to deal with a zero-size array to ufunc.reduce without 
 identity, (i.e., was it thought that it would - or should - never get one) 
 or was this merely an oversight?  Either way, there's other instances on the 
 lists of this error cropping up, so this circumstance should probably be 
 handled more robustly.  In the meantime, workaround?


'm'fraid no. I gonna have to investigate that. Please open a ticket with a 
self-contained example that reproduces the issue.
Thx in advance...
P.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread Vincent Schut
On 03/05/2010 11:51 AM, Pierre GM wrote:
 On Mar 5, 2010, at 4:38 AM, David Goldsmith wrote:
 Hi!  Sorry for the cross-post, but my own investigation has led me to 
 suspect that mine is actually a numpy problem, not a matplotlib problem.  
 I'm getting the following traceback from a call to matplotlib.imshow:
 ...
 Based on examination of the code, the last self is an instance of 
 ma._extrema_operation (or one of its subclasses) - is there a reason why 
 this class is unable to deal with a zero-size array to ufunc.reduce without 
 identity, (i.e., was it thought that it would - or should - never get one) 
 or was this merely an oversight?  Either way, there's other instances on the 
 lists of this error cropping up, so this circumstance should probably be 
 handled more robustly.  In the meantime, workaround?


 'm'fraid no. I gonna have to investigate that. Please open a ticket with a 
 self-contained example that reproduces the issue.
 Thx in advance...
 P.

This might be completely wrong, but I seem to remember a similar issue, 
which I then traced down to having a masked array with a mask that was 
set to True or False, instead of being a full-fledged bool mask array. I 
was in a hurry then and completely forgot about it later, so filed no 
bug report whatsoever, for which I apologize.

VS.

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread David Goldsmith
On Fri, Mar 5, 2010 at 2:51 AM, Pierre GM pgmdevl...@gmail.com wrote:

 On Mar 5, 2010, at 4:38 AM, David Goldsmith wrote:
  Hi!  Sorry for the cross-post, but my own investigation has led me to
 suspect that mine is actually a numpy problem, not a matplotlib problem.
  I'm getting the following traceback from a call to matplotlib.imshow:
  ...
  Based on examination of the code, the last self is an instance of
 ma._extrema_operation (or one of its subclasses) - is there a reason why
 this class is unable to deal with a zero-size array to ufunc.reduce without
 identity, (i.e., was it thought that it would - or should - never get one)
 or was this merely an oversight?  Either way, there's other instances on the
 lists of this error cropping up, so this circumstance should probably be
 handled more robustly.  In the meantime, workaround?


 'm'fraid no. I gonna have to investigate that. Please open a ticket with a
 self-contained example that reproduces the issue.


I'll do my best, but since it's a call from matplotlib and I don't really
know what's causing the problem (other than a literal reading of the
exception) I'm not sure I can.

DG


 Thx in advance...
 P.
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread David Goldsmith
On Fri, Mar 5, 2010 at 9:22 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 On Fri, Mar 5, 2010 at 2:51 AM, Pierre GM pgmdevl...@gmail.com wrote:

 On Mar 5, 2010, at 4:38 AM, David Goldsmith wrote:
  Hi!  Sorry for the cross-post, but my own investigation has led me to
 suspect that mine is actually a numpy problem, not a matplotlib problem.
  I'm getting the following traceback from a call to matplotlib.imshow:
  ...
  Based on examination of the code, the last self is an instance of
 ma._extrema_operation (or one of its subclasses) - is there a reason why
 this class is unable to deal with a zero-size array to ufunc.reduce without
 identity, (i.e., was it thought that it would - or should - never get one)
 or was this merely an oversight?  Either way, there's other instances on the
 lists of this error cropping up, so this circumstance should probably be
 handled more robustly.  In the meantime, workaround?


 'm'fraid no. I gonna have to investigate that. Please open a ticket with a
 self-contained example that reproduces the issue.


 I'll do my best, but since it's a call from matplotlib and I don't really
 know what's causing the problem (other than a literal reading of the
 exception) I'm not sure I can.


Well, that was easy:

>>> mn = N.ma.core._minimum_operation()
>>> mn.reduce(N.array(()))
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Python254\Lib\site-packages\numpy\ma\core.py", line 5570, in
reduce
    t = self.ufunc.reduce(target, **kargs)
ValueError: zero-size array to ufunc.reduce without identity

I'll file a ticket.

DG



 DG


 Thx in advance...
 P.
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion



___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Is this a bug in numpy.ma.reduce?

2010-03-05 Thread David Goldsmith
On Fri, Mar 5, 2010 at 9:43 AM, David Goldsmith d.l.goldsm...@gmail.com wrote:

 On Fri, Mar 5, 2010 at 9:22 AM, David Goldsmith 
 d.l.goldsm...@gmail.comwrote:

 On Fri, Mar 5, 2010 at 2:51 AM, Pierre GM pgmdevl...@gmail.com wrote:

 On Mar 5, 2010, at 4:38 AM, David Goldsmith wrote:
  Hi!  Sorry for the cross-post, but my own investigation has led me to
 suspect that mine is actually a numpy problem, not a matplotlib problem.
  I'm getting the following traceback from a call to matplotlib.imshow:
  ...
  Based on examination of the code, the last self is an instance of
 ma._extrema_operation (or one of its subclasses) - is there a reason why
 this class is unable to deal with a zero-size array to ufunc.reduce without
 identity, (i.e., was it thought that it would - or should - never get one)
 or was this merely an oversight?  Either way, there's other instances on the
 lists of this error cropping up, so this circumstance should probably be
 handled more robustly.  In the meantime, workaround?


 'm'fraid no. I gonna have to investigate that. Please open a ticket with
 a self-contained example that reproduces the issue.


 I'll do my best, but since it's a call from matplotlib and I don't really
 know what's causing the problem (other than a literal reading of the
 exception) I'm not sure I can.


 Well, that was easy:

 mn = N.ma.core._minimum_operation()
 mn.reduce(N.array(()))

 Traceback (most recent call last):
   File input, line 1, in module

   File C:\Python254\Lib\site-packages\numpy\ma\core.py, line 5570, in
 reduce
 t = self.ufunc.reduce(target, **kargs)
 ValueError: zero-size array to ufunc.reduce without identity

 I'll file a ticket.


OK, Ticket #1422 filed.

DG
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-19 Thread Charles سمير Doutriaux
Hi Pierre,

We didn't move to 1.4 yet.

Should we wait for 1.4.1? It seems there are some issues with numpy.ma
in 1.4 and we rely heavily on it.

C.

On Jan 12, 2010, at 11:50 AM, Pierre GM wrote:

 On Jan 12, 2010, at 1:52 PM, Charles R Harris wrote:



 On Tue, Jan 12, 2010 at 11:32 AM, Pauli Virtanen p...@iki.fi wrote:
 On Tue, 2010-01-12 at 12:51 -0500, Pierre GM wrote:
 [clip]
 a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
 numpy.ma.sum(a, 1)
 Traceback (most recent call last):
 File stdin, line 1, in module
 File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux- 
 x86_64.egg/n
 umpy/ma/core.py, line 5682, in __call__
   return method(*args, **params)
 File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux- 
 x86_64.egg/n
 umpy/ma/core.py, line 4357, in sum
   newmask = _mask.all(axis=axis)
 ValueError: axis(=1) out of bounds

 Confirmed.
 Before I take full blame for it, can you try the following on both  
 1.3 and 1.4 ?
 np.array(False).all().sum(1)

 Oh crap, it's mostly my fault:

 http://projects.scipy.org/numpy/ticket/1286
 http://projects.scipy.org/numpy/changeset/7697
 http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-
 notes.rst#deprecations

 Pretty embarassing, as very simple things break, although the test  
 suite
 miraculously passes...

 Back to your problem: I'll fix that ASAIC, but it'll be on the  
 SVN. Meanwhile, you can:
 * Use -1 instead of 1 for your axis.
 * Force the definition of a mask when you define your array with  
 masked_array(...,mask=False)

 Sounds like we need a 1.4.1 out at some point not too far in the  
 future,
 then.


 If so, then it should be sooner rather than later in order to sync  
 with the releases of ubuntu and fedora. Both of the upcoming  
 releases still use 1.3.0, but that could change...

 I guess that the easiest would be for me to provide a workaround for  
 the bug (Pauli's modifications make sense, I was relying on a  
 *feature* that wasn't very robust).
 I'll update both the trunk and the 1.4.x branch
 ___
 NumPy-Discussion mailing list
 NumPy-Discussion@scipy.org
 http://mail.scipy.org/mailman/listinfo/numpy-discussion


___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


[Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread stephen.pascoe
We have noticed the MaskedArray implementation in numpy-1.4.0 breaks
some of our code.  For instance we see the following:
 
in 1.3.0:

>>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
>>> numpy.ma.sum(a, 1)
masked_array(data = [ 6 15],
             mask = False,
       fill_value = 999999)

in 1.4.0:

>>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
>>> numpy.ma.sum(a, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File
"/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
umpy/ma/core.py", line 5682, in __call__
    return method(*args, **params)
  File
"/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
umpy/ma/core.py", line 4357, in sum
    newmask = _mask.all(axis=axis)
ValueError: axis(=1) out of bounds


Also note the "Report Bugs" link on http://numpy.scipy.org is broken
(http://numpy.scipy.org/bug-report.html)

Thanks,
Stephen.
 
---
Stephen Pascoe  +44 (0)1235 445980
British Atmospheric Data Centre
Rutherford Appleton Laboratory
-- 
Scanned by iCritical.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pierre GM
On Jan 12, 2010, at 10:52 AM, stephen.pas...@stfc.ac.uk wrote:
 We have noticed the MaskedArray implementation in numpy-1.4.0 breaks
 some of our code.  For instance we see the following:

My, that's embarrassing. Sorry for the inconvenience.



 
 in 1.3.0:
 
 a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
 numpy.ma.sum(a, 1)
 masked_array(data = [ 6 15],
 mask = False,
 fill_value = 999999)
 
 in 1.4.0
 
 a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
 numpy.ma.sum(a, 1)
 Traceback (most recent call last):
  File stdin, line 1, in module
  File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
 umpy/ma/core.py, line 5682, in __call__
return method(*args, **params)
  File
 /usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/n
 umpy/ma/core.py, line 4357, in sum
newmask = _mask.all(axis=axis)
 ValueError: axis(=1) out of bounds

Confirmed.
Before I take full blame for it, can you try the following on both 1.3 and 1.4 ?
>>> np.array(False).all().sum(1)

Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. Meanwhile, 
you can:
* Use -1 instead of 1 for your axis.
* Force the definition of a mask when you define your array with 
masked_array(...,mask=False)
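
Spelled out, both workarounds (a sketch against numpy 1.4.0; reprs
abbreviated):

>>> import numpy as np
>>> a = np.ma.MaskedArray([[1, 2, 3], [4, 5, 6]])
>>> np.ma.sum(a, -1)    # workaround 1: address the axis as -1
masked_array(data = [ 6 15], ...)
>>> # workaround 2: an explicit mask=False forces a full boolean mask,
>>> # so axis=1 works again
>>> b = np.ma.MaskedArray([[1, 2, 3], [4, 5, 6]], mask=False)
>>> np.ma.sum(b, 1)
masked_array(data = [ 6 15], ...)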




___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pauli Virtanen
On Tue, 2010-01-12 at 12:51 -0500, Pierre GM wrote:
[clip] 
  >>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
  >>> numpy.ma.sum(a, 1)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 5682, in __call__
      return method(*args, **params)
    File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 4357, in sum
      newmask = _mask.all(axis=axis)
  ValueError: axis(=1) out of bounds
 
 Confirmed.
 Before I take full blame for it, can you try the following on both 1.3 and 1.4?
 >>> np.array(False).all().sum(1)

Oh crap, it's mostly my fault:

http://projects.scipy.org/numpy/ticket/1286
http://projects.scipy.org/numpy/changeset/7697
http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations

Pretty embarrassing, as very simple things break, although the test suite
miraculously passes...
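
Roughly what goes wrong, if I read the change right: numpy.ma stands in
a 0-d boolean for "no mask", and a 0-d array no longer accepts an
explicit axis. A minimal sketch of the failure mode (behaviour as on
1.4.0):

>>> import numpy as np
>>> m = np.array(False)    # plays the role of ma's nomask sentinel
>>> m.all()                # no axis given: fine
False
>>> m.all(axis=1)          # what ma.core's sum ends up calling
Traceback (most recent call last):
  ...
ValueError: axis(=1) out of bounds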

 Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. 
 Meanwhile, you can:
 * Use -1 instead of 1 for your axis.
 * Force the definition of a mask when you define your array with 
 masked_array(...,mask=False)

Sounds like we need a 1.4.1 out at some point not too far in the future,
then.

Pauli




Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Charles R Harris
On Tue, Jan 12, 2010 at 11:32 AM, Pauli Virtanen p...@iki.fi wrote:

 On Tue, 2010-01-12 at 12:51 -0500, Pierre GM wrote:
 [clip]
   >>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
   >>> numpy.ma.sum(a, 1)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 5682, in __call__
       return method(*args, **params)
     File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 4357, in sum
       newmask = _mask.all(axis=axis)
   ValueError: axis(=1) out of bounds
 
  Confirmed.
  Before I take full blame for it, can you try the following on both 1.3 and 1.4?
  >>> np.array(False).all().sum(1)

 Oh crap, it's mostly my fault:

 http://projects.scipy.org/numpy/ticket/1286
 http://projects.scipy.org/numpy/changeset/7697

 http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations

 Pretty embarrassing, as very simple things break, although the test suite
 miraculously passes...

  Back to your problem: I'll fix that ASAIC, but it'll be on the SVN.
 Meanwhile, you can:
  * Use -1 instead of 1 for your axis.
  * Force the definition of a mask when you define your array with
 masked_array(...,mask=False)

 Sounds like we need a 1.4.1 out at some point not too far in the future,
 then.


If so, then it should be sooner rather than later in order to sync with the
releases of Ubuntu and Fedora. Both of the upcoming releases still use
1.3.0, but that could change...

Chuck


Re: [Numpy-discussion] Numpy 1.4 MaskedArray bug?

2010-01-12 Thread Pierre GM
On Jan 12, 2010, at 1:52 PM, Charles R Harris wrote:
 
 
 
 On Tue, Jan 12, 2010 at 11:32 AM, Pauli Virtanen p...@iki.fi wrote:
  On Tue, 2010-01-12 at 12:51 -0500, Pierre GM wrote:
 [clip]
   >>> a = numpy.ma.MaskedArray([[1,2,3],[4,5,6]])
   >>> numpy.ma.sum(a, 1)
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 5682, in __call__
       return method(*args, **params)
     File "/usr/lib64/python2.5/site-packages/numpy-1.4.0-py2.5-linux-x86_64.egg/numpy/ma/core.py", line 4357, in sum
       newmask = _mask.all(axis=axis)
   ValueError: axis(=1) out of bounds
 
  Confirmed.
  Before I take full blame for it, can you try the following on both 1.3 and 1.4?
  >>> np.array(False).all().sum(1)
 
 Oh crap, it's mostly my fault:
 
 http://projects.scipy.org/numpy/ticket/1286
 http://projects.scipy.org/numpy/changeset/7697
 http://projects.scipy.org/numpy/browser/trunk/doc/release/1.4.0-notes.rst#deprecations
 
  Pretty embarrassing, as very simple things break, although the test suite
 miraculously passes...
 
  Back to your problem: I'll fix that ASAIC, but it'll be on the SVN. 
  Meanwhile, you can:
  * Use -1 instead of 1 for your axis.
  * Force the definition of a mask when you define your array with 
  masked_array(...,mask=False)
 
 Sounds like we need a 1.4.1 out at some point not too far in the future,
 then.
 
 
  If so, then it should be sooner rather than later in order to sync with the
  releases of Ubuntu and Fedora. Both of the upcoming releases still use 1.3.0,
  but that could change...

I guess that the easiest would be for me to provide a workaround for the bug 
(Pauli's modifications make sense, I was relying on a *feature* that wasn't 
very robust).
I'll update both the trunk and the 1.4.x branch.


[Numpy-discussion] Numpy large array bug

2009-09-21 Thread Kashyap Ashwin
Hello,

I have downloaded numpy 1.3rc2 sources and compiled it on Ubuntu Hardy
Linux x86_64. numpy.test() seems to run ok as well.

 

Here is the bug I can reproduce:

>>> import numpy as np
>>> a = np.zeros((2*1024*1024*1024 + 1), dtype="uint8")
>>> a[:] = 1

>>> # returns immediately
>>> a.mean()
0.0

>>> print a
[0 0 0 ..., 0 0 0]

The bug only happens when nElements > 2G (2^31). So for
dtype=uint16/32, the bug happens when the size is greater than 2^31 as
well.

 

Can someone please tell me if I can find a patch for this? I checked the
mailing list and trac and I cannot find any related bug.

 

Thanks,

Ashwin

 



Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Francesc Alted
On Monday 21 September 2009 19:45:27, Kashyap Ashwin wrote:
 Hello,

 I have downloaded numpy 1.3rc2 sources and compiled it on Ubuntu Hardy
 Linux x86_64. numpy.test() seems to run ok as well.

 Here is the bug I can reproduce:

 >>> import numpy as np
 >>> a = np.zeros((2*1024*1024*1024 + 1), dtype="uint8")
 >>> a[:] = 1

 >>> # returns immediately
 >>> a.mean()
 0.0

 >>> print a
 [0 0 0 ..., 0 0 0]

 The bug only happens when nElements > 2G (2^31). So for
 dtype=uint16/32, the bug happens when the size is greater than 2^31 as
 well.

Yup.  I can reproduce your problem with NumPy 1.3.0 (final) and a 64-bit 
platform.  I suppose that you had better file a bug.

-- 
Francesc Alted


Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Charles R Harris
On Mon, Sep 21, 2009 at 12:30 PM, Francesc Alted fal...@pytables.org wrote:

 On Monday 21 September 2009 19:45:27, Kashyap Ashwin wrote:
  Hello,

  I have downloaded numpy 1.3rc2 sources and compiled it on Ubuntu Hardy
  Linux x86_64. numpy.test() seems to run ok as well.

  Here is the bug I can reproduce:

  >>> import numpy as np
  >>> a = np.zeros((2*1024*1024*1024 + 1), dtype="uint8")
  >>> a[:] = 1

  >>> # returns immediately
  >>> a.mean()
  0.0

  >>> print a
  [0 0 0 ..., 0 0 0]

  The bug only happens when nElements > 2G (2^31). So for
  dtype=uint16/32, the bug happens when the size is greater than 2^31 as
  well.

 Yup.  I can reproduce your problem with NumPy 1.3.0 (final) and a 64-bit
 platform.  I suppose that you had better file a bug.


Does it persist for svn? IIRC, there is another ticket for a slicing bug for
large arrays.

Chuck


Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Kashyap Ashwin
Yes, it happens for the trunk as well.


  >>> import numpy as np
  >>> a = np.zeros((2*1024*1024*1024 + 1), dtype="uint8")
  >>> a[:] = 1
  >>> # returns immediately
  >>> a.mean()
  0.0
  >>> print a
  [0 0 0 ..., 0 0 0]

  The bug only happens when nElements > 2G (2^31). So for
  dtype=uint16/32, the bug happens when the size is greater than 2^31 as
  well.

 Yup.  I can reproduce your problem with NumPy 1.3.0 (final) and a 64-bit
 platform.  I suppose that you had better file a bug.


Does it persist for svn? IIRC, there is another ticket for a slicing bug
for large arrays.




Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Citi, Luca
I can confirm this bug for the last svn.

Also:
>>> a.put([2*1024*1024*1024 + 100,], 8)
IndexError: index out of range for array

in this case, I think the error is that in
numpy/core/src/multiarray/item_selection.c
in PyArray_PutTo, line 209 should be:
    intp i, chunk, ni, max_item, nv, tmp;
instead of:
    int i, chunk, ni, max_item, nv, tmp;

fixing it as suggested:
>>> a.put([2*1024*1024*1024 + 100,], 8)
>>> a.max()
8



[Numpy-discussion] Numpy large array bug

2009-09-21 Thread Kashyap Ashwin
Also, what about PyArray_PutMask()?

That function also has a line like "int i, chunk, ni, max_item, nv,
tmp;".

Should that be changed as well?

(Your patch does not fix my original issue.)

 

BTW, in numpy 1.3, that is present in numpy/core/src/multiarraymodule.c.


Can someone please give me a temporary patch to test? I am not familiar
with the numpy codebase!

 

-Ashwin

 

 

I can confirm this bug for the last svn.

Also:
>>> a.put([2*1024*1024*1024 + 100,], 8)
IndexError: index out of range for array

in this case, I think the error is that in
numpy/core/src/multiarray/item_selection.c
in PyArray_PutTo, line 209 should be:
    intp i, chunk, ni, max_item, nv, tmp;
instead of:
    int i, chunk, ni, max_item, nv, tmp;

fixing it as suggested:
>>> a.put([2*1024*1024*1024 + 100,], 8)
>>> a.max()
8



Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Citi, Luca
I think the original bug is due to
line 535 of numpy/core/src/multiarray/ctors.c (svn),
which should be:
    intp numcopies, nbytes;
instead of:
    int numcopies, nbytes;

To summarize:
in line 535 of numpy/core/src/multiarray/ctors.c
and
in line 209 of numpy/core/src/multiarray/item_selection.c,
int should be replaced with intp.
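
For anyone wondering why the int declarations only misbehave past 2^31
elements, a quick plain-Python illustration (my own sketch, not part of
the patch):

>>> def as_int32(x):
...     # interpret x as a two's-complement 32-bit C int
...     x &= 0xffffffff
...     return x - 2**32 if x >= 2**31 else x
...
>>> n = 2*1024*1024*1024 + 1    # the element count from the report
>>> as_int32(n)                 # what the old "int" counter holds: negative
-2147483647
>>> n                           # intp is pointer-sized, i.e. 64-bit here
2147483649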


Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Charles R Harris
Hi, Luca,

On Mon, Sep 21, 2009 at 4:52 PM, Citi, Luca lc...@essex.ac.uk wrote:

 I think the original bug is due to
 line 535 of numpy/core/src/multiarray/ctors.c (svn)
 that should be:
intp numcopies, nbytes;
 instead of:
int numcopies, nbytes;

 To summarize:
 in line 535 of numpy/core/src/multiarray/ctors.c
 and
 in line 209 of numpy/core/src/multiarray/item_selection.c
 int should be replaced with intp.


Please open a ticket for this.

Chuck


Re: [Numpy-discussion] Numpy large array bug

2009-09-21 Thread Citi, Luca
Here it is...
http://projects.scipy.org/numpy/ticket/1229


Re: [Numpy-discussion] Is this a bug in numpy.distutils ?

2009-08-04 Thread Dave
David Cournapeau david at ar.media.kyoto-u.ac.jp writes:

 
 Matthew Brett wrote:
  Hi,
 
  We are using numpy.distutils, and have run into this odd behavior on
  Windows:
 
 
 Short answer:
 
 I am afraid it cannot work as you want. Basically, when you pass an
 option to build_ext, it does not affect other distutils commands, which
 are run before build_ext, and need the compiler (config in this case I
 think). So you need to pass the -c option to every command affected by
 the compiler (build_ext, build_clib and config IIRC).
 
 cheers,
 
 David
 

I'm having the same problems! Running Windows XP, Python 2.5.4 (r254:67916,
Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)].

In my distutils.cfg I've got:

[build]
compiler=mingw32

[config]
compiler = mingw32

and previously "python setup.py bdist_wininst" would create an .exe installer;
now I get the following error message:

error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing -c mingw32 to setup.py.

"python setup.py build build_ext --compiler=mingw32" appeared to work (barring a
warning: numpy\core\setup_common.py:81: MismatchCAPIWarning), but then how do I
create a .exe installer afterwards? "python setup.py bdist_wininst" fails with
the same error message as before, and "python setup.py bdist_wininst
--compiler=mingw32" fails with the message:
error: option --compiler not recognized

Is it still possible to create a .exe installer on Windows and if so what are
the commands we need to make it work?

Thanks in advance for any help/workarounds it would be much appreciated!

Regards,
Dave




Re: [Numpy-discussion] Is this a bug in numpy.distutils ?

2009-08-04 Thread David Cournapeau
Dave wrote:
 David Cournapeau david at ar.media.kyoto-u.ac.jp writes:

   
 Matthew Brett wrote:
 
 Hi,

 We are using numpy.distutils, and have run into this odd behavior on
 Windows:

   
 Short answer:

 I am afraid it cannot work as you want. Basically, when you pass an
 option to build_ext, it does not affect other distutils commands, which
 are run before build_ext, and need the compiler (config in this case I
 think). So you need to pass the -c option to every command affected by
 the compiler (build_ext, build_clib and config IIRC).

 cheers,

 David

 

 I'm having the same problems! Running Windows XP, Python 2.5.4 (r254:67916,
 Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)].

 In my distutils.cfg I've got:

 [build]
 compiler=mingw32

 [config]
 compiler = mingw32

   

Yes, config files are an alternative I did not mention. I never use them
because I prefer controlling the build on a per-package basis, and the
interaction between the command line and config files is not always clear.

 "python setup.py build build_ext --compiler=mingw32" appeared to work (barring a
 warning: numpy\core\setup_common.py:81: MismatchCAPIWarning)

The warning is harmless: it is just a reminder that before releasing
numpy 1.4.0, we will need to raise the C API version (to avoid problems
we had in the past with mismatched numpy versions). There is no point
updating it during dev time, I think.

 but then how do I
 create a .exe installer afterwards? "python setup.py bdist_wininst" fails with
 the same error message as before, and "python setup.py bdist_wininst
 --compiler=mingw32" fails with the message:
 error: option --compiler not recognized
   

You need to do as follows, if you want to control from the command line:

python setup.py build -c mingw32 bdist_wininst

That's how I build the official binaries.
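
If you prefer to spell it out per command instead of via a single
"build -c", the long form should look like this (untested sketch,
following the earlier note that config, build_clib and build_ext all
need the compiler option):

python setup.py config --compiler=mingw32 build_clib --compiler=mingw32 build_ext --compiler=mingw32 bdist_wininst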

cheers,

David


Re: [Numpy-discussion] Is this a bug in numpy.distutils ?

2009-08-04 Thread Dave
David Cournapeau david at ar.media.kyoto-u.ac.jp writes:

 
 You need to do as follows, if you want to control from the command line:
 
 python setup.py build -c mingw32 bdist_wininst
 
 That's how I build the official binaries .
 
 cheers,
 
 David
 

Running the command:

C:\dev\src\numpy>python setup.py build -c mingw32 bdist_wininst > build.txt

still gives me the error:

error: Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
Visual Studio 2003 was not found on this system. If you have Cygwin installed,
you can try compiling with MingW32, by passing -c mingw32 to setup.py.

I tried without a distutils.cfg file and deleted the build directory both times.

In case it helps, the build log should be available from
http://pastebin.com/m607992ba

Am I doing something wrong?

-Dave



Re: [Numpy-discussion] Is this a bug in numpy.distutils ?

2009-08-04 Thread David Cournapeau
Dave wrote:
 David Cournapeau david at ar.media.kyoto-u.ac.jp writes:

   
 You need to do as follows, if you want to control from the command line:

 python setup.py build -c mingw32 bdist_wininst

 That's how I build the official binaries .

 cheers,

 David

 

 Running the command:

 C:\dev\src\numpy>python setup.py build -c mingw32 bdist_wininst > build.txt

 still gives me the error:

 error: Python was built with Visual Studio 2003;
 extensions must be built with a compiler than can generate compatible 
 binaries.
 Visual Studio 2003 was not found on this system. If you have Cygwin installed,
 you can try compiling with MingW32, by passing -c mingw32 to setup.py.

 I tried without a distutils.cfg file and deleted the build directory both 
 times.

 In case it helps, the build log should be available from
 http://pastebin.com/m607992ba

 Am I doing something wrong?
   

No, I think you and Matthew actually found a bug in recent changes I
have done in distutils. I will fix it right away,

cheers,

David

