Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Charles R Harris
On Fri, Sep 21, 2012 at 5:51 PM, Eric Firing  wrote:

> On 2012/09/21 12:20 PM, Nathaniel Smith wrote:
> > On Fri, Sep 21, 2012 at 10:04 PM, Chris Barker 
> wrote:
> >> On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith 
> wrote:
> >>
> >>> You're right of course. What I meant is that
> >>>a += b
> >>> should produce the same result as
> >>>a[...] = a + b
> >>>
> >>> If we change the casting rule for the first one but not the second,
> though,
> >>> then these will produce different results if a is integer and b is
> float:
> >>
> >> I certainly agree that we would want that, however, numpy still needs
> >> to deal with Python semantics, which means that while (at the numpy
> >> level) we can control what "a[...] =" means, and we can control what
> >> "a + b" produces, we can't change what "a + b" means depending on the
> >> context of the left hand side.
> >>
> >> that means we need to do the casting at the assignment stage, which I
> >> guess is your point -- so:
> >>
> >> a_int += a_float
> >>
> >> should do the addition with the "regular" casting rules, then cast to
> >> an int after doing that.
> >>
> >> not sure of the implementation details.
> >
> > Yes, that seems to be what happens.
> >
> > In [1]: a = np.arange(3)
> >
> > In [2]: a *= 1.5
> >
> > In [3]: a
> > Out[3]: array([0, 1, 3])
> >
> > But still, the question is, can and should we tighten up the
> > assignment casting rules to same_kind or similar?
>
> An example of where tighter casting seems undesirable is the case of
> functions that return integer values with floating point dtype, such as
> rint().  It seems natural to do something like
>
> In [1]: ind = np.empty((3,), dtype=int)
>
> In [2]: rint(np.arange(3, dtype=float) / 3, out=ind)
> Out[2]: array([0, 0, 1])
>
> where one is generating integer indices based on some manipulation of
> floating point numbers.  This works in 1.6 but fails in 1.7.
>

In [16]: rint(arange(3, dtype=float)/3, out=ind, casting='unsafe')
Out[16]: array([0, 0, 1])

I'm not sure how to make this backward compatible though.
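
One option for downstream code is a small fallback shim (just a sketch,
with a hypothetical helper name; it assumes the casting keyword
introduced in 1.6 and that older ufuncs raise TypeError on unknown
keywords):

import numpy as np

def rint_into(x, out):
    # hypothetical compatibility helper, not part of numpy
    try:
        # 1.7: the default ufunc output casting is same_kind, so ask
        # for the unsafe cast explicitly
        return np.rint(x, out=out, casting='unsafe')
    except TypeError:
        # older numpy: no casting keyword, unsafe casting was the default
        return np.rint(x, out=out)

ind = np.empty((3,), dtype=int)
rint_into(np.arange(3, dtype=float) / 3, ind)   # -> array([0, 0, 1])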

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Eric Firing
On 2012/09/21 12:20 PM, Nathaniel Smith wrote:
> On Fri, Sep 21, 2012 at 10:04 PM, Chris Barker  wrote:
>> On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith  wrote:
>>
>>> You're right of course. What I meant is that
>>>a += b
>>> should produce the same result as
>>>a[...] = a + b
>>>
>>> If we change the casting rule for the first one but not the second, though,
>>> then these will produce different results if a is integer and b is float:
>>
>> I certainly agree that we would want that, however, numpy still needs
>> to deal with Python semantics, which means that while (at the numpy
>> level) we can control what "a[...] =" means, and we can control what
>> "a + b" produces, we can't change what "a + b" means depending on the
>> context of the left hand side.
>>
>> that means we need to do the casting at the assignment stage, which I
>> guess is your point -- so:
>>
>> a_int += a_float
>>
>> should do the addition with the "regular" casting rules, then cast to
>> an int after doing that.
>>
>> not sure of the implementation details.
>
> Yes, that seems to be what happens.
>
> In [1]: a = np.arange(3)
>
> In [2]: a *= 1.5
>
> In [3]: a
> Out[3]: array([0, 1, 3])
>
> But still, the question is, can and should we tighten up the
> assignment casting rules to same_kind or similar?

An example of where tighter casting seems undesirable is the case of 
functions that return integer values with floating point dtype, such as 
rint().  It seems natural to do something like

In [1]: ind = np.empty((3,), dtype=int)

In [2]: rint(np.arange(3, dtype=float) / 3, out=ind)
Out[2]: array([0, 0, 1])

where one is generating integer indices based on some manipulation of 
floating point numbers.  This works in 1.6 but fails in 1.7.

Eric
>
> -n



Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Nathaniel Smith
On Fri, Sep 21, 2012 at 10:04 PM, Chris Barker  wrote:
> On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith  wrote:
>
>> You're right of course. What I meant is that
>>   a += b
>> should produce the same result as
>>   a[...] = a + b
>>
>> If we change the casting rule for the first one but not the second, though,
>> then these will produce different results if a is integer and b is float:
>
> I certainly agree that we would want that, however, numpy still needs
> to deal with Python semantics, which means that while (at the numpy
> level) we can control what "a[...] =" means, and we can control what
> "a + b" produces, we can't change what "a + b" means depending on the
> context of the left hand side.
>
> that means we need to do the casting at the assignment stage, which I
> guess is your point -- so:
>
> a_int += a_float
>
> should do the addition with the "regular" casting rules, then cast to
> an int after doing that.
>
> not sure of the implementation details.

Yes, that seems to be what happens.

In [1]: a = np.arange(3)

In [2]: a *= 1.5

In [3]: a
Out[3]: array([0, 1, 3])

But still, the question is, can and should we tighten up the
assignment casting rules to same_kind or similar?
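
For comparison, the two rules can be tried side by side by calling the
ufunc directly with an explicit casting argument (a minimal sketch,
assuming the casting keyword from 1.6):

import numpy as np

a = np.arange(3)
np.multiply(a, 1.5, out=a, casting='unsafe')   # old behavior: silently truncates
print(a)                                       # [0 1 3]

b = np.arange(3)
try:
    np.multiply(b, 1.5, out=b, casting='same_kind')
except TypeError as e:
    print(e)                                   # float64 -> int output is rejected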

-n


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Chris Barker
On Fri, Sep 21, 2012 at 10:03 AM, Nathaniel Smith  wrote:

> You're right of course. What I meant is that
>   a += b
> should produce the same result as
>   a[...] = a + b
>
> If we change the casting rule for the first one but not the second, though,
> then these will produce different results if a is integer and b is float:

I certainly agree that we would want that, however, numpy still needs
to deal with Python semantics, which means that while (at the numpy
level) we can control what "a[...] =" means, and we can control what
"a + b" produces, we can't change what "a + b" means depending on the
context of the left hand side.

that means we need to do the casting at the assignment stage, which I
guess is your point -- so:

a_int += a_float

should do the addition with the "regular" casting rules, then cast to
an int after doing that.

not sure of the implementation details.

Oh, and:

a += b

should be the same as

a[...] = a + b

should be the same as

np.add(a, b, out=a)

not sure what the story is with that at this point.
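
A quick sketch to check whether the three spellings actually agree
(each one gets a fresh integer array, and casting errors are caught
rather than propagated):

import numpy as np

def try_form(label, op):
    a = np.arange(3)                 # fresh integer array for each spelling
    try:
        op(a)
        print(label, '->', repr(a))
    except TypeError as e:
        print(label, '-> raises:', e)

try_form('a += b',              lambda a: a.__iadd__(1.5))
try_form('a[...] = a + b',      lambda a: a.__setitem__(Ellipsis, a + 1.5))
try_form('np.add(a, b, out=a)', lambda a: np.add(a, 1.5, out=a))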

-Chris




-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Nathaniel Smith
On 21 Sep 2012 17:31, "Chris Barker"  wrote:
>
> On Thu, Sep 20, 2012 at 2:48 PM, Nathaniel Smith  wrote:
> > because a += b
> > really should be the same as a = a + b.
>
> I don't think that's the case - the in-place operators should be (and
> are) more than syntactic sugar -- they have a different meaning and
> use (in fact, I think they shouldn't work at all for immutables, but I
> guess the common increment-a-counter use was too good to pass up)
>
> in the numpy case:
>
> a = a + b
>
> means "make a new array, from the result of adding a and b"
>
> whereas:
>
> a += b
>
> means "change a in place by adding b to it"
>
> In the first case, I'd expect the type of the result to be determined
> by both a and b -- casting rules.
>
> In the second case, a should certainly not be a different object, and
> should not have a new data buffer, therefore should not change type.

You're right of course. What I meant is that
  a += b
should produce the same result as
  a[...] = a + b

If we change the casting rule for the first one but not the second, though,
then these will produce different results if a is integer and b is float:
the first will produce an error, while the second will succeed, silently
discarding fractional parts.
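
Concretely, a sketch of the divergence (on master the assignment path
still casts unsafely while the in-place ufunc checks same_kind):

import numpy as np

a = np.arange(3)
a[...] = a + 1.5        # succeeds: assignment still casts unsafely -> [0 1 3]

b = np.arange(3)
try:
    b += 1.5            # the in-place ufunc enforces same_kind and rejects this
except TypeError as e:
    print(e)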

-n


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-21 Thread Chris Barker
On Thu, Sep 20, 2012 at 2:48 PM, Nathaniel Smith  wrote:
> because a += b
> really should be the same as a = a + b.

I don't think that's the case - the in-place operators should be (and
are) more than syntactic sugar -- they have a different meaning and
use (in fact, I think they shouldn't work at all for immutables, but I
guess the common increment-a-counter use was too good to pass up)

in the numpy case:

a = a + b

means "make a new array, from the result of adding a and b"

whereas:

a += b

means "change a in place by adding b to it"

In the first case, I'd expect the type of the result to be determined
by both a and b -- casting rules.

In the second case, a should certainly not be a different object, and
should not have a new data buffer, therefore should not change type.

Whereas in the general case, there is no assumption that, with:

a = b+c

a is the same type as either b or c, but certainly not the same object.
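
That difference is easy to observe (a small sketch using float arrays so
both spellings are allowed):

import numpy as np

a = np.arange(3.0)          # float array, so no casting question arises
b = np.ones(3)

before = id(a)
a += b                      # in place: same object, same data buffer
print(id(a) == before)      # True

a = a + b                   # a new array from the addition, rebound to a
print(id(a) == before)      # False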

-Chris

-- 

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115   (206) 526-6317   main reception

chris.bar...@noaa.gov


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-20 Thread Nathaniel Smith
On Wed, Sep 19, 2012 at 1:08 AM, Charles R Harris
 wrote:
> 
>
> The relevant setting is in numpy/core/include/numpy/ndarraytypes.h
>
> #define NPY_DEFAULT_ASSIGN_CASTING NPY_SAME_KIND_CASTING
>
> I think that if we want to raise a warning we could define a new rule,
>
> NPY_WARN_SAME_KIND_CASTING
>
> Which would do the same as unsafe, only raise a warning on the way.

https://github.com/numpy/numpy/pull/451

Query: I would have thought that NPY_DEFAULT_ASSIGN_CASTING would
determine the default casting used for assignments. But in current
master:

>>> a = np.zeros(3, dtype=int)
>>> a[0] = 1.1
>>> a
array([1, 0, 0])

In fact, this variable seems to only be used by PyArray_Std,
PyArray_Round, and ufuncs. Okay, so, NPY_DEFAULT_ASSIGN_CASTING is
just misnamed, but -- what casting rule *should* plain old assignment
follow? I'd think same_kind casting is probably a good default here
for the same reason it's a good default for ufuncs, and because a += b
really should be the same as a = a + b. But, the only problem is, how
could you override it if desired? a.__setitem__(0, casting="unsafe")?
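
For whole-array assignment, np.copyto in 1.7 does take an explicit
casting argument (a sketch; it doesn't answer the scalar __setitem__
question):

import numpy as np

a = np.zeros(3, dtype=int)
np.copyto(a, [1.1, 2.2, 3.3], casting='unsafe')   # explicit opt-in, truncates
print(a)                                          # [1 2 3]

try:
    np.copyto(a, [1.1, 2.2, 3.3], casting='same_kind')
except TypeError as e:
    print(e)                                      # the tightened rule rejects it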

-n


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 6:08 PM, Charles R Harris  wrote:

> 
>
> The relevant setting is in numpy/core/include/numpy/ndarraytypes.h
>
> #define NPY_DEFAULT_ASSIGN_CASTING NPY_SAME_KIND_CASTING
>
> I think that if we want to raise a warning we could define a new rule,
>
> NPY_WARN_SAME_KIND_CASTING
>
> Which would do the same as unsafe, only raise a warning on the way.
>

On second thought, it might be easier to set a warn bit on the usual
casting macros, i.e.,

#define NPY_WARN_CASTING 256
#define NPY_MASK_CASTING 255
#define NPY_DEFAULT_ASSIGN_CASTING (NPY_UNSAFE_CASTING | NPY_WARN_CASTING)

and replace the current checks with masked checks.
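
In Python terms the proposal would look roughly like this (illustrative
values only; the real enum lives in the C layer):

import warnings

NPY_UNSAFE_CASTING = 4      # stand-in for the real C enum value
NPY_WARN_CASTING = 256      # proposed warn bit
NPY_MASK_CASTING = 255      # mask that recovers the base rule

def resolve_casting(requested):
    if requested & NPY_WARN_CASTING:
        warnings.warn("unsafe default cast; this will become an error")
    return requested & NPY_MASK_CASTING   # masked check sees only the base rule

DEFAULT_ASSIGN_CASTING = NPY_UNSAFE_CASTING | NPY_WARN_CASTING
assert resolve_casting(DEFAULT_ASSIGN_CASTING) == NPY_UNSAFE_CASTING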

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris


The relevant setting is in numpy/core/include/numpy/ndarraytypes.h

#define NPY_DEFAULT_ASSIGN_CASTING NPY_SAME_KIND_CASTING

I think that if we want to raise a warning we could define a new rule,

NPY_WARN_SAME_KIND_CASTING

Which would do the same as unsafe, only raise a warning on the way.

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 5:02 PM, Travis Oliphant wrote:

>
>>
>>
>>> That is sort of the point of all this.  We are using 16 bit integers
>>> because we wanted to be as efficient as possible and didn't need anything
>>> larger.  Note, that is what we changed the code to, I am just wondering if
>>> we are being too cautious.  The casting kwarg looks to be what I might
>>> want, though it isn't as clean as just writing an "*=" statement.
>>>
>>>
>> I think even there you will have an intermediate float array followed by
>> a cast.
>>
>>
>> This is true, but it is done in chunks of a fixed size (controllable by a
>> thread-local variable or keyword argument to the ufunc).
>>
>> How difficult would it be to change in-place operations back to the
>> "unsafe" default?
>>
>
> Probably not too difficult, but I think it would be a mistake. What
> keyword argument are you referring to? In the current case, I think what is
> wanted is a scaling function that will actually do things in place. The
> matplotlib folks would probably be happier with the result if they simply
> coded up a couple of small Cython routines to do that.
>
>
> http://docs.scipy.org/doc/numpy/reference/ufuncs.html#ufunc
>
> In particular, the extobj keyword argument or the thread-local variable at
> umath.UFUNC_PYVALS_NAME
>

Hmm, the ufunc documentation that comes with the functions needs an
upgrade.


>
> But, the problem is not just for matplotlib.   Matplotlib is showing a
> symptom of the problem of just changing the default casting mode in one
> release. I think this is too stark of a change for a single minor
> release without some kind of glide path or warning system.
>
>
> I think we need to change in-place multiplication back to "unsafe" and then
> put in the release notes that we are planning on changing this for 1.8.
> It would be ideal if we could raise a warning when "unsafe" castings occur.
>

I think that raising a warning would be appropriate, maybe with a note
concerning the future change since I expect few to read the release notes.
The new casting modes were introduced in 1.6 so code that needs to work
with older versions of numpy won't be able to use that option to work
around the default.

Type-specific functions for scaling integers would be helpful, although I'd
probably restrict it to float32/float64 scaling factors to avoid
combinatorial bloat. Having such a function do the normal broadcasting
would probably be desirable.
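
In the meantime a plain-NumPy version can already scale in place (a
sketch with a hypothetical name; it leans on the ufunc's chunked
buffering rather than Cython and opts in to the unsafe cast explicitly):

import numpy as np

def scale_inplace(a, factor):
    # hypothetical helper: scale an integer array by a float factor, in place
    np.multiply(a, factor, out=a, casting='unsafe')
    return a

a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
scale_inplace(a, float(255) / 15)   # -> array([17, 34, 51, 68, 85], dtype=int16)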

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Travis Oliphant
> 
>>   
>> That is sort of the point of all this.  We are using 16 bit integers because 
>> we wanted to be as efficient as possible and didn't need anything larger.  
>> Note, that is what we changed the code to, I am just wondering if we are 
>> being too cautious.  The casting kwarg looks to be what I might want, though 
>> it isn't as clean as just writing an "*=" statement.
>> 
>> 
>> I think even there you will have an intermediate float array followed by a 
>> cast.
> 
> This is true, but it is done in chunks of a fixed size (controllable by a 
> thread-local variable or keyword argument to the ufunc).
> 
> How difficult would it be to change in-place operations back to the "unsafe" 
> default?
> 
> Probably not too difficult, but I think it would be a mistake. What keyword 
> argument are you referring to? In the current case, I think what is wanted is 
> a scaling function that will actually do things in place. The matplotlib 
> folks would probably be happier with the result if they simply coded up a 
> couple of small Cython routines to do that.

http://docs.scipy.org/doc/numpy/reference/ufuncs.html#ufunc

In particular, the extobj keyword argument or the thread-local variable at 
umath.UFUNC_PYVALS_NAME

But, the problem is not just for matplotlib.   Matplotlib is showing a symptom 
of the problem of just changing the default casting mode in one release. I
think this is too stark of a change for a single minor release without some 
kind of glide path or warning system.

I think we need to change in-place multiplication back to "unsafe" and then put 
in the release notes that we are planning on changing this for 1.8.   It would 
be ideal if we could raise a warning when "unsafe" castings occur. 

-Travis




Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 2:52 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 4:42 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:33 PM, Travis Oliphant wrote:
>>
>>>
>>> On Sep 18, 2012, at 2:44 PM, Charles R Harris wrote:
>>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:
>>>


 On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:
>>>


 On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant <
> tra...@continuum.io> wrote:
>
>>
>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>
>> > Consider the following code:
>> >
>> > import numpy as np
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(255) / 15
>> >
>> > In v1.6.x, this yields:
>> > array([17, 34, 51, 68, 85], dtype=int16)
>> >
>> > But in master, this throws an exception about failing to cast
>> via same_kind.
>> >
>> > Note that numpy was smart about this operation before, consider:
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(128) / 256
>>
>> > yields:
>> > array([0, 1, 1, 2, 2], dtype=int16)
>> >
>> > Of course, this is different than if one does it in a
>> non-in-place manner:
>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>> >
>> > which yields an array with floating point dtype in both
>> versions.  I can appreciate the arguments for preventing this kind of
>> implicit casting between non-same_kind dtypes, but I argue that 
>> because the
>> operation is in-place, then I (as the programmer) am explicitly 
>> stating
>> that I desire to utilize the current array to store the results of 
>> the
>> operation, dtype and all.  Obviously, we can't completely turn off 
>> this
>> rule (for example, an in-place addition between integer array and a
>> datetime64 makes no sense), but surely there is some sort of happy 
>> medium
>> that would allow these sort of operations to take place?
>> >
>> > Lastly, if it is determined that it is desirable to allow
>> in-place operations to continue working like they have before, I 
>> would like
>> to see such a fix in v1.7 because if it isn't in 1.7, then other 
>> libraries
>> (such as matplotlib, where this issue was first found) would have to 
>> change
>> their code anyway just to be compatible with numpy.
>>
>> I agree that in-place operations should allow different casting
>> rules.  There are different opinions on this, of course, but 
>> generally this
>> is how NumPy has worked in the past.
>>
>> We did decide to change the default casting rule to "same_kind"
>> but making an exception for in-place seems reasonable.
>>
>
> I think that in these cases same_kind will flag what are most
> likely programming errors and sloppy code. It is easy to be explicit 
> and
> doing so will make the code more readable because it will be 
> immediately
> obvious what the multiplicand is without the need to recall what the 
> numpy
> casting rules are in this exceptional case. IISTR several mentions of 
> this
> before (Gael?), and in some of those cases it turned out that bugs 
> were
> being turned up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
 True, it is quite likely to be a programming error, but then again,
 there are many cases where it isn't.  Is the problem strictly that we 
 are
 trying to downcast the float to an int, or is it that we are trying to
 downcast to a lower precision?  Is there a way for one to explicitly 
 relax
 the same_kind restriction?

>>>
>>> I think the problem is down casting across kinds, with the result
>>> that floats are truncated and the imaginary parts of imaginaries might 
>>> be
>>> discarded. That is, the value, not just the precision, of the rhs 
>>> changes.
>>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to 
>>> an
>>> integer.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Ralf Gommers
On Tue, Sep 18, 2012 at 10:52 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 4:42 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:33 PM, Travis Oliphant wrote:
>>
>>>
>>> On Sep 18, 2012, at 2:44 PM, Charles R Harris wrote:
>>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:
>>>


 On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:
>>>


 On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant <
> tra...@continuum.io> wrote:
>
>>
>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>
>> > Consider the following code:
>> >
>> > import numpy as np
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(255) / 15
>> >
>> > In v1.6.x, this yields:
>> > array([17, 34, 51, 68, 85], dtype=int16)
>> >
>> > But in master, this throws an exception about failing to cast
>> via same_kind.
>> >
>> > Note that numpy was smart about this operation before, consider:
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(128) / 256
>>
>> > yields:
>> > array([0, 1, 1, 2, 2], dtype=int16)
>> >
>> > Of course, this is different than if one does it in a
>> non-in-place manner:
>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>> >
>> > which yields an array with floating point dtype in both
>> versions.  I can appreciate the arguments for preventing this kind of
>> implicit casting between non-same_kind dtypes, but I argue that 
>> because the
>> operation is in-place, then I (as the programmer) am explicitly 
>> stating
>> that I desire to utilize the current array to store the results of 
>> the
>> operation, dtype and all.  Obviously, we can't completely turn off 
>> this
>> rule (for example, an in-place addition between integer array and a
>> datetime64 makes no sense), but surely there is some sort of happy 
>> medium
>> that would allow these sort of operations to take place?
>> >
>> > Lastly, if it is determined that it is desirable to allow
>> in-place operations to continue working like they have before, I 
>> would like
>> to see such a fix in v1.7 because if it isn't in 1.7, then other 
>> libraries
>> (such as matplotlib, where this issue was first found) would have to 
>> change
>> their code anyway just to be compatible with numpy.
>>
>> I agree that in-place operations should allow different casting
>> rules.  There are different opinions on this, of course, but 
>> generally this
>> is how NumPy has worked in the past.
>>
>> We did decide to change the default casting rule to "same_kind"
>> but making an exception for in-place seems reasonable.
>>
>
> I think that in these cases same_kind will flag what are most
> likely programming errors and sloppy code. It is easy to be explicit 
> and
> doing so will make the code more readable because it will be 
> immediately
> obvious what the multiplicand is without the need to recall what the 
> numpy
> casting rules are in this exceptional case. IISTR several mentions of 
> this
> before (Gael?), and in some of those cases it turned out that bugs 
> were
> being turned up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
 True, it is quite likely to be a programming error, but then again,
 there are many cases where it isn't.  Is the problem strictly that we 
 are
 trying to downcast the float to an int, or is it that we are trying to
 downcast to a lower precision?  Is there a way for one to explicitly 
 relax
 the same_kind restriction?

>>>
>>> I think the problem is down casting across kinds, with the result
>>> that floats are truncated and the imaginary parts of imaginaries might 
>>> be
>>> discarded. That is, the value, not just the precision, of the rhs 
>>> changes.
>>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to 
>>> an
>>> integer.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Benjamin Root
On Tue, Sep 18, 2012 at 4:42 PM, Charles R Harris  wrote:

>
>
> On Tue, Sep 18, 2012 at 2:33 PM, Travis Oliphant wrote:
>
>>
>> On Sep 18, 2012, at 2:44 PM, Charles R Harris wrote:
>>
>>
>>
>> On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>


 On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>


 On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant <
 tra...@continuum.io> wrote:

>
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast
> via same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
>
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a
> non-in-place manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both
> versions.  I can appreciate the arguments for preventing this kind of
> implicit casting between non-same_kind dtypes, but I argue that 
> because the
> operation is in-place, then I (as the programmer) am explicitly 
> stating
> that I desire to utilize the current array to store the results of the
> operation, dtype and all.  Obviously, we can't completely turn off 
> this
> rule (for example, an in-place addition between integer array and a
> datetime64 makes no sense), but surely there is some sort of happy 
> medium
> that would allow these sort of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow
> in-place operations to continue working like they have before, I 
> would like
> to see such a fix in v1.7 because if it isn't in 1.7, then other 
> libraries
> (such as matplotlib, where this issue was first found) would have to 
> change
> their code anyway just to be compatible with numpy.
>
> I agree that in-place operations should allow different casting
> rules.  There are different opinions on this, of course, but 
> generally this
> is how NumPy has worked in the past.
>
> We did decide to change the default casting rule to "same_kind"
> but making an exception for in-place seems reasonable.
>

 I think that in these cases same_kind will flag what are most
 likely programming errors and sloppy code. It is easy to be explicit 
 and
 doing so will make the code more readable because it will be 
 immediately
 obvious what the multiplicand is without the need to recall what the 
 numpy
 casting rules are in this exceptional case. IISTR several mentions of 
 this
 before (Gael?), and in some of those cases it turned out that bugs were
 being turned up. Catching bugs with minimal effort is a good thing.

 Chuck


>>> True, it is quite likely to be a programming error, but then again,
>>> there are many cases where it isn't.  Is the problem strictly that we 
>>> are
>>> trying to downcast the float to an int, or is it that we are trying to
>>> downcast to a lower precision?  Is there a way for one to explicitly 
>>> relax
>>> the same_kind restriction?
>>>
>>
>> I think the problem is down casting across kinds, with the result
>> that floats are truncated and the imaginary parts of imaginaries might be
>> discarded. That is, the value, not just the precision, of the rhs 
>> changes.
>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
>> integer.
>>
>> It is true that this forces downstream to code up to a higher
>> standard, but I don't see that as a bad thing, especially if it exposes
>> bugs. And it isn't difficult to fix.
>>
>> Chuck
>>
>>
> Mind you, in my case, casting the rhs as an integer before doing the
> multiplication would be a bug, since our value for the rhs is usually
> between zero and one.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 2:33 PM, Travis Oliphant wrote:

>
> On Sep 18, 2012, at 2:44 PM, Charles R Harris wrote:
>
>
>
> On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:
>>>


 On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:
>
>>
>>
>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant <
>>> tra...@continuum.io> wrote:
>>>

 On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:

 > Consider the following code:
 >
 > import numpy as np
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(255) / 15
 >
 > In v1.6.x, this yields:
 > array([17, 34, 51, 68, 85], dtype=int16)
 >
 > But in master, this throws an exception about failing to cast via
 same_kind.
 >
 > Note that numpy was smart about this operation before, consider:
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(128) / 256

 > yields:
 > array([0, 1, 1, 2, 2], dtype=int16)
 >
 > Of course, this is different than if one does it in a
 non-in-place manner:
 > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
 >
 > which yields an array with floating point dtype in both versions.
  I can appreciate the arguments for preventing this kind of implicit
 casting between non-same_kind dtypes, but I argue that because the
 operation is in-place, then I (as the programmer) am explicitly stating
 that I desire to utilize the current array to store the results of the
 operation, dtype and all.  Obviously, we can't completely turn off this
 rule (for example, an in-place addition between integer array and a
 datetime64 makes no sense), but surely there is some sort of happy 
 medium
 that would allow these sort of operations to take place?
 >
 > Lastly, if it is determined that it is desirable to allow
 in-place operations to continue working like they have before, I would 
 like
 to see such a fix in v1.7 because if it isn't in 1.7, then other 
 libraries
 (such as matplotlib, where this issue was first found) would have to 
 change
 their code anyway just to be compatible with numpy.

 I agree that in-place operations should allow different casting
 rules.  There are different opinions on this, of course, but generally 
 this
 is how NumPy has worked in the past.

 We did decide to change the default casting rule to "same_kind" but
 making an exception for in-place seems reasonable.

>>>
>>> I think that in these cases same_kind will flag what are most likely
>>> programming errors and sloppy code. It is easy to be explicit and doing 
>>> so
>>> will make the code more readable because it will be immediately obvious
>>> what the multiplicand is without the need to recall what the numpy 
>>> casting
>>> rules are in this exceptional case. IISTR several mentions of this 
>>> before
>>> (Gael?), and in some of those cases it turned out that bugs were being
>>> turned up. Catching bugs with minimal effort is a good thing.
>>>
>>> Chuck
>>>
>>>
>> True, it is quite likely to be a programming error, but then again,
>> there are many cases where it isn't.  Is the problem strictly that we are
>> trying to downcast the float to an int, or is it that we are trying to
>> downcast to a lower precision?  Is there a way for one to explicitly 
>> relax
>> the same_kind restriction?
>>
>
> I think the problem is down casting across kinds, with the result that
> floats are truncated and the imaginary parts of imaginaries might be
> discarded. That is, the value, not just the precision, of the rhs changes.
> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
> integer.
>
> It is true that this forces downstream to code up to a higher
> standard, but I don't see that as a bad thing, especially if it exposes
> bugs. And it isn't difficult to fix.
>
> Chuck
>
>
 Mind you, in my case, casting the rhs as an integer before doing the
 multiplication would be a bug, since our value for the rhs is usually
 between zero and one.  Multiplying first by the integer numerator before
 dividing by the integer denominator would likely cause issues with
 overflowing the 16 bit integer.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Travis Oliphant

On Sep 18, 2012, at 2:44 PM, Charles R Harris wrote:

> 
> 
> On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:
> 
> 
> On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris  
> wrote:
> 
> 
> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:
> 
> 
> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris  
> wrote:
> 
> 
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
> 
> 
> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris  
> wrote:
> 
> 
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant  wrote:
> 
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
> 
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast via same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
> 
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a non-in-place manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both versions.  I can 
> > appreciate the arguments for preventing this kind of implicit casting 
> > between non-same_kind dtypes, but I argue that because the operation is 
> > in-place, then I (as the programmer) am explicitly stating that I desire to 
> > utilize the current array to store the results of the operation, dtype and 
> > all.  Obviously, we can't completely turn off this rule (for example, an 
> > in-place addition between integer array and a datetime64 makes no sense), 
> > but surely there is some sort of happy medium that would allow these sort 
> > of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow in-place 
> > operations to continue working like they have before, I would like to see 
> > such a fix in v1.7 because if it isn't in 1.7, then other libraries (such 
> > as matplotlib, where this issue was first found) would have to change their 
> > code anyway just to be compatible with numpy.
> 
> I agree that in-place operations should allow different casting rules.  There 
> are different opinions on this, of course, but generally this is how NumPy 
> has worked in the past.
> 
> We did decide to change the default casting rule to "same_kind" but making an 
> exception for in-place seems reasonable.
> 
> I think that in these cases same_kind will flag what are most likely 
> programming errors and sloppy code. It is easy to be explicit and doing so 
> will make the code more readable because it will be immediately obvious what 
> the multiplicand is without the need to recall what the numpy casting rules 
> are in this exceptional case. IISTR several mentions of this before (Gael?), 
> and in some of those cases it turned out that bugs were being turned up. 
> Catching bugs with minimal effort is a good thing.
> 
> Chuck 
> 
> 
> True, it is quite likely to be a programming error, but then again, there are 
> many cases where it isn't.  Is the problem strictly that we are trying to 
> downcast the float to an int, or is it that we are trying to downcast to a 
> lower precision?  Is there a way for one to explicitly relax the same_kind 
> restriction?
> 
> I think the problem is down casting across kinds, with the result that floats 
> are truncated and the imaginary parts of imaginaries might be discarded. That 
> is, the value, not just the precision, of the rhs changes. So I'd favor an 
> explicit cast in code like this, i.e., cast the rhs to an integer.
> 
> It is true that this forces downstream to code up to a higher standard, but I 
> don't see that as a bad thing, especially if it exposes bugs. And it isn't 
> difficult to fix.
> 
> Chuck 
> 
> 
> Mind you, in my case, casting the rhs as an integer before doing the 
> multiplication would be a bug, since our value for the rhs is usually between 
> zero and one.  Multiplying first by the integer numerator before dividing by 
> the integer denominator would likely cause issues with overflowing the 16 bit 
> integer.
> 
> 
> For the case in point I'd do
> 
> In [1]: a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> 
> In [2]: a //= 2 
> 
> In [3]: a
> Out[3]: array([0, 1, 1, 2, 2], dtype=int16) 
> 
> Although I expect you would want something different in practice. But the 
> current code already looks fragile to me and I think it is a good thing you 
> are taking a closer look at it. If you really intend going through a float, 
> then it should be something like
> 
> a = (a*(float(128)/256)).astype(int16)
> 
> Chuck
> 
> 
> And thereby losing the memory benefit of an in-place multiplication?
> 
> What makes you think you are getting that? I'd have to check the numpy C
> source, but I expect the multiplication is handled just as I wrote it out.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Eric Firing
On 2012/09/18 9:25 AM, Charles R Harris wrote:
>
>
> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root wrote:
>
>
>
> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris wrote:
>
>
>
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:
>
>
>
> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris wrote:
>
>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>
>
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>
>  > Consider the following code:
>  >
>  > import numpy as np
>  > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>  > a *= float(255) / 15
>  >
>  > In v1.6.x, this yields:
>  > array([17, 34, 51, 68, 85], dtype=int16)
>  >
>  > But in master, this throws an exception about
> failing to cast via same_kind.
>  >
>  > Note that numpy was smart about this operation
> before, consider:
>  > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>  > a *= float(128) / 256
>
>  > yields:
>  > array([0, 1, 1, 2, 2], dtype=int16)
>  >
>  > Of course, this is different than if one does it
> in a non-in-place manner:
>  > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>  >
>  > which yields an array with floating point dtype
> in both versions.  I can appreciate the arguments
> for preventing this kind of implicit casting between
> non-same_kind dtypes, but I argue that because the
> operation is in-place, then I (as the programmer) am
> explicitly stating that I desire to utilize the
> current array to store the results of the operation,
> dtype and all.  Obviously, we can't completely turn
> off this rule (for example, an in-place addition
> between integer array and a datetime64 makes no
> sense), but surely there is some sort of happy
> medium that would allow these sort of operations to
> take place?
>  >
>  > Lastly, if it is determined that it is desirable
> to allow in-place operations to continue working
> like they have before, I would like to see such a
> fix in v1.7 because if it isn't in 1.7, then other
> libraries (such as matplotlib, where this issue was
> first found) would have to change their code anyway
> just to be compatible with numpy.
>
> I agree that in-place operations should allow
> different casting rules.  There are different
> opinions on this, of course, but generally this is
> how NumPy has worked in the past.
>
> We did decide to change the default casting rule to
> "same_kind" but making an exception for in-place
> seems reasonable.
>
>
> I think that in these cases same_kind will flag what are
> most likely programming errors and sloppy code. It is
> easy to be explicit and doing so will make the code more
> readable because it will be immediately obvious what the
> multiplicand is without the need to recall what the
> numpy casting rules are in this exceptional case. IISTR
> several mentions of this before (Gael?), and in some of
> those cases it turned out that bugs were being turned
> up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
> True, it is quite likely to be a programming error, but then
> again, there are many cases where it isn't.  Is the problem
> strictly that we are trying to downcast the float to an int,
> or is it that we are trying to downcast to a lower
> precision?  Is there a way for one to explicitly relax the
> same_kind restriction?
>
>
> I think the problem is down casting across kinds, with the
> result that floats are truncated and the imaginary parts of
> imaginaries might be discarded. That is, the value, not just the
> precision, of the rhs changes.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 1:35 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>


 On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root wrote:

>
>
> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>>
>>>
>>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>>
>>> > Consider the following code:
>>> >
>>> > import numpy as np
>>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> > a *= float(255) / 15
>>> >
>>> > In v1.6.x, this yields:
>>> > array([17, 34, 51, 68, 85], dtype=int16)
>>> >
>>> > But in master, this throws an exception about failing to cast via
>>> same_kind.
>>> >
>>> > Note that numpy was smart about this operation before, consider:
>>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> > a *= float(128) / 256
>>>
>>> > yields:
>>> > array([0, 1, 1, 2, 2], dtype=int16)
>>> >
>>> > Of course, this is different than if one does it in a non-in-place
>>> manner:
>>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>>> >
>>> > which yields an array with floating point dtype in both versions.
>>>  I can appreciate the arguments for preventing this kind of implicit
>>> casting between non-same_kind dtypes, but I argue that because the
>>> operation is in-place, then I (as the programmer) am explicitly stating
>>> that I desire to utilize the current array to store the results of the
>>> operation, dtype and all.  Obviously, we can't completely turn off this
>>> rule (for example, an in-place addition between integer array and a
>>> datetime64 makes no sense), but surely there is some sort of happy 
>>> medium
>>> that would allow these sort of operations to take place?
>>> >
>>> > Lastly, if it is determined that it is desirable to allow in-place
>>> operations to continue working like they have before, I would like to 
>>> see
>>> such a fix in v1.7 because if it isn't in 1.7, then other libraries 
>>> (such
>>> as matplotlib, where this issue was first found) would have to change 
>>> their
>>> code anyway just to be compatible with numpy.
>>>
>>> I agree that in-place operations should allow different casting
>>> rules.  There are different opinions on this, of course, but generally 
>>> this
>>> is how NumPy has worked in the past.
>>>
>>> We did decide to change the default casting rule to "same_kind" but
>>> making an exception for in-place seems reasonable.
>>>
>>
>> I think that in these cases same_kind will flag what are most likely
>> programming errors and sloppy code. It is easy to be explicit and doing 
>> so
>> will make the code more readable because it will be immediately obvious
>> what the multiplicand is without the need to recall what the numpy 
>> casting
>> rules are in this exceptional case. IISTR several mentions of this before
>> (Gael?), and in some of those cases it turned out that bugs were being
>> turned up. Catching bugs with minimal effort is a good thing.
>>
>> Chuck
>>
>>
> True, it is quite likely to be a programming error, but then again,
> there are many cases where it isn't.  Is the problem strictly that we are
> trying to downcast the float to an int, or is it that we are trying to
> downcast to a lower precision?  Is there a way for one to explicitly relax
> the same_kind restriction?
>

 I think the problem is down casting across kinds, with the result that
 floats are truncated and the imaginary parts of imaginaries might be
 discarded. That is, the value, not just the precision, of the rhs changes.
 So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
 integer.

 It is true that this forces downstream to code up to a higher standard,
 but I don't see that as a bad thing, especially if it exposes bugs. And it
 isn't difficult to fix.

 Chuck


>>> Mind you, in my case, casting the rhs as an integer before doing the
>>> multiplication would be a bug, since our value for the rhs is usually
>>> between zero and one.  Multiplying first by the integer numerator before
>>> dividing by the integer denominator would likely cause issues with
>>> overflowing the 16 bit integer.
>>>
>>>
>> For the case in point I'd do
>>
>> In [1]: a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>
>> In [2]: a //= 2
>>
>> In [3]: a
>> Out[3]: array([0, 1, 1, 2, 2], dtype=int16)
>>
>> Although I expect you would want something different in practice.

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Benjamin Root
On Tue, Sep 18, 2012 at 3:25 PM, Charles R Harris  wrote:

>
>
> On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>>>


 On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant 
> wrote:
>
>>
>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>
>> > Consider the following code:
>> >
>> > import numpy as np
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(255) / 15
>> >
>> > In v1.6.x, this yields:
>> > array([17, 34, 51, 68, 85], dtype=int16)
>> >
>> > But in master, this throws an exception about failing to cast via
>> same_kind.
>> >
>> > Note that numpy was smart about this operation before, consider:
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(128) / 256
>>
>> > yields:
>> > array([0, 1, 1, 2, 2], dtype=int16)
>> >
>> > Of course, this is different than if one does it in a non-in-place
>> manner:
>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>> >
>> > which yields an array with floating point dtype in both versions.
>>  I can appreciate the arguments for preventing this kind of implicit
>> casting between non-same_kind dtypes, but I argue that because the
>> operation is in-place, then I (as the programmer) am explicitly stating
>> that I desire to utilize the current array to store the results of the
>> operation, dtype and all.  Obviously, we can't completely turn off this
>> rule (for example, an in-place addition between integer array and a
>> datetime64 makes no sense), but surely there is some sort of happy medium
>> that would allow these sort of operations to take place?
>> >
>> > Lastly, if it is determined that it is desirable to allow in-place
>> operations to continue working like they have before, I would like to see
>> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
>> as matplotlib, where this issue was first found) would have to change 
>> their
>> code anyway just to be compatible with numpy.
>>
>> I agree that in-place operations should allow different casting
>> rules.  There are different opinions on this, of course, but generally 
>> this
>> is how NumPy has worked in the past.
>>
>> We did decide to change the default casting rule to "same_kind" but
>> making an exception for in-place seems reasonable.
>>
>
> I think that in these cases same_kind will flag what are most likely
> programming errors and sloppy code. It is easy to be explicit and doing so
> will make the code more readable because it will be immediately obvious
> what the multiplicand is without the need to recall what the numpy casting
> rules are in this exceptional case. IISTR several mentions of this before
> (Gael?), and in some of those cases it turned out that bugs were being
> turned up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
 True, it is quite likely to be a programming error, but then again,
 there are many cases where it isn't.  Is the problem strictly that we are
 trying to downcast the float to an int, or is it that we are trying to
 downcast to a lower precision?  Is there a way for one to explicitly relax
 the same_kind restriction?

>>>
>>> I think the problem is down casting across kinds, with the result that
>>> floats are truncated and the imaginary parts of imaginaries might be
>>> discarded. That is, the value, not just the precision, of the rhs changes.
>>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
>>> integer.
>>>
>>> It is true that this forces downstream to code up to a higher standard,
>>> but I don't see that as a bad thing, especially if it exposes bugs. And it
>>> isn't difficult to fix.
>>>
>>> Chuck
>>>
>>>
>> Mind you, in my case, casting the rhs as an integer before doing the
>> multiplication would be a bug, since our value for the rhs is usually
>> between zero and one.  Multiplying first by the integer numerator before
>> dividing by the integer denominator would likely cause issues with
>> overflowing the 16 bit integer.
>>
>>
> For the case in point I'd do
>
> In [1]: a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>
> In [2]: a //= 2
>
> In [3]: a
> Out[3]: array([0, 1, 1, 2, 2], dtype=int16)
>
> Although I expect you would want something different in practice. But the
> current code already looks fragile to me and I think it is a good thing you
> are taking a closer look at it. If you really intend going through a float,
> then it should be something like
>
> a = (a*(float(128)/256)).astype(int16)

Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 1:13 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>


 On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant 
 wrote:

>
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast via
> same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
>
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a non-in-place
> manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both versions.  I
> can appreciate the arguments for preventing this kind of implicit casting
> between non-same_kind dtypes, but I argue that because the operation is
> in-place, then I (as the programmer) am explicitly stating that I desire 
> to
> utilize the current array to store the results of the operation, dtype and
> all.  Obviously, we can't completely turn off this rule (for example, an
> in-place addition between integer array and a datetime64 makes no sense),
> but surely there is some sort of happy medium that would allow these sort
> of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow in-place
> operations to continue working like they have before, I would like to see
> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
> as matplotlib, where this issue was first found) would have to change 
> their
> code anyway just to be compatible with numpy.
>
> I agree that in-place operations should allow different casting rules.
>  There are different opinions on this, of course, but generally this is 
> how
> NumPy has worked in the past.
>
> We did decide to change the default casting rule to "same_kind" but
> making an exception for in-place seems reasonable.
>

 I think that in these cases same_kind will flag what are most likely
 programming errors and sloppy code. It is easy to be explicit and doing so
 will make the code more readable because it will be immediately obvious
 what the multiplicand is without the need to recall what the numpy casting
 rules are in this exceptional case. IISTR several mentions of this before
 (Gael?), and in some of those cases it turned out that bugs were being
 turned up. Catching bugs with minimal effort is a good thing.

 Chuck


>>> True, it is quite likely to be a programming error, but then again,
>>> there are many cases where it isn't.  Is the problem strictly that we are
>>> trying to downcast the float to an int, or is it that we are trying to
>>> downcast to a lower precision?  Is there a way for one to explicitly relax
>>> the same_kind restriction?
>>>
>>
>> I think the problem is down casting across kinds, with the result that
>> floats are truncated and the imaginary parts of imaginaries might be
>> discarded. That is, the value, not just the precision, of the rhs changes.
>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
>> integer.
>>
>> It is true that this forces downstream to code up to a higher standard,
>> but I don't see that as a bad thing, especially if it exposes bugs. And it
>> isn't difficult to fix.
>>
>> Chuck
>>
>>
> Mind you, in my case, casting the rhs as an integer before doing the
> multiplication would be a bug, since our value for the rhs is usually
> between zero and one.  Multiplying first by the integer numerator before
> dividing by the integer denominator would likely cause issues with
> overflowing the 16 bit integer.
>
>
For the case in point I'd do

In [1]: a = np.array([1, 2, 3, 4, 5], dtype=np.int16)

In [2]: a //= 2

In [3]: a
Out[3]: array([0, 1, 1, 2, 2], dtype=int16)

Although I expect you would want something different in practice, the
current code already looks fragile to me, and I think it is a good thing you
are taking a closer look at it. If you really intend to go through a float,
then it should be something like

a = (a*(float(128)/256)).astype(np.int16)
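
A quick check that this matches the 1.6 in-place result (astype truncates
toward zero):

In [4]: a = np.array([1, 2, 3, 4, 5], dtype=np.int16)

In [5]: (a * (float(128) / 256)).astype(np.int16)
Out[5]: array([0, 1, 1, 2, 2], dtype=int16)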

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Benjamin Root
On Tue, Sep 18, 2012 at 3:19 PM, Ralf Gommers wrote:

>
>
> On Tue, Sep 18, 2012 at 9:13 PM, Benjamin Root  wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>>>


 On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
 charlesr.har...@gmail.com> wrote:

>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant 
> wrote:
>
>>
>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>
>> > Consider the following code:
>> >
>> > import numpy as np
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(255) / 15
>> >
>> > In v1.6.x, this yields:
>> > array([17, 34, 51, 68, 85], dtype=int16)
>> >
>> > But in master, this throws an exception about failing to cast via
>> same_kind.
>> >
>> > Note that numpy was smart about this operation before, consider:
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(128) / 256
>>
>> > yields:
>> > array([0, 1, 1, 2, 2], dtype=int16)
>> >
>> > Of course, this is different than if one does it in a non-in-place
>> manner:
>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>> >
>> > which yields an array with floating point dtype in both versions.
>>  I can appreciate the arguments for preventing this kind of implicit
>> casting between non-same_kind dtypes, but I argue that because the
>> operation is in-place, I (as the programmer) am explicitly stating
>> that I desire to utilize the current array to store the results of the
>> operation, dtype and all.  Obviously, we can't completely turn off this
>> rule (for example, an in-place addition between an integer array and a
>> datetime64 makes no sense), but surely there is some sort of happy medium
>> that would allow these sorts of operations to take place?
>> >
>> > Lastly, if it is determined that it is desirable to allow in-place
>> operations to continue working like they have before, I would like to see
>> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
>> as matplotlib, where this issue was first found) would have to change their
>> code anyway just to be compatible with numpy.
>>
>> I agree that in-place operations should allow different casting
>> rules.  There are different opinions on this, of course, but generally this
>> is how NumPy has worked in the past.
>>
>> We did decide to change the default casting rule to "same_kind" but
>> making an exception for in-place seems reasonable.
>>
>
> I think that in these cases same_kind will flag what are most likely
> programming errors and sloppy code. It is easy to be explicit and doing so
> will make the code more readable because it will be immediately obvious
> what the multiplicand is without the need to recall what the numpy casting
> rules are in this exceptional case. IISTR several mentions of this before
> (Gael?), and in some of those cases it turned out that bugs were being
> turned up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
 True, it is quite likely to be a programming error, but then again,
 there are many cases where it isn't.  Is the problem strictly that we are
 trying to downcast the float to an int, or is it that we are trying to
 downcast to a lower precision?  Is there a way for one to explicitly relax
 the same_kind restriction?

>>>
>>> I think the problem is downcasting across kinds, with the result that
>>> floats are truncated and the imaginary parts of imaginaries might be
>>> discarded. That is, the value, not just the precision, of the rhs changes.
>>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
>>> integer.
>>>
>>> It is true that this forces downstream to code up to a higher standard,
>>> but I don't see that as a bad thing, especially if it exposes bugs. And it
>>> isn't difficult to fix.
>>>
>>> Chuck
>>>
>>>
>> Mind you, in my case, casting the rhs as an integer before doing the
>> multiplication would be a bug, since our value for the rhs is usually
>> between zero and one.  Multiplying first by the integer numerator before
>> dividing by the integer denominator would likely cause issues with
>> overflowing the 16 bit integer.
>>
>
> Then you'd have to do
>
>
> >>> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> >>> np.multiply(a, 0.5, out=a, casting="unsafe")
>
> array([0, 1, 1, 2, 2], dtype=int16)
>
> Ralf
>
>
That is exactly what I am looking for!  When did the "casting" kwarg come
about?  I am unfamiliar with it.

Ben Root


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Ralf Gommers
On Tue, Sep 18, 2012 at 9:13 PM, Benjamin Root  wrote:

>
>
> On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>>> charlesr.har...@gmail.com> wrote:
>>>


 On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant 
 wrote:

>
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast via
> same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
>
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a non-in-place
> manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both versions.  I
> can appreciate the arguments for preventing this kind of implicit casting
> between non-same_kind dtypes, but I argue that because the operation is
> in-place, I (as the programmer) am explicitly stating that I desire to
> utilize the current array to store the results of the operation, dtype and
> all.  Obviously, we can't completely turn off this rule (for example, an
> in-place addition between an integer array and a datetime64 makes no sense),
> but surely there is some sort of happy medium that would allow these sorts
> of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow in-place
> operations to continue working like they have before, I would like to see
> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
> as matplotlib, where this issue was first found) would have to change their
> code anyway just to be compatible with numpy.
>
> I agree that in-place operations should allow different casting rules.
>  There are different opinions on this, of course, but generally this is how
> NumPy has worked in the past.
>
> We did decide to change the default casting rule to "same_kind" but
> making an exception for in-place seems reasonable.
>

 I think that in these cases same_kind will flag what are most likely
 programming errors and sloppy code. It is easy to be explicit and doing so
 will make the code more readable because it will be immediately obvious
 what the multiplicand is without the need to recall what the numpy casting
 rules are in this exceptional case. IISTR several mentions of this before
 (Gael?), and in some of those cases it turned out that bugs were being
 turned up. Catching bugs with minimal effort is a good thing.

 Chuck


>>> True, it is quite likely to be a programming error, but then again,
>>> there are many cases where it isn't.  Is the problem strictly that we are
>>> trying to downcast the float to an int, or is it that we are trying to
>>> downcast to a lower precision?  Is there a way for one to explicitly relax
>>> the same_kind restriction?
>>>
>>
>> I think the problem is downcasting across kinds, with the result that
>> floats are truncated and the imaginary parts of imaginaries might be
>> discarded. That is, the value, not just the precision, of the rhs changes.
>> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
>> integer.
>>
>> It is true that this forces downstream to code up to a higher standard,
>> but I don't see that as a bad thing, especially if it exposes bugs. And it
>> isn't difficult to fix.
>>
>> Chuck
>>
>>
> Mind you, in my case, casting the rhs as an integer before doing the
> multiplication would be a bug, since our value for the rhs is usually
> between zero and one.  Multiplying first by the integer numerator before
> dividing by the integer denominator would likely cause issues with
> overflowing the 16 bit integer.
>

Then you'd have to do

>>> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> np.multiply(a, 0.5, out=a, casting="unsafe")
array([0, 1, 1, 2, 2], dtype=int16)
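
The same pattern handles the original example from this thread (a sketch;
255/15 is exactly 17.0, so the unsafe cast back to int16 is lossless):

>>> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> np.multiply(a, float(255) / 15, out=a, casting="unsafe")
array([17, 34, 51, 68, 85], dtype=int16)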

Ralf


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Benjamin Root
On Tue, Sep 18, 2012 at 2:47 PM, Charles R Harris  wrote:

>
>
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>
>>
>>
>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>>>

 On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:

 > Consider the following code:
 >
 > import numpy as np
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(255) / 15
 >
 > In v1.6.x, this yields:
 > array([17, 34, 51, 68, 85], dtype=int16)
 >
 > But in master, this throws an exception about failing to cast via
 same_kind.
 >
 > Note that numpy was smart about this operation before, consider:
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(128) / 256

 > yields:
 > array([0, 1, 1, 2, 2], dtype=int16)
 >
 > Of course, this is different than if one does it in a non-in-place
 manner:
 > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
 >
 > which yields an array with floating point dtype in both versions.  I
 can appreciate the arguments for preventing this kind of implicit casting
 between non-same_kind dtypes, but I argue that because the operation is
 in-place, I (as the programmer) am explicitly stating that I desire to
 utilize the current array to store the results of the operation, dtype and
 all.  Obviously, we can't completely turn off this rule (for example, an
 in-place addition between an integer array and a datetime64 makes no sense),
 but surely there is some sort of happy medium that would allow these sorts
 of operations to take place?
 >
 > Lastly, if it is determined that it is desirable to allow in-place
 operations to continue working like they have before, I would like to see
 such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
 as matplotlib, where this issue was first found) would have to change their
 code anyway just to be compatible with numpy.

 I agree that in-place operations should allow different casting rules.
  There are different opinions on this, of course, but generally this is how
 NumPy has worked in the past.

 We did decide to change the default casting rule to "same_kind" but
 making an exception for in-place seems reasonable.

>>>
>>> I think that in these cases same_kind will flag what are most likely
>>> programming errors and sloppy code. It is easy to be explicit and doing so
>>> will make the code more readable because it will be immediately obvious
>>> what the multiplicand is without the need to recall what the numpy casting
>>> rules are in this exceptional case. IISTR several mentions of this before
>>> (Gael?), and in some of those cases it turned out that bugs were being
>>> turned up. Catching bugs with minimal effort is a good thing.
>>>
>>> Chuck
>>>
>>>
>> True, it is quite likely to be a programming error, but then again, there
>> are many cases where it isn't.  Is the problem strictly that we are trying
>> to downcast the float to an int, or is it that we are trying to downcast to
>> a lower precision?  Is there a way for one to explicitly relax the
>> same_kind restriction?
>>
>
> I think the problem is downcasting across kinds, with the result that
> floats are truncated and the imaginary parts of imaginaries might be
> discarded. That is, the value, not just the precision, of the rhs changes.
> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
> integer.
>
> It is true that this forces downstream to code up to a higher standard,
> but I don't see that as a bad thing, especially if it exposes bugs. And it
> isn't difficult to fix.
>
> Chuck
>
>
Mind you, in my case, casting the rhs as an integer before doing the
multiplication would be a bug, since our value for the rhs is usually
between zero and one.  Multiplying first by the integer numerator before
dividing by the integer denominator would likely cause issues with
overflowing the 16 bit integer.
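
To make the overflow concern concrete, a small sketch (values picked for
illustration; numpy's fixed-width integers wrap modulo 2**16):

import numpy as np

a = np.array([30000], dtype=np.int16)
a *= 128   # 30000 * 128 = 3840000, which wraps to -26624 in int16
a //= 256  # leaves array([-104], dtype=int16) instead of the intended 15000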

Ben Root


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 1:08 PM, Travis Oliphant wrote:

>
> On Sep 18, 2012, at 1:47 PM, Charles R Harris wrote:
>
>
>
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
>
>>
>>
>> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>>>

 On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:

 > Consider the following code:
 >
 > import numpy as np
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(255) / 15
 >
 > In v1.6.x, this yields:
 > array([17, 34, 51, 68, 85], dtype=int16)
 >
 > But in master, this throws an exception about failing to cast via
 same_kind.
 >
 > Note that numpy was smart about this operation before, consider:
 > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
 > a *= float(128) / 256

 > yields:
 > array([0, 1, 1, 2, 2], dtype=int16)
 >
 > Of course, this is different than if one does it in a non-in-place
 manner:
 > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
 >
 > which yields an array with floating point dtype in both versions.  I
 can appreciate the arguments for preventing this kind of implicit casting
 between non-same_kind dtypes, but I argue that because the operation is
 in-place, I (as the programmer) am explicitly stating that I desire to
 utilize the current array to store the results of the operation, dtype and
 all.  Obviously, we can't completely turn off this rule (for example, an
 in-place addition between an integer array and a datetime64 makes no sense),
 but surely there is some sort of happy medium that would allow these sorts
 of operations to take place?
 >
 > Lastly, if it is determined that it is desirable to allow in-place
 operations to continue working like they have before, I would like to see
 such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
 as matplotlib, where this issue was first found) would have to change their
 code anyway just to be compatible with numpy.

 I agree that in-place operations should allow different casting rules.
  There are different opinions on this, of course, but generally this is how
 NumPy has worked in the past.

 We did decide to change the default casting rule to "same_kind" but
 making an exception for in-place seems reasonable.

>>>
>>> I think that in these cases same_kind will flag what are most likely
>>> programming errors and sloppy code. It is easy to be explicit and doing so
>>> will make the code more readable because it will be immediately obvious
>>> what the multiplicand is without the need to recall what the numpy casting
>>> rules are in this exceptional case. IISTR several mentions of this before
>>> (Gael?), and in some of those cases it turned out that bugs were being
>>> turned up. Catching bugs with minimal effort is a good thing.
>>>
>>> Chuck
>>>
>>>
>> True, it is quite likely to be a programming error, but then again, there
>> are many cases where it isn't.  Is the problem strictly that we are trying
>> to downcast the float to an int, or is it that we are trying to downcast to
>> a lower precision?  Is there a way for one to explicitly relax the
>> same_kind restriction?
>>
>
> I think the problem is downcasting across kinds, with the result that
> floats are truncated and the imaginary parts of imaginaries might be
> discarded. That is, the value, not just the precision, of the rhs changes.
> So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
> integer.
>
> It is true that this forces downstream to code up to a higher standard,
> but I don't see that as a bad thing, especially if it exposes bugs. And it
> isn't difficult to fix.
>
>
> Shouldn't we be issuing a warning, though?   Even if the desire is to
> change the casting rules?   The fact that multiple codes are breaking and
> need to be "upgraded" seems like a hard thing to require of someone going
> straight from 1.6 to 1.7. That's what I'm opposed to.
>

I think a warning would do just as well. I'd tend to regard the broken
codes as already broken, but that's just me ;)


>
> All of these efforts move NumPy to its use as a library instead of an
> interactive "environment" where it started which is a good direction to
> move, but managing this move in the context of a very large user-community
> is the challenge we have.
>

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Travis Oliphant

On Sep 18, 2012, at 1:47 PM, Charles R Harris wrote:

> 
> 
> On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:
> 
> 
> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris  
> wrote:
> 
> 
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant  wrote:
> 
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
> 
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast via same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
> 
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a non-in-place manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both versions.  I can 
> > appreciate the arguments for preventing this kind of implicit casting 
> > between non-same_kind dtypes, but I argue that because the operation is
> > in-place, I (as the programmer) am explicitly stating that I desire to
> > utilize the current array to store the results of the operation, dtype and
> > all.  Obviously, we can't completely turn off this rule (for example, an
> > in-place addition between an integer array and a datetime64 makes no sense),
> > but surely there is some sort of happy medium that would allow these sorts
> > of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow in-place 
> > operations to continue working like they have before, I would like to see 
> > such a fix in v1.7 because if it isn't in 1.7, then other libraries (such 
> > as matplotlib, where this issue was first found) would have to change their 
> > code anyway just to be compatible with numpy.
> 
> I agree that in-place operations should allow different casting rules.  There 
> are different opinions on this, of course, but generally this is how NumPy 
> has worked in the past.
> 
> We did decide to change the default casting rule to "same_kind" but making an 
> exception for in-place seems reasonable.
> 
> I think that in these cases same_kind will flag what are most likely 
> programming errors and sloppy code. It is easy to be explicit and doing so 
> will make the code more readable because it will be immediately obvious what 
> the multiplicand is without the need to recall what the numpy casting rules 
> are in this exceptional case. IISTR several mentions of this before (Gael?), 
> and in some of those cases it turned out that bugs were being turned up. 
> Catching bugs with minimal effort is a good thing.
> 
> Chuck 
> 
> 
> True, it is quite likely to be a programming error, but then again, there are 
> many cases where it isn't.  Is the problem strictly that we are trying to 
> downcast the float to an int, or is it that we are trying to downcast to a 
> lower precision?  Is there a way for one to explicitly relax the same_kind 
> restriction?
> 
> I think the problem is downcasting across kinds, with the result that floats
> are truncated and the imaginary parts of imaginaries might be discarded. That 
> is, the value, not just the precision, of the rhs changes. So I'd favor an 
> explicit cast in code like this, i.e., cast the rhs to an integer.
> 
> It is true that this forces downstream to code up to a higher standard, but I 
> don't see that as a bad thing, especially if it exposes bugs. And it isn't 
> difficult to fix.

Shouldn't we be issuing a warning, though?   Even if the desire is to change 
the casting rules?   The fact that multiple codes are breaking and need to be 
"upgraded" seems like a hard thing to require of someone going straight from 
1.6 to 1.7. That's what I'm opposed to.   
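
In the meantime, code that has to run on both versions can guard the
in-place form, along these lines (an untested sketch; on master the failed
cast surfaces as a TypeError):

import numpy as np

a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
try:
    a *= float(255) / 15  # accepted under the 1.6 default casting rule
except TypeError:
    # 1.7 rejects the implicit downcast; request it explicitly instead
    np.multiply(a, float(255) / 15, out=a, casting="unsafe")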

All of these efforts move NumPy to its use as a library instead of an 
interactive "environment" where it started which is a good direction to move, 
but managing this move in the context of a very large user-community is the 
challenge we have. 

-Travis




> 
> Chuck 
> 



Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Charles R Harris
On Tue, Sep 18, 2012 at 11:39 AM, Benjamin Root  wrote:

>
>
> On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>>
>>>
>>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>>
>>> > Consider the following code:
>>> >
>>> > import numpy as np
>>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> > a *= float(255) / 15
>>> >
>>> > In v1.6.x, this yields:
>>> > array([17, 34, 51, 68, 85], dtype=int16)
>>> >
>>> > But in master, this throws an exception about failing to cast via
>>> same_kind.
>>> >
>>> > Note that numpy was smart about this operation before, consider:
>>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> > a *= float(128) / 256
>>>
>>> > yields:
>>> > array([0, 1, 1, 2, 2], dtype=int16)
>>> >
>>> > Of course, this is different than if one does it in a non-in-place
>>> manner:
>>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>>> >
>>> > which yields an array with floating point dtype in both versions.  I
>>> can appreciate the arguments for preventing this kind of implicit casting
>>> between non-same_kind dtypes, but I argue that because the operation is
>>> in-place, I (as the programmer) am explicitly stating that I desire to
>>> utilize the current array to store the results of the operation, dtype and
>>> all.  Obviously, we can't completely turn off this rule (for example, an
>>> in-place addition between an integer array and a datetime64 makes no sense),
>>> but surely there is some sort of happy medium that would allow these sorts
>>> of operations to take place?
>>> >
>>> > Lastly, if it is determined that it is desirable to allow in-place
>>> operations to continue working like they have before, I would like to see
>>> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
>>> as matplotlib, where this issue was first found) would have to change their
>>> code anyway just to be compatible with numpy.
>>>
>>> I agree that in-place operations should allow different casting rules.
>>>  There are different opinions on this, of course, but generally this is how
>>> NumPy has worked in the past.
>>>
>>> We did decide to change the default casting rule to "same_kind" but
>>> making an exception for in-place seems reasonable.
>>>
>>
>> I think that in these cases same_kind will flag what are most likely
>> programming errors and sloppy code. It is easy to be explicit and doing so
>> will make the code more readable because it will be immediately obvious
>> what the multiplicand is without the need to recall what the numpy casting
>> rules are in this exceptional case. IISTR several mentions of this before
>> (Gael?), and in some of those cases it turned out that bugs were being
>> turned up. Catching bugs with minimal effort is a good thing.
>>
>> Chuck
>>
>>
> True, it is quite likely to be a programming error, but then again, there
> are many cases where it isn't.  Is the problem strictly that we are trying
> to downcast the float to an int, or is it that we are trying to downcast to
> a lower precision?  Is there a way for one to explicitly relax the
> same_kind restriction?
>

I think the problem is downcasting across kinds, with the result that
floats are truncated and the imaginary parts of imaginaries might be
discarded. That is, the value, not just the precision, of the rhs changes.
So I'd favor an explicit cast in code like this, i.e., cast the rhs to an
integer.

It is true that this forces downstream to code up to a higher standard, but
I don't see that as a bad thing, especially if it exposes bugs. And it
isn't difficult to fix.
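
The kind-versus-precision distinction is easy to check with np.can_cast,
which takes the same casting keyword (a quick sketch):

>>> np.can_cast(np.float64, np.float32, casting="same_kind")
True
>>> np.can_cast(np.float64, np.int16, casting="same_kind")
False
>>> np.can_cast(np.float64, np.int16, casting="unsafe")
True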

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-18 Thread Benjamin Root
On Mon, Sep 17, 2012 at 9:33 PM, Charles R Harris  wrote:

>
>
> On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:
>
>>
>> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>>
>> > Consider the following code:
>> >
>> > import numpy as np
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(255) / 15
>> >
>> > In v1.6.x, this yields:
>> > array([17, 34, 51, 68, 85], dtype=int16)
>> >
>> > But in master, this throws an exception about failing to cast via
>> same_kind.
>> >
>> > Note that numpy was smart about this operation before, consider:
>> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>> > a *= float(128) / 256
>>
>> > yields:
>> > array([0, 1, 1, 2, 2], dtype=int16)
>> >
>> > Of course, this is different than if one does it in a non-in-place
>> manner:
>> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
>> >
>> > which yields an array with floating point dtype in both versions.  I
>> can appreciate the arguments for preventing this kind of implicit casting
>> between non-same_kind dtypes, but I argue that because the operation is
>> in-place, I (as the programmer) am explicitly stating that I desire to
>> utilize the current array to store the results of the operation, dtype and
>> all.  Obviously, we can't completely turn off this rule (for example, an
>> in-place addition between an integer array and a datetime64 makes no sense),
>> but surely there is some sort of happy medium that would allow these sorts
>> of operations to take place?
>> >
>> > Lastly, if it is determined that it is desirable to allow in-place
>> operations to continue working like they have before, I would like to see
>> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
>> as matplotlib, where this issue was first found) would have to change their
>> code anyway just to be compatible with numpy.
>>
>> I agree that in-place operations should allow different casting rules.
>>  There are different opinions on this, of course, but generally this is how
>> NumPy has worked in the past.
>>
>> We did decide to change the default casting rule to "same_kind" but
>> making an exception for in-place seems reasonable.
>>
>
> I think that in these cases same_kind will flag what are most likely
> programming errors and sloppy code. It is easy to be explicit and doing so
> will make the code more readable because it will be immediately obvious
> what the multiplicand is without the need to recall what the numpy casting
> rules are in this exceptional case. IISTR several mentions of this before
> (Gael?), and in some of those cases it turned out that bugs were being
> turned up. Catching bugs with minimal effort is a good thing.
>
> Chuck
>
>
True, it is quite likely to be a programming error, but then again, there
are many cases where it isn't.  Is the problem strictly that we are trying
to downcast the float to an int, or is it that we are trying to downcast to
a lower precision?  Is there a way for one to explicitly relax the
same_kind restriction?

Thanks,
Ben Root


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-17 Thread Charles R Harris
On Mon, Sep 17, 2012 at 3:40 PM, Travis Oliphant wrote:

>
> On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:
>
> > Consider the following code:
> >
> > import numpy as np
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(255) / 15
> >
> > In v1.6.x, this yields:
> > array([17, 34, 51, 68, 85], dtype=int16)
> >
> > But in master, this throws an exception about failing to cast via
> same_kind.
> >
> > Note that numpy was smart about this operation before, consider:
> > a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> > a *= float(128) / 256
>
> > yields:
> > array([0, 1, 1, 2, 2], dtype=int16)
> >
> > Of course, this is different than if one does it in a non-in-place
> manner:
> > np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> >
> > which yields an array with floating point dtype in both versions.  I can
> appreciate the arguments for preventing this kind of implicit casting
> between non-same_kind dtypes, but I argue that because the operation is
> in-place, I (as the programmer) am explicitly stating that I desire to
> utilize the current array to store the results of the operation, dtype and
> all.  Obviously, we can't completely turn off this rule (for example, an
> in-place addition between an integer array and a datetime64 makes no sense),
> but surely there is some sort of happy medium that would allow these sorts
> of operations to take place?
> >
> > Lastly, if it is determined that it is desirable to allow in-place
> operations to continue working like they have before, I would like to see
> such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
> as matplotlib, where this issue was first found) would have to change their
> code anyway just to be compatible with numpy.
>
> I agree that in-place operations should allow different casting rules.
>  There are different opinions on this, of course, but generally this is how
> NumPy has worked in the past.
>
> We did decide to change the default casting rule to "same_kind" but making
> an exception for in-place seems reasonable.
>

I think that in these cases same_kind will flag what are most likely
programming errors and sloppy code. It is easy to be explicit and doing so
will make the code more readable because it will be immediately obvious
what the multiplicand is without the need to recall what the numpy casting
rules are in this exceptional case. IISTR several mentions of this before
(Gael?), and in some of those cases it turned out that bugs were being
turned up. Catching bugs with minimal effort is a good thing.

Chuck


Re: [Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-17 Thread Travis Oliphant

On Sep 17, 2012, at 8:42 AM, Benjamin Root wrote:

> Consider the following code:
> 
> import numpy as np
> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> a *= float(255) / 15
> 
> In v1.6.x, this yields:
> array([17, 34, 51, 68, 85], dtype=int16)
> 
> But in master, this throws an exception about failing to cast via same_kind.
> 
> Note that numpy was smart about this operation before, consider:
> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
> a *= float(128) / 256

> yields:
> array([0, 1, 1, 2, 2], dtype=int16)
> 
> Of course, this is different than if one does it in a non-in-place manner:
> np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5
> 
> which yields an array with floating point dtype in both versions.  I can 
> appreciate the arguments for preventing this kind of implicit casting between 
> non-same_kind dtypes, but I argue that because the operation is in-place,
> I (as the programmer) am explicitly stating that I desire to utilize the
> current array to store the results of the operation, dtype and all.
> Obviously, we can't completely turn off this rule (for example, an in-place
> addition between an integer array and a datetime64 makes no sense), but surely
> there is some sort of happy medium that would allow these sorts of operations
> to take place?
> 
> Lastly, if it is determined that it is desirable to allow in-place operations 
> to continue working like they have before, I would like to see such a fix in 
> v1.7 because if it isn't in 1.7, then other libraries (such as matplotlib, 
> where this issue was first found) would have to change their code anyway just 
> to be compatible with numpy.

I agree that in-place operations should allow different casting rules.  There 
are different opinions on this, of course, but generally this is how NumPy has 
worked in the past.  

We did decide to change the default casting rule to "same_kind" but making an 
exception for in-place seems reasonable. 
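
Under such an exception the two forms would keep behaving as they did in
1.6 (a sketch restating the examples above):

>>> a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
>>> a *= 0.5        # in place: the int16 dtype is preserved
>>> a
array([0, 1, 1, 2, 2], dtype=int16)
>>> np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5   # out of place: upcasts
array([ 0.5,  1. ,  1.5,  2. ,  2.5])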

-Travis






[Numpy-discussion] Regression: in-place operations (possibly intentional)

2012-09-17 Thread Benjamin Root
Consider the following code:

import numpy as np
a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
a *= float(255) / 15

In v1.6.x, this yields:
array([17, 34, 51, 68, 85], dtype=int16)

But in master, this throws an exception about failing to cast via same_kind.
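
Concretely, the failure looks like this (error text approximate):

>>> a *= float(255) / 15
TypeError: Cannot cast ufunc output from dtype('float64') to dtype('int16')
with casting rule 'same_kind'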

Note that numpy was smart about this operation before, consider:
a = np.array([1, 2, 3, 4, 5], dtype=np.int16)
a *= float(128) / 256

yields:
array([0, 1, 1, 2, 2], dtype=int16)

Of course, this is different than if one does it in a non-in-place manner:
np.array([1, 2, 3, 4, 5], dtype=np.int16) * 0.5

which yields an array with floating point dtype in both versions.  I can
appreciate the arguments for preventing this kind of implicit casting
between non-same_kind dtypes, but I argue that because the operation is
in-place, I (as the programmer) am explicitly stating that I desire to
utilize the current array to store the results of the operation, dtype and
all.  Obviously, we can't completely turn off this rule (for example, an
in-place addition between an integer array and a datetime64 makes no sense),
but surely there is some sort of happy medium that would allow these sorts
of operations to take place?

Lastly, if it is determined that it is desirable to allow in-place
operations to continue working like they have before, I would like to see
such a fix in v1.7 because if it isn't in 1.7, then other libraries (such
as matplotlib, where this issue was first found) would have to change their
code anyway just to be compatible with numpy.

Cheers!
Ben Root