On 02/13/2012 08:07 PM, Charles R Harris wrote:
Let it go, Travis. It's a waste of time.
(Off-list) Chuck, I really appreciate your consistent good sense; this
is just one of many examples. Thank you for all your numpy work.
Eric
On Tue, Feb 14, 2012 at 12:05 AM, Eric Firing efir...@hawaii.edu wrote:
On Tue, Feb 14, 2012 at 12:58 AM, Travis Oliphant tra...@continuum.iowrote:
On Mon, 2012-02-13 at 22:56 -0600, Travis Oliphant wrote:
But, I am also aware of *a lot* of users who never voice their opinion
on this list, and a lot of features that they want and need and are
currently working around the limitations of NumPy to get. These are
going to be my primary
On Feb 14, 2012, at 1:47 PM, Charles R Harris wrote:
clip
About the behavior in question, I would frame this as a specific case with
arguments for and against, like so:
The Current Behavior
In [1]: array([127], int8) + 127
Out[1]: array([-2], dtype=int8)
In [2]: array([127], int8) + 128
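For what it's worth, the first case can be reproduced without touching the version-dependent scalar rules at all by keeping both operands explicitly int8; this is just a minimal sketch of the wraparound, not a statement about what the rules should be:

```python
import numpy as np

# With both operands explicitly int8, every NumPy release keeps the
# result in int8, and 127 + 127 = 254 wraps around to -2.
a = np.array([127], dtype=np.int8)
result = a + np.int8(127)
print(result, result.dtype)
```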
On Feb 14, 2012, at 7:04 AM, Henry Gomersall wrote:
On Tue, 2012-02-14 at 14:14 -0600, Travis Oliphant wrote:
Is that a prompt for feedback? :)
Absolutely. That's the reason I'm getting more active on this list.
But, at the same time, we all need to be aware of the tens of
thousands of users of NumPy who don't use the mailing list and who
Hi,
I recently noticed a change in the upcasting rules in numpy 1.6.0 /
1.6.1 and I just wanted to check it was intentional.
For all versions of numpy I've tested, we have:
>>> import numpy as np
>>> Adata = np.array([127], dtype=np.int8)
>>> Bdata = np.int16(127)
>>> (Adata + Bdata).dtype
dtype('int8')
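One way to see the boundary of the dispute: the scalar rules above are what changed between releases, while array-with-array promotion goes by dtype alone and has stayed stable. A small sketch of the stable part:

```python
import numpy as np

a8 = np.array([127], dtype=np.int8)
b16 = np.array([127], dtype=np.int16)

# Array-with-array promotion follows the dtypes, not the values:
# int8 combined with int16 always yields int16.
print((a8 + b16).dtype)

# np.promote_types reports the same pure-dtype rule directly.
print(np.promote_types(np.int8, np.int16))
```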
Hmmm. This seems like a regression. The scalar casting API was fairly
intentional.
What is the reason for the change?
--
Travis Oliphant
(on a mobile)
512-826-7480
On Feb 13, 2012, at 6:25 PM, Matthew Brett matthew.br...@gmail.com wrote:
On Mon, Feb 13, 2012 at 5:00 PM, Travis Oliphant tra...@continuum.iowrote:
In order to make 1.6 ABI-compatible with 1.5, I basically had to rewrite
this subsystem. There
The problem is that these sorts of things take a while to emerge. The original
system was more consistent than I think you give it credit for. What you are
seeing is that most people get NumPy from distributions and are relying on us
to keep things consistent.
The scalar coercion rules were
I believe the main lessons to draw from this are just how incredibly
important a complete test suite and staying on top of code reviews are. I'm
of the opinion that any explicit design choice of this nature should be
reflected in the test suite, so that if someone changes it years later,
they get
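As a sketch of the kind of test being suggested (the test name here is hypothetical; the expectations pinned are only the stable array-with-array rules):

```python
import numpy as np

def test_int_promotion_is_pinned():
    # Pin the intended promotion so that a later rewrite of the
    # casting subsystem fails loudly instead of changing silently.
    assert (np.array([1], np.int8) + np.array([1], np.int16)).dtype == np.int16
    assert (np.array([1], np.int8) + np.array([1.0], np.float64)).dtype == np.float64

test_int_promotion_is_pinned()
```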
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant tra...@continuum.iowrote:
I disagree with your assessment of the subscript operator, but I'm sure we
will have plenty of time to discuss that. I don't think it's correct to
compare the corner cases of the fancy indexing and regular indexing to
It hasn't changed: since float is of a fundamentally different kind of
data, it's expected to upcast the result.
However, if I may add a personal comment on numpy's casting rules: until
now, I've found them confusing and somewhat inconsistent. Some of the
inconsistencies I've found were bugs,
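The "different kind of data" point can be seen directly; note this is only a sketch of the kind-based upcast, since the exact floating precision chosen has varied across NumPy versions:

```python
import numpy as np

ints = np.array([1, 2, 3], dtype=np.int8)
result = ints + 1.0  # a Python float in the mix forces a floating result

# Which floating precision you get has varied across NumPy versions;
# that the result is floating-point ('f' kind) has not.
print(result.dtype.kind)
```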
On Monday, February 13, 2012, Charles R Harris charlesr.har...@gmail.com wrote:
On Mon, Feb 13, 2012 at 8:04 PM, Travis Oliphant tra...@continuum.iowrote:
These are great suggestions. I am happy to start digging into the code. I'm
also happy to re-visit any and all design decisions for NumPy 2.0 (with a
strong eye towards helping people migrate and documenting the results). Mark,
I think you have done an excellent job of working with a
On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
On Monday, February 13, 2012, Travis Oliphant tra...@continuum.io wrote:
No argument on any of this. It's just that this needs to happen at NumPy
2.0, not in the NumPy 1.X series. I think requiring a re-compile is
far less onerous than subtly changing the type coercion in a 1.5 to 1.6
release. That's my major point, and I'm surprised others are
On Mon, Feb 13, 2012 at 11:00 PM, Travis Oliphant tra...@continuum.iowrote:
On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant tra...@continuum.iowrote:
On Mon, Feb 13, 2012 at 10:38 PM, Charles R Harris charlesr.har...@gmail.com wrote:
On Mon, Feb 13, 2012 at 10:48 PM, Mark Wiebe mwwi...@gmail.com wrote:
You might be right, Chuck. I would like to investigate more, however.
What I fear is that there are *a lot* of users still on NumPy 1.3 and NumPy
1.5. The fact that we haven't heard any complaints, yet, does not mean to
me that we aren't creating headaches for people later who have
The lack of commutativity wasn't in precision, it was in the typecodes, and
was there from the beginning. That caused confusion. A current cause of
confusion is the many to one relation of, say, int32 and long, longlong which
varies platform to platform. I think that confusion is a more
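The many-to-one mapping described above can be observed directly; which C type names coincide is platform-dependent, so this sketch only reports what the local platform does:

```python
import numpy as np

# Several C-level type names funnel into a handful of fixed-width dtypes.
# 'l' (C long) is 32-bit on Windows but 64-bit on most Unix platforms,
# so whether it coincides with int32 or with 'q' (C long long) varies.
for name in ('i', 'l', 'q'):
    d = np.dtype(name)
    print(name, '->', d.name, d.itemsize, 'bytes')
```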