You can get a polygon buffer from http://angusj.com/delphi/clipper.php and
make a Cython interface to it.
HTH
Niki
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
I have a pretty silly question about initializing an array a to a given
scalar value, say A.
Most of the time I use a = np.ones(shape)*A, which seems the most
widespread idiom, but I recently got interested in getting some
performance improvement.
I tried a = np.zeros(shape)+A, based on
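For reference, the idioms under comparison, plus the in-place fill variant discussed elsewhere in the thread, can be written out side by side (shape and A are placeholder values):

```python
import numpy as np

shape = (1000, 1000)
A = 3.5

a1 = np.ones(shape) * A    # allocate ones, then multiply
a2 = np.zeros(shape) + A   # allocate zeros, then add
a3 = np.empty(shape)       # allocate uninitialized memory...
a3.fill(A)                 # ...and fill it in place, with no temporary
```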
Hi,
Apologies if the following is a trivial question. I wish to index the
columns of the following 2D array
In [78]: neighbourhoods
Out[78]:
array([[8, 0, 1],
[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6],
[5, 6, 7],
[6, 7, 8],
[7, 8, 0]])
I think the following is what you want:
neighbourhoods[range(9), perf[neighbourhoods].argmax(axis=1)]
-Travis
On Feb 13, 2012, at 1:26 PM, William Furnass wrote:
np.array( [neighbourhoods[i][perf[neighbourhoods].argmax(axis=1)[i]]
for i in xrange(neighbourhoods.shape[0])] )
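A self-contained sketch of the fancy-indexing answer; here perf is a hypothetical length-9 score array, and the neighbourhoods follow the circular pattern shown in the question:

```python
import numpy as np

# Circular neighbourhoods as in the question: row i is [i-1, i, i+1] mod 9
neighbourhoods = np.array([[(i - 1) % 9, i, (i + 1) % 9] for i in range(9)])
perf = np.random.rand(9)  # hypothetical per-element performance scores

# For each row, select the member whose perf score is largest
best = neighbourhoods[np.arange(9), perf[neighbourhoods].argmax(axis=1)]
```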
On Mon, Feb 13, 2012 at 1:01 AM, Niki Spahiev niki.spah...@gmail.com wrote:
You can get a polygon buffer from http://angusj.com/delphi/clipper.php and
make a Cython interface to it.
This should be built into GEOS as well, and the shapely package
provides a python wrapper already.
-Chris
HTH
Thank you, that does the trick.
Regards,
Will
On 13 February 2012 19:39, Travis Oliphant tra...@continuum.io wrote:
I think the following is what you want:
neighbourhoods[range(9), perf[neighbourhoods].argmax(axis=1)]
-Travis
On Feb 13, 2012, at 1:26 PM, William Furnass wrote:
np.array(
On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant tra...@continuum.io wrote:
I'm wondering about using one of these commercial issue tracking plans for
NumPy and would like thoughts and comments. Both of these plans allow
Open Source projects to have unlimited plans for free.
Free usage of
On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant tra...@continuum.io wrote:
I'm wondering about using one of these commercial issue tracking plans for
NumPy and would like thoughts and comments. Both of these plans allow Open
Source projects to have unlimited plans for free.
Free
Hi,
On Mon, Feb 13, 2012 at 12:44 PM, Travis Oliphant tra...@continuum.io wrote:
On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant tra...@continuum.io
wrote:
I'm wondering about using one of these commercial issue tracking plans for
NumPy and would like thoughts and comments. Both of
On 13/02/2012 19:17, eat wrote:
wouldn't it be nice if you could just write:
a = np.empty(shape).fill(A)
this would be possible if .fill(.) just returned self.
Thanks for the tip. I noticed several times this was not working
(because of course, in the meantime, I forgot it...)
but I had
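The pitfall being described is that ndarray.fill() operates in place and returns None, so the chained one-liner silently binds None instead of the array:

```python
import numpy as np

a = np.empty(5).fill(7.0)  # fill() returns None, so a is None
b = np.empty(5)
b.fill(7.0)                # the working two-step form: b holds 7.0 everywhere
```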
-- Forwarded message --
From: Andrea Gavana andrea.gav...@gmail.com
Date: Feb 13, 2012 11:31 PM
Subject: Re: [Numpy-discussion] Creating parallel curves
To: Jonathan Hilmer jkhil...@gmail.com
Thank you Jonathan for this, it's exactly what I was looking for. I'll try
it tomorrow.
On 2/13/12 2:56 PM, Matthew Brett wrote:
I have the impression that the Cython / SAGE team are happy with their
Jenkins configuration.
I'm not aware of a Jenkins buildbot system for Sage, though I think
Cython uses such a system: https://sage.math.washington.edu:8091/hudson/
We do have a
Hi,
On Mon, Feb 13, 2012 at 2:33 PM, jason-s...@creativetrax.com wrote:
On 2/13/12 2:56 PM, Matthew Brett wrote:
I have the impression that the Cython / SAGE team are happy with their
Jenkins configuration.
I'm not aware of a Jenkins buildbot system for Sage, though I think
Cython uses
On Mon, Feb 13, 2012 at 12:56 PM, Matthew Brett matthew.br...@gmail.com wrote:
I have the impression that the Cython / SAGE team are happy with their
Jenkins configuration.
So are we in IPython; thanks to Thomas Kluyver's recent leadership on
this front, it's now running quite smoothly:
Hi,
I have a short piece of code where the use of an index array feels
right, but incurs a severe performance penalty: It's about an order
of magnitude slower than all other operations with arrays of that
size.
It comes up in a piece of code which is doing a large number of
on-the-fly histograms
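The slow pattern in question is accumulation through an index array. As a sketch with made-up indices: fancy-indexed increments go through a temporary and count each repeated index only once, while np.bincount accumulates every occurrence in a single C-level pass:

```python
import numpy as np

nbins = 10
idx = np.array([1, 3, 3, 7, 3, 1])  # made-up indices with repeats

# Fancy-indexed increment: builds a temporary, and repeated
# indices contribute only once.
hist = np.zeros(nbins, dtype=int)
hist[idx] += 1

# np.bincount counts every occurrence of each index.
hist2 = np.bincount(idx, minlength=nbins)
```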
On Feb 13, 2012, at 3:55 PM, Fernando Perez wrote:
...
- Extra operators/PEP 225. Here's a summary from the last time we
went over this, years ago at Scipy 2008:
http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html,
and the current status of the document we wrote about
On Mon, Feb 13, 2012 at 3:46 PM, Travis Vaught tra...@vaught.net wrote:
- Extra operators/PEP 225. Here's a summary from the last time we
went over this, years ago at Scipy 2008:
http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html,
and the current status of the document
On Mon, Feb 13, 2012 at 6:23 PM, Marcel Oliver
m.oli...@jacobs-university.de wrote:
Hi,
I have a short piece of code where the use of an index array feels
right, but incurs a severe performance penalty: It's about an order
of magnitude slower than all other operations with arrays of that
Hi,
I recently noticed a change in the upcasting rules in numpy 1.6.0 /
1.6.1 and I just wanted to check it was intentional.
For all versions of numpy I've tested, we have:
import numpy as np
Adata = np.array([127], dtype=np.int8)
Bdata = np.int16(127)
(Adata + Bdata).dtype
dtype('int8')
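The report can be reproduced directly. The resulting dtype depends on the NumPy version (the 1.6-era value-based rules give int8, while the newer NEP 50 rules promote to int16), so this sketch inspects the outcome rather than assuming it:

```python
import numpy as np

Adata = np.array([127], dtype=np.int8)
Bdata = np.int16(127)

result_dtype = (Adata + Bdata).dtype
print(result_dtype, np.result_type(Adata, Bdata))
```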
How would you fix it? I shouldn't speculate without profiling, but I'll be
naughty. Presumably the problem is that python turns that into something
like
hist[i,j] = hist[i,j] + 1
which means there's no way for numpy to avoid creating a temporary array.
So maybe this could be fixed by adding a
On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith n...@pobox.com wrote:
How would you fix it? I shouldn't speculate without profiling, but I'll be
naughty. Presumably the problem is that python turns that into something
like
hist[i,j] = hist[i,j] + 1
which means there's no way for numpy to
On Mon, Feb 13, 2012 at 7:46 PM, Wes McKinney wesmck...@gmail.com wrote:
On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith n...@pobox.com wrote:
How would you fix it? I shouldn't speculate without profiling, but I'll be
naughty. Presumably the problem is that python turns that into something
On Mon, Feb 13, 2012 at 7:48 PM, Wes McKinney wesmck...@gmail.com wrote:
On Mon, Feb 13, 2012 at 7:46 PM, Wes McKinney wesmck...@gmail.com wrote:
On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith n...@pobox.com wrote:
How would you fix it? I shouldn't speculate without profiling, but I'll be
I'd like the ability to make in (i.e., __contains__) return
something other than a bool.
Also, the ability to make the x < y < z syntax overridable would be useful. It's
been suggested that the ability to override the boolean operators
(and, or, not) would be the way to do this (pep 335), though I'm not
100%
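The __contains__ limitation is easy to demonstrate: whatever the method returns, Python's in operator coerces the result to bool (a minimal sketch with a made-up container class):

```python
class Box:
    def __init__(self, items):
        self.items = items

    def __contains__(self, x):
        # Try to return a list of matches rather than a bool...
        return [i for i in self.items if i == x]

box = Box([1, 2, 2, 3])
# ...but the `in` operator coerces the result to a plain bool regardless.
r_hit = 2 in box
r_miss = 9 in box
```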
Hmmm. This seems like a regression. The scalar casting API was fairly
intentional.
What is the reason for the change?
--
Travis Oliphant
(on a mobile)
512-826-7480
On Feb 13, 2012, at 6:25 PM, Matthew Brett matthew.br...@gmail.com wrote:
Hi,
I recently noticed a change in the
Hi,
I've also just noticed this oddity:
In [17]: np.can_cast('c', 'u1')
Out[17]: False
OK so far, but...
In [18]: np.can_cast('c', [('f1', 'u1')])
Out[18]: True
In [19]: np.can_cast('c', [('f1', 'u1')], 'safe')
Out[19]: True
In [20]: np.can_cast(np.ones(10, dtype='c'), [('f1', 'u1')])
On Mon, Feb 13, 2012 at 5:00 PM, Travis Oliphant tra...@continuum.io wrote:
Hmmm. This seems like a regression. The scalar casting API was fairly
intentional.
What is the reason for the change?
In order to make 1.6 ABI-compatible with 1.5, I basically had to rewrite
this subsystem. There
I'm wondering about using one of these commercial issue tracking plans for
NumPy and would like thoughts and comments. Both of these plans allow Open
Source projects to have unlimited plans for free.
JIRA:
http://www.atlassian.com/software/jira/overview/tour/code-integration
At work
It might be nice to turn the matrix class into a short class hierarchy,
something like this:
class MatrixBase
class DenseMatrix(MatrixBase)
class TriangularMatrix(MatrixBase) # Maybe a few variations of upper/lower
triangular and whether the diagonal is stored
class SymmetricMatrix(MatrixBase)
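A minimal Python sketch of such a hierarchy; all class names and the symmetrization step are illustrative only, not an actual or proposed NumPy API:

```python
import numpy as np

class MatrixBase:
    """Hypothetical common base: wraps a 2-D float array."""
    def __init__(self, data):
        self.data = np.atleast_2d(np.asarray(data, dtype=float))

class DenseMatrix(MatrixBase):
    pass

class TriangularMatrix(MatrixBase):
    def __init__(self, data, lower=True):
        data = np.asarray(data, dtype=float)
        # Store only the relevant triangle.
        super().__init__(np.tril(data) if lower else np.triu(data))

class SymmetricMatrix(MatrixBase):
    def __init__(self, data):
        d = np.asarray(data, dtype=float)
        super().__init__((d + d.T) / 2.0)  # keep the symmetric part only

m = SymmetricMatrix([[1.0, 2.0], [4.0, 3.0]])
```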
I took a look into the code to see what is causing this, and the reason is
that nothing has ever been implemented to deal with the fields. This means
it falls back to treating all struct dtypes as if they were a plain void
dtype, which allows anything to be cast to it.
While I was redoing the
The problem is that these sorts of things take a while to emerge. The original
system was more consistent than I think you give it credit for. What you are
seeing is that most people get NumPy from distributions and are relying on us
to keep things consistent.
The scalar coercion rules were
I believe the main lessons to draw from this are just how incredibly
important a complete test suite and staying on top of code reviews are. I'm
of the opinion that any explicit design choice of this nature should be
reflected in the test suite, so that if someone changes it years later,
they get
On 02/13/2012 06:19 PM, Mark Wiebe wrote:
It might be nice to turn the matrix class into a short class hierarchy,
something like this:
class MatrixBase
class DenseMatrix(MatrixBase)
class TriangularMatrix(MatrixBase) # Maybe a few variations of
upper/lower triangular and whether the
On Monday, February 13, 2012, Aaron Meurer asmeu...@gmail.com wrote:
I'd like the ability to make in (i.e., __contains__) return
something other than a bool.
Also, the ability to make the x < y < z syntax overridable would be useful. It's
been suggested that the ability to override the boolean operators
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant tra...@continuum.io wrote:
I disagree with your assessment of the subscript operator, but I'm sure we
will have plenty of time to discuss that. I don't think it's correct to
compare the corner cases of the fancy indexing and regular indexing to
It hasn't changed: since float is of a fundamentally different kind of
data, it's expected to upcast the result.
However, if I may add a personal comment on numpy's casting rules: until
now, I've found them confusing and somewhat inconsistent. Some of the
inconsistencies I've found were bugs,
On Monday, February 13, 2012, Charles R Harris charlesr.har...@gmail.com
wrote:
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant tra...@continuum.io
wrote:
I disagree with your assessment of the subscript operator, but I'm sure
we will have plenty of time to discuss that. I don't think it's
On Mon, Feb 13, 2012 at 8:04 PM, Travis Oliphant tra...@continuum.io wrote:
I disagree with your assessment of the subscript operator, but I'm sure we
will have plenty of time to discuss that. I don't think it's correct to
compare the corner cases of the fancy indexing and regular indexing to
These are great suggestions. I am happy to start digging into the code. I'm
also happy to re-visit any and all design decisions for NumPy 2.0 (with a
strong eye towards helping people migrate and documenting the results). Mark,
I think you have done an excellent job of working with a
On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant tra...@continuum.io wrote:
I disagree with your assessment of the subscript operator, but I'm sure we
will have plenty of time to discuss that. I don't think it's correct to
compare
On Monday, February 13, 2012, Travis Oliphant tra...@continuum.io wrote:
On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant tra...@continuum.io
wrote:
I disagree with your assessment of the subscript operator, but I'm sure
we will have
No argument on any of this. It's just that this needs to happen at NumPy
2.0, not in the NumPy 1.X series. I think requiring a re-compile is
far-less onerous than changing the type-coercion subtly in a 1.5 to 1.6
release. That's my major point, and I'm surprised others are
On Mon, Feb 13, 2012 at 11:00 PM, Travis Oliphant tra...@continuum.io wrote:
No argument on any of this. It's just that this needs to happen at
NumPy 2.0, not in the NumPy 1.X series. I think requiring a re-compile is
far-less onerous than changing the type-coercion subtly in a 1.5 to
On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant tra...@continuum.io wrote:
No argument on any of this. It's just that this needs to happen at
NumPy 2.0, not in the NumPy 1.X series. I think requiring a re-compile is
far-less onerous than changing the type-coercion subtly in a 1.5 to
On Mon, Feb 13, 2012 at 10:38 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant tra...@continuum.io wrote:
No argument on any of this. It's just that this needs to happen at
NumPy 2.0, not in the NumPy 1.X series. I think
I think the problem is quite easy to solve, without changing the
documentation behaviour.
The doc says:
Help on built-in function arange in module numpy.core.multiarray:
arange(...)
arange([start,] stop[, step,], dtype=None)
Return evenly spaced values within a given interval.
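For context, the usual caveat with arange and float steps, and the linspace alternative (a general illustration, not the specific fix proposed here):

```python
import numpy as np

# With a float step, the number of elements (and whether the stop
# value sneaks in) depends on floating-point rounding:
a = np.arange(0.0, 1.0, 0.1)

# linspace states the sample count explicitly, and the endpoints
# are set exactly by construction:
b = np.linspace(0.0, 0.9, 10)
```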
On Mon, Feb 13, 2012 at 10:48 PM, Mark Wiebe mwwi...@gmail.com wrote:
On Mon, Feb 13, 2012 at 10:38 PM, Charles R Harris
charlesr.har...@gmail.com wrote:
On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant tra...@continuum.io wrote:
No argument on any of this. It's just that this
You might be right, Chuck. I would like to investigate more, however.
What I fear is that there are *a lot* of users still on NumPy 1.3 and NumPy
1.5. The fact that we haven't heard any complaints, yet, does not mean to
me that we aren't creating headache for people later who have
2012/2/13 Andrea Gavana andrea.gav...@gmail.com
-- Forwarded message --
From: Andrea Gavana andrea.gav...@gmail.com
Date: Feb 13, 2012 11:31 PM
Subject: Re: [Numpy-discussion] Creating parallel curves
To: Jonathan Hilmer jkhil...@gmail.com
Thank you Jonathan for this, it's
The lack of commutativity wasn't in precision, it was in the typecodes, and
was there from the beginning. That caused confusion. A current cause of
confusion is the many to one relation of, say, int32 and long, longlong which
varies platform to platform. I think that confusion is a more