I'm pleased to announce SciPy 0.7.0. SciPy is a package of tools for
science and engineering for Python. It includes modules for
statistics, optimization, integration, linear algebra, Fourier
transforms, signal and image processing, ODE solvers,
and more.
This release comes sixteen months
Stéfan van der Walt ste...@sun.ac.za writes:
2009/2/10 Stéfan van der Walt ste...@sun.ac.za:
x = np.arange(dim)
y = np.arange(dim)[:, None]
z = np.arange(dim)[:, None, None]
Do not operate heavy machinery or attempt broadcasting while tired or
under the influence. That order was
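The three `arange` lines above set up broadcasting: each `[:, None]` appends a length-1 axis, so the arrays align along different dimensions when combined. A minimal sketch of what that buys you (the `grid` sum is my illustration, not from the post):

```python
import numpy as np

dim = 3
x = np.arange(dim)                 # shape (3,)
y = np.arange(dim)[:, None]        # shape (3, 1)
z = np.arange(dim)[:, None, None]  # shape (3, 1, 1)

# Broadcasting aligns trailing axes, so the sum expands to shape (3, 3, 3):
# z varies along axis 0, y along axis 1, x along axis 2.
grid = x + y + z
print(grid.shape)  # (3, 3, 3)
```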
Hello list,
I am not sure if I understood everything of the discussion on the
named-axis idea for numpy arrays, since I am only a *user* of numpy. I
have never subclassed the numpy array class ;-)

However, I have the need to store meta-information for my arrays. I do
this with a stand-alone class
Hi,
I started to set up a PPA for scipy on Launchpad, which makes it
possible to build Ubuntu packages for various distributions/architectures.
The link is here:
https://edge.launchpad.net/~scipy/+archive/ppa
So you just need to add one line to your /etc/apt/sources.list, and
you will get up-to-date numpy
On Wed, Feb 11, 2009 at 9:46 PM, David Cournapeau courn...@gmail.com wrote:
Hi,
I started to set up a PPA for scipy on Launchpad, which makes it
possible to build Ubuntu packages for various distributions/architectures.
The link is here:
https://edge.launchpad.net/~scipy/+archive/ppa
So you just
Announcing Numexpr 1.2
Numexpr is a fast numerical expression evaluator for NumPy. With it,
expressions that operate on arrays (like 3*a+4*b) are accelerated
and use less memory than doing the same calculation in Python.
The main feature added
Hello,
I am writing some Cython code and have noted that the buffer interface
offers very little speedup for PyObject arrays. In trying to rewrite the
same code using the C API in Cython, I find I can't get PyArray_SETITEM to
work, in a call like:
PyArray_SETITEM(result, void *
Wes McKinney wrote:
I am writing some Cython code and have noted that the buffer interface
offers very little speedup for PyObject arrays. In trying to rewrite the
same code using the C API in Cython, I find I can't get PyArray_SETITEM to
work, in a call like:
PyArray_SETITEM(result, void *
I actually got it to work-- the function prototype in the pxi file was
wrong, needed to be:
int PyArray_SETITEM(object obj, void* itemptr, object item)
This still doesn't explain why the buffer interface was slow.
The general problem here is an indexed array (by dates or strings, for
example),
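For readers following along in pure Python: object-dtype arrays store references to boxed Python objects, and assigning into one goes through the same per-item machinery that `PyArray_SETITEM` exposes at the C level, which is why the buffer interface buys little for this dtype. A minimal illustration (the sample values are mine):

```python
import numpy as np

# An object array holds PyObject pointers rather than raw C values, so
# every element access dispatches through the full Python object protocol.
result = np.empty(4, dtype=object)
for i, item in enumerate(["a", 1, 3.5, None]):
    result[i] = item  # the Python-level counterpart of PyArray_SETITEM
print(result.dtype)  # object
```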
Wed, 11 Feb 2009 22:21:30 +0000, Andrew Jaffe wrote:
[clip]
Maybe I misunderstand the proposal but, actually, I think this is
completely the wrong semantics for axis= anyway. axis= in numpy
refers to what is also called a dimension, not a column.
I think the proposal was to add the ability to
Pauli Virtanen wrote:
Wed, 11 Feb 2009 22:21:30 +0000, Andrew Jaffe wrote:
[clip]
Maybe I misunderstand the proposal, but, actually, I think this is
completely the wrong semantics for axis= anyway. axis= in numpy
refers to what is also a dimension, not a column.
I think the proposal was
On Wed, Feb 11, 2009 at 4:46 AM, David Cournapeau courn...@gmail.com wrote:
Hi,
I started to set up a PPA for scipy on Launchpad, which makes it
possible to build Ubuntu packages for various distributions/architectures.
The link is here:
https://edge.launchpad.net/~scipy/+archive/ppa
Cool, thanks.
On Thu, Feb 12, 2009 at 8:11 AM, Fernando Perez fperez@gmail.com wrote:
On Wed, Feb 11, 2009 at 4:46 AM, David Cournapeau courn...@gmail.com wrote:
Hi,
I started to set up a PPA for scipy on Launchpad, which makes it
possible to build Ubuntu packages for various distributions/architectures. The
On Tue, Feb 10, 2009 at 9:52 PM, Brent Pedersen bpede...@gmail.com wrote:
On Tue, Feb 10, 2009 at 9:40 PM, A B python6...@gmail.com wrote:
Hi,
How do I write a loadtxt command to read in the following file and
store each data point as the appropriate data type:
12|h|34.5|44.5
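`loadtxt` needs an explicit structured dtype for mixed columns; `genfromtxt` with `dtype=None` (the option discussed later in this thread) can guess one per column. A hedged sketch of reading the sample row above:

```python
import numpy as np
from io import StringIO

data = StringIO("12|h|34.5|44.5")

# dtype=None asks genfromtxt to infer a type per column; the result is a
# structured array (fields f0..f3), not a plain 2-D float array.
arr = np.genfromtxt(data, delimiter="|", dtype=None, encoding=None)
print(arr.dtype.names)  # ('f0', 'f1', 'f2', 'f3')
```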
Pierre,
I noticed that, when using dtype=None with a heterogeneous set of data,
trying to use unpack=True to get the columns into separate arrays (instead
of a structured array) doesn't work. I've attached a patch that, in the
case of dtype=None, unpacks the fields in the final array into a list of
On Feb 11, 2009, at 11:38 PM, Ryan May wrote:
Pierre,
I noticed that using dtype=None with a heterogeneous set of data,
trying to use unpack=True to get the columns into separate arrays
(instead of a structured array) doesn't work. I've attached a patch
that, in the case of
Hi all,
As of r6358, I checked in the functionality to allow selection by
multiple fields along with a couple of tests.
ary['field1', 'field3'] raises an error
ary[['field1', 'field3']] is the correct spelling and returns a copy of
the data in those fields in a new array.
-Travis
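Travis's spelling rule can be sketched with a small structured array (the example array and field values are mine, not from r6358):

```python
import numpy as np

ary = np.array([(1, 2.0, "x"), (3, 4.0, "y")],
               dtype=[("field1", "i4"), ("field2", "f8"), ("field3", "U1")])

# A *list* of field names selects multiple fields; note the double brackets.
# ary["field1", "field3"] (a tuple, single brackets) raises an error instead.
sub = ary[["field1", "field3"]]
print(sub.dtype.names)  # ('field1', 'field3')
```

At the time of this thread the result was a copy; in modern NumPy (1.16 and later) multi-field selection returns a view instead.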
On Wed, Feb 11, 2009 at 6:27 PM, A B python6...@gmail.com wrote:
On Tue, Feb 10, 2009 at 9:52 PM, Brent Pedersen bpede...@gmail.com wrote:
On Tue, Feb 10, 2009 at 9:40 PM, A B python6...@gmail.com wrote:
Hi,
How do I write a loadtxt command to read in the following file and
store each data
Hi Travis
2009/2/12 Travis E. Oliphant oliph...@enthought.com:
ary['field1', 'field3'] raises an error
ary[['field1', 'field3']] is the correct spelling and returns a copy of
the data in those fields in a new array.
Is there absolutely no way of returning the result as a view?
Regards
Hi,
I have the following data structure:
col1     | col2 | col3
20080101 | key1 | 4
20080201 | key1 | 6
20080301 | key1 | 5
20080301 | key2 | 3.4
20080601 | key2 | 5.6
For each key in the second column, I would like to create an array
where for all unique values in the first column, there will be either
a value or
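One hedged reading of the request (the message is cut off above): for each key, align its values against the unique sorted dates, filling missing dates with NaN. A sketch, with the helper names and the NaN fill being my assumptions:

```python
import numpy as np

dates = np.array([20080101, 20080201, 20080301, 20080301, 20080601])
keys  = np.array(["key1", "key1", "key1", "key2", "key2"])
vals  = np.array([4.0, 6.0, 5.0, 3.4, 5.6])

# One row per key, one slot per unique date; slots with no observation
# for that key stay NaN.
udates = np.unique(dates)
result = {}
for k in np.unique(keys):
    row = np.full(len(udates), np.nan)
    mask = keys == k
    row[np.searchsorted(udates, dates[mask])] = vals[mask]
    result[k] = row
```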
Stéfan van der Walt wrote:
Hi Travis
2009/2/12 Travis E. Oliphant oliph...@enthought.com:
ary['field1', 'field3'] raises an error
ary[['field1', 'field3']] is the correct spelling and returns a copy of
the data in those fields in a new array.
Is there absolutely no way of
On Wed, Feb 11, 2009 at 23:24, A B python6...@gmail.com wrote:
Hi,
I have the following data structure:
col1 | col2 | col3
20080101|key1|4
20080201|key1|6
20080301|key1|5
20080301|key2|3.4
20080601|key2|5.6
For each key in the second column, I would like to create an array
where for
2009/2/6 Brian Granger ellisonbg@gmail.com:
Great, what is the best way of rolling this into numpy?
I've committed your patch.
Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
Hi,
This is relevant for anyone who would like to speed up array based
codes using threads.
I have a simple loop that I have implemented using Cython:
def backstep(np.ndarray opti, np.ndarray optf,
             int istart, int iend, double p, double q):
    cdef int j
    cdef double *pi
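The Cython source is cut off above. As a rough pure-Python illustration of the chunk-per-thread idea being discussed (the names `parallel_apply` and `nthreads` are mine, not from the post): split the array into chunks and let each thread process its own slice. Note that plain NumPy calls release the GIL only for some operations, so speedups vary; the original post releases it explicitly in Cython.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_apply(func, arr, nthreads=2):
    # Divide the work into contiguous chunks, one per thread, then
    # stitch the partial results back together in order.
    chunks = np.array_split(arr, nthreads)
    with ThreadPoolExecutor(max_workers=nthreads) as ex:
        results = list(ex.map(func, chunks))
    return np.concatenate(results)

out = parallel_apply(np.sqrt, np.arange(16.0), nthreads=4)
```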
Thanks much!
Brian
On Wed, Feb 11, 2009 at 9:44 PM, Stéfan van der Walt ste...@sun.ac.za wrote:
2009/2/6 Brian Granger ellisonbg@gmail.com:
Great, what is the best way of rolling this into numpy?
I've committed your patch.
Cheers
Stéfan
On Wed, Feb 11, 2009 at 23:46, Brian Granger ellisonbg@gmail.com wrote:
Hi,
This is relevant for anyone who would like to speed up array based
codes using threads.
I have a simple loop that I have implemented using Cython:
def backstep(np.ndarray opti, np.ndarray optf,
int
2009/2/12 A B python6...@gmail.com:
Actually, I was using two different machines and it appears that the
version of numpy available on Ubuntu is seriously out of date (1.0.4).
Wonder why ...
See the recent post here
Eric Jones tried to do this with pthreads in C some time ago. His work is
here:
http://svn.scipy.org/svn/numpy/branches/multicore/
The lock overhead makes it usually not worthwhile.
I was under the impression that Eric's implementation didn't use a
thread pool. Thus I thought the
On Wed, Feb 11, 2009 at 6:17 PM, David Cournapeau courn...@gmail.com wrote:
Unfortunately, it does require some work, because hardy uses g77
instead of gfortran, so the source package has to be different (once
hardy is done, all the ones below would be easy, though). I am not sure
how to do
On Thu, Feb 12, 2009 at 00:03, Brian Granger ellisonbg@gmail.com wrote:
Eric Jones tried to do this with pthreads in C some time ago. His work is
here:
http://svn.scipy.org/svn/numpy/branches/multicore/
The lock overhead makes it usually not worthwhile.
I was under the impression
On Wed, Feb 11, 2009 at 11:52:40PM -0600, Robert Kern wrote:
These seem like pretty heavy solutions, though.
From a programmer's perspective, it seems to me like OpenMP is a much
lighter-weight solution than pthreads.
From a programmer's perspective, because, IMHO, openmp is implemented
using
Robert Kern wrote:
Eric Jones tried to do this with pthreads in C some time ago. His work is
here:
http://svn.scipy.org/svn/numpy/branches/multicore/
The lock overhead makes it usually not worthwhile.
I am curious: would you know what would be different in numpy's case
compared to
Gael Varoquaux wrote:
From a programmer's perspective, because, IMHO, openmp is implemented
using pthreads.
Since openmp also exists on windows, I doubt that it is required that
openmp uses pthread :)
On linux, with gcc, using -fopenmp implies -pthread, so I guess it uses
pthread (can you be
I am curious: would you know what would be different in numpy's case
compared to matlab's array model concerning locks? Matlab, until
recently, only spread BLAS/LAPACK across multiple cores, but since matlab 7.3
(or 7.4), it also uses multiple cores for mathematical functions (cos,
etc...). So at least
Good point. Is it possible to tell what array size it switches over
to using multiple threads?
Yes.
http://svn.scipy.org/svn/numpy/branches/multicore/numpy/core/threadapi.py
Sorry, I was curious about what Matlab does in this respect. But,
this is very useful and I will look at it.
Brian Granger wrote:
I am curious: would you know what would be different in numpy's case
compared to matlab array model concerning locks ? Matlab, up to
recently, only spreads BLAS/LAPACK on multi-cores, but since matlab 7.3
(or 7.4), it also uses multicore for mathematical functions (cos,