Dear Bartocz,
thank you very much for proposing a tutorial on advanced NumPy for
Euroscipy 2016! I think it's an awesome idea! Before the call for
proposals, I did a survey about the subjects that people were interested
in for the advanced tutorials, and advanced NumPy scored very high (see
the
Hello,
early-bird registration for Euroscipy 2012 is coming to an end, with
the deadline on July 22nd. Don't forget to register soon! Reduced fees
are available for academics, students and speakers. Registration takes
place online at http://www.euroscipy.org/conference/euroscipy2012.
The committee of the Euroscipy 2012 conference has extended the deadline
for abstract submission to **Monday May 7th, midnight** (Brussels time).
Up to then, new abstracts may be submitted on
http://www.euroscipy.org/conference/euroscipy2012, and already-submitted
abstracts can be modified.
We
Hello,
this is a reminder of the approaching deadline for abstract submission at
the Euroscipy 2012 conference: the deadline is April 30, in one week.
Euroscipy 2012 will be held in **Brussels**, **August 23-27**, at the
Université Libre de Bruxelles (ULB, Solbosch Campus).
The EuroSciPy
- Emmanuelle Gouillart
- Kael Hanson
- Konrad Hinsen
- Hans Petter Langtangen
Dear all,
After some delay due to technical problems, registration for Euroscipy
2011 is now open! Please go to
http://www.euroscipy.org/conference/euroscipy2011, login to your account
if you have one, or create a new account (right side of the upper banner
of the Euroscipy webpage), then
Bottani (MSC, Paris Diderot)
* Gianfranco Durin (ISI, Turin)
* Emmanuelle Gouillart (Saint-Gobain Research)
* Konrad Hinsen (CBM, Université d'Orléans)
* Vivien Lecomte (LPMA Paris Diderot)
* Chris Myers (Department of Physics, Cornell University)
* Michael Schindler (LPCT, ESPCI, Paris)
* Georg
Hello,
a, b, c = np.array([10]), np.array([2]), np.array([7])
min_val = np.minimum(a, b, c)
min_val
array([2])
max_val = np.maximum(a, b, c)
max_val
array([10])
min_val
array([10])
(I'm using numpy 1.4, and I observed the same behavior with numpy
2.0.0.dev8600 on another machine).
Hi Zach and Derek,
thank you very much for your quick and clear answers. Of course the third
parameter is the out array; I was just being very stupid! (I had read the
documentation, but somehow it didn't make it to my brain :-) Sorry...
Read the documentation for numpy.minimum and
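To spell out the fix: the third positional argument of a binary ufunc such as np.minimum is the out array, not another input, so a three-way minimum has to chain two calls (or use the ufunc's reduce method). A minimal sketch with the same toy arrays as above:

```python
import numpy as np

a, b, c = np.array([10]), np.array([2]), np.array([7])

# np.minimum is a binary ufunc: passing c as a third positional argument
# makes it the output buffer.  Chain two calls instead, or reduce over
# the three arrays at once.
min_val = np.minimum(np.minimum(a, b), c)
max_val = np.maximum.reduce([a, b, c])
```

With this form, min_val stays array([2]) and max_val stays array([10]) no matter in which order the two lines run, since no input is silently overwritten.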
Hi Thomas,
broadcasting rules only apply to ufuncs (and, by extension, to some numpy
functions built on ufuncs). Indexing obeys different rules and always
starts with the first dimension.
However, you don't need broadcasting for such indexing operations:
a[:, c] = 0
zeroes the columns indexed by c.
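A small self-contained sketch of that column-zeroing idiom (the array shape and indices here are made up for illustration):

```python
import numpy as np

a = np.ones((3, 4), dtype=int)
c = np.array([1, 3])          # column indices to zero out

# No broadcasting needed: a full slice along the first axis combined
# with integer (fancy) indexing along the second selects whole columns.
a[:, c] = 0
```

After the assignment, columns 1 and 3 of a are entirely zero while the other columns are untouched.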
Hi Manuel,
you may save your odf file as a csv (comma separated value) file with
OpenOffice, then use np.loadtxt, specifying the 'delimiter' keyword:
myarray = np.loadtxt('myfile.csv', delimiter=',')
Cheers,
Emmanuelle
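For completeness, a runnable sketch of the same np.loadtxt call; the small CSV written to a temporary file here is a stand-in for the OpenOffice export:

```python
import os
import tempfile
import numpy as np

# Hypothetical stand-in for the CSV exported from OpenOffice.
fname = os.path.join(tempfile.mkdtemp(), 'myfile.csv')
with open(fname, 'w') as f:
    f.write('1.0,2.0,3.0\n4.0,5.0,6.0\n')

# The 'delimiter' keyword tells loadtxt the fields are comma-separated
# (the default is whitespace).
myarray = np.loadtxt(fname, delimiter=',')
```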
On Tue, Jan 05, 2010 at 10:14:54PM +0100, Manuel Wittchen wrote:
Hi,
is
Hello Manuel,
the discrete difference of a numpy array can be written in a very
natural way, without loops. Below are two possible ways to do it:
a = np.arange(10)**2
a
array([ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81])
a[1:] - a[:-1]
array([ 1, 3, 5, 7, 9, 11, 13, 15, 17])
np.diff(a) #
Hello Damien,
broadcasting can solve your problem (see
http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html):
(A[np.newaxis,:]**B[:,np.newaxis]).sum(axis=0)
gives the result you want.
(assuming import numpy as np, which is considered better practice
than from numpy import *)
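A self-contained version of that broadcasting trick, with made-up values for A and B:

```python
import numpy as np

# Hypothetical data standing in for the original A and B.
A = np.array([1.0, 2.0, 3.0])
B = np.array([1, 2])

# A[np.newaxis, :] has shape (1, 3) and B[:, np.newaxis] has shape
# (2, 1); broadcasting expands both to (2, 3) before the power is
# taken, so no explicit loop over B is needed.
result = (A[np.newaxis, :] ** B[:, np.newaxis]).sum(axis=0)
# result[i] is the sum over j of A[i] ** B[j]
```

Here result is [1+1, 2+4, 3+9] = [2., 6., 12.].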
Hi,
ind = np.searchsorted(A, b)
values = A[:ind]
Cheers,
Emmanuelle
On Fri, Aug 14, 2009 at 02:27:23PM +0200, Mark Bakker wrote:
Hello List,
I am trying to find a quick way to do the following:
I have a *sorted* array of real numbers, say array A, sorted in ascending
order
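A runnable sketch of the searchsorted approach, with a made-up sorted array A and threshold b: because the array is sorted, the binary search finds the cut point in O(log n), and a slice then extracts every value below b without scanning the whole array.

```python
import numpy as np

# Hypothetical sorted data standing in for the real array.
A = np.array([0.1, 0.5, 1.2, 3.4, 7.8])
b = 2.0

# searchsorted returns the index where b would be inserted to keep A
# sorted, i.e. the number of entries strictly smaller than b here.
ind = np.searchsorted(A, b)
values = A[:ind]
```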
On Fri, Aug 07, 2009 at 11:55:45PM -0400, josef.p...@gmail.com wrote:
On Fri, Aug 7, 2009 at 10:17 PM, a...@ajackson.org wrote:
Thanks! That helps a lot.
Thanks for improving the docs.
Many thanks for taking the time to find out what this distribution
really is, and improving the docs. I
Hello,
I am making a lot of use of atleast_1d and atleast_2d in my routines.
Does anybody know whether this will slow down my code significantly?
if there is no need to make copies (i.e. if you take arrays as
parameters (?)), calls to atleast_1d and atleast_2d should be
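One rough way to check that overhead (a sketch, not a rigorous benchmark): time atleast_1d on an input that is already an array, where no copy is made and only a cheap dimension check runs, against a scalar input, where a new 1-d array has to be built.

```python
import timeit
import numpy as np

a = np.arange(10)

# When the input already has ndim >= 1, atleast_1d returns the very
# same array object; only scalars and 0-d arrays trigger a reshape.
t_array = timeit.timeit(lambda: np.atleast_1d(a), number=100_000)
t_scalar = timeit.timeit(lambda: np.atleast_1d(3.0), number=100_000)

print('array input: %.3fs, scalar input: %.3fs' % (t_array, t_scalar))
```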
Hi Andrew,
%timeit is an IPython magic command that uses the timeit module; see
http://ipython.scipy.org/doc/stable/html/interactive/reference.html?highlight=timeit
for more information about how to use it. So you were right to suppose
that it is not a normal Python function.
import numpy as np
a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
b = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float64)
%timeit -n 10 np.sin(a)
10 loops, best of 3: 8.67 ms per loop
%timeit -n 10 np.sin(b)
10 loops, best of 3: 9.29 ms per loop
OK, I'm
On Mon, Aug 03, 2009 at 08:17:21AM -0700, Keith Goodman wrote:
On Mon, Aug 3, 2009 at 7:21 AM, Emmanuelle
Gouillartemmanuelle.gouill...@normalesup.org wrote:
import numpy as np
a = np.arange(0.0, 1000, (2 * 3.14159) / 1000, dtype=np.float32)
b = np.arange(0.0, 1000, (2 * 3.14159) /
Dear users of Numpy and Scipy,
here is an informal report on the last event of the Python African Tour,
which took place in Dakar (Senegal) on July 6-10th. It might interest
only a fraction of the lists, so I apologize for the spamming.
What is the Python African Tour?
Hello,
I'm using numpy.memmap to open big 3-D arrays of Xray tomography
data. After I have created a new array using memmap, I modify the
contrast of every Z-slice (along the first dimension) inside a for loop,
for a better visualization of the data. Although I call memmap.flush
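A minimal self-contained sketch of this kind of memmap workflow, with a tiny temporary file standing in for the real tomography volume (shapes and values are made up):

```python
import os
import tempfile
import numpy as np

# Small (z, y, x) stand-in for the X-ray tomography volume, backed by
# a temporary file on disk.
fname = os.path.join(tempfile.mkdtemp(), 'volume.dat')
data = np.memmap(fname, dtype=np.float32, mode='w+', shape=(4, 8, 8))

# Modify every Z-slice (first dimension) in a loop, as when adjusting
# the contrast slice by slice, then flush so the changes are written
# back to the file.
for z in range(data.shape[0]):
    data[z] = (z + 1) * np.ones((8, 8), dtype=np.float32)
data.flush()

# Re-open read-only to check that the values really reached the file.
check = np.memmap(fname, dtype=np.float32, mode='r', shape=(4, 8, 8))
```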
Hi Pauli,
thank you for your answer! I was indeed measuring the memory used
with top, which is not the best tool for understanding what really
happens. I monitored free during the execution of my program and
indeed, the used numbers on the +/- buffers/cache line stay roughly
run also quite well on my computer. I'll let
you know!
Thanks again,
Emmanuelle
On Wed, Jul 01, 2009 at 03:04:08PM +0200, Francesc Alted wrote:
On Wednesday 01 July 2009 10:17:51, Emmanuelle Gouillart wrote:
Hello,
I'm using numpy.memmap to open big 3-D arrays
a = np.empty(3 * n.size, dtype=n.dtype)
a[::3] = n
a[1::3] = m
a[2::3] = o
or
np.array(list(zip(n, m, o))).ravel()
but the first solution is faster, even if you have to write more :D
Emmanuelle
On Sun, Jun 14, 2009 at 04:11:29PM +0200, Robert wrote:
what's the right way to efficiently weave arrays like this? :
Hi Fred,
here is another solution:
A = np.arange(99).reshape((33, 3))
mask = (A == np.array([0, 1, 2]))
np.nonzero(np.prod(mask, axis=1))[0]
array([0])
I found it less elegant than Josef's solution of changing the dtype of
the array, but it may be easier to understand if you're not very familiar
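The same row-matching test can also be written with np.all, which may read more naturally than taking a product of booleans along the rows:

```python
import numpy as np

A = np.arange(99).reshape((33, 3))

# A == [0, 1, 2] compares every row element-wise; np.all(..., axis=1)
# keeps only rows where all three elements match.
rows = np.nonzero(np.all(A == np.array([0, 1, 2]), axis=1))[0]
```

Here rows is array([0]), since only the first row of A equals [0, 1, 2].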
Hi Jorge,
roi = aimg[10:20,45:50,:]
are you working with 3-D images? I didn't know PIL was able to handle 3D
images.
I wasn't able to reproduce the behavior you observed with a simple
example:
In [20]: base = np.arange(25).reshape((5,5))
In [21]: base
Out[21]:
array([[ 0, 1, 2, 3, 4],
Hi Nacer,
I read your message on the african python tour mailing list. I am
looking for someone able to help me find the right scientific package
for parallel computing (data mining of frequent patterns). I am
involved in a DEA course and need such a package to illustrate the
topics I
Hello Timmie,
numpy.vectorize(myfunc) should do what you want.
Cheers,
Emmanuelle
Hello,
I am developing a module which bases its calculations
on another specialised module.
My module uses numpy arrays a lot.
The problem is that the other module I am building
upon does not work with
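To illustrate the numpy.vectorize suggestion, with a made-up scalar function standing in for the other module's routine (note that vectorize is a convenience wrapper around a Python-level loop, not a speed-up):

```python
import numpy as np

# Hypothetical scalar-only function, as might come from the module
# being built upon: it accepts a single number, not an array.
def myfunc(x):
    if x > 0:
        return x ** 2
    return 0.0

# np.vectorize wraps it so it can be applied element-wise to arrays.
vfunc = np.vectorize(myfunc)
result = vfunc(np.array([-1.0, 0.5, 2.0]))
```

Here result is array([0., 0.25, 4.]).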
Hello Paul,
although I'm not an expert either, it seems to me you could improve your
code a lot by using numpy.mgrid.
Below is a short example of what you could do:
coordinates = numpy.mgrid[0:R, 0:R, 0:R]
X, Y, Z = coordinates[0].ravel(), coordinates[1].ravel(), coordinates[2].ravel()
bits =
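A complete runnable sketch of the mgrid idea, with a made-up grid size R:

```python
import numpy as np

R = 4  # hypothetical grid size standing in for the original R

# np.mgrid builds the three coordinate arrays of an R x R x R grid in
# one call; each has shape (R, R, R), and ravel() flattens them into
# parallel lists of x, y and z coordinates for all R**3 grid points.
coordinates = np.mgrid[0:R, 0:R, 0:R]
X, Y, Z = (coordinates[0].ravel(),
           coordinates[1].ravel(),
           coordinates[2].ravel())
```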