Hello,
Let's say we have two arrays A and B of shapes (1, 2000) and (1,
4000).
If I do C=numpy.concatenate((A, B), axis=1), I get a new array of
dimension (1, 6000) with duplication of memory.
I am looking for a way to have a non contiguous array C in which the
left (1, 2000)
On Wednesday 02 September 2009 05:50:57, Robert Kern wrote:
On Tue, Sep 1, 2009 at 21:11, Jorge Scandaliaris <jorgesmbox...@yahoo.es>
wrote:
David Warde-Farley dwf at cs.toronto.edu writes:
If you actually want to save multiple arrays, you can use
savez('fname', *[a,b,c]) and they will be
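For reference, a minimal sketch of that pattern (the file name and arrays here are made up): arrays passed positionally to savez are stored under the automatic names arr_0, arr_1, ...

```python
import os
import tempfile

import numpy as np

a = np.arange(3)
b = np.ones((2, 2))
c = np.zeros(5)

# Positional arrays are saved under the names arr_0, arr_1, ...
path = os.path.join(tempfile.mkdtemp(), 'fname.npz')
np.savez(path, *[a, b, c])

with np.load(path) as data:
    names = sorted(data.files)     # ['arr_0', 'arr_1', 'arr_2']
    restored_b = data['arr_1']     # each array comes back by name
```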
On Wed, Sep 02, 2009 at 09:40:49AM +0200, V. Armando Solé wrote:
Let's say we have two arrays A and B of shapes (1, 2000) and (1,
4000).
If I do C=numpy.concatenate((A, B), axis=1), I get a new array of
dimension (1, 6000) with duplication of memory.
I am looking for a way to
Hello Sturla,
I had a quick look at your code.
Looks fine.
A few notes...
In select you should replace numpy with np.
In _median how can you, if n==2, use s[] if s is not defined?
What if n==1?
Also, I think when returning an empty array, it should be of
the same type you would get in the other
As Gaël pointed out you cannot create A, B and then C
as the concatenation of A and B without duplicating
the vectors.
I am looking for a way to have a non contiguous array C in which the
left (1, 2000) elements point to A and the right (1, 4000)
elements point to B.
But you can
Gael Varoquaux wrote:
You cannot in the numpy memory model. The numpy memory model defines an
array as something that has regular strides to jump from an element to
the next one.
I expected problems in the suggested case (concatenating columns) but I
did not expect the problem would be so
Thanks David, Robert and Francesc for comments and suggestions. It's nice having
options, but that also means one has to choose ;)
I will have a closer look at pytables. The thing that got me scared about it
was the word database. I have close to zero experience using or, even worse,
designing
Citi, Luca wrote:
As Gaël pointed out you cannot create A, B and then C
as the concatenation of A and B without duplicating
the vectors.
But you can still re-link A to the left elements
and B to the right ones afterwards by using views into C.
Thanks for the hint. In my case the A
On Wednesday 02 September 2009 11:20:55, Jorge Scandaliaris wrote:
Thanks David, Robert and Francesc for comments and suggestions. It's nice
having options, but that also means one has to choose ;)
I will have a closer look at pytables. The thing that got me scared about
it was the word
Hi,
depending on the needs you have you might be interested in my minimal
implementation of what I call a
mock-ndarray.
I needed something like this to analyze higher-dimensional stacks of 2d
images, and what I needed was mostly the indexing features of
nd-arrays.
A mockarray is initialized with a
Is there a way to constrain an old-style compilation just to make a code
work? I have similar problems with other old pieces of code.
Use -arch i686 in the CFLAGS and LDFLAGS. I think.
Unfortunately, it seems not to have any effect.
I'll try something else.
Thanks anyway.
Hi everyone,
In case anyone is interested, I just set up a google group to discuss
GPU-based simulation for our Python neural simulator Brian:
http://groups.google.fr/group/brian-on-gpu
Our simulator relies heavily on NumPy. I would be very happy if the GPU
experts here would like to share their
Sturla Molden wrote:
Dag Sverre Seljebotn wrote:
Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the
right type to use in this case?
By the way, here is a more polished version, does it look ok?
Hello,
I want to be able to parse a binary file which holds information regarding the
experiment configuration and, obviously, the data. Both configuration and data
sections are variable-length. A chunk of this data is shown below (after a
binary read operation)
Dag Sverre Seljebotn wrote:
a) Is the cast to numpy.npy_intp really needed? I'm pretty sure shape is
defined as numpy.npy_intp*.
I don't know Cython internals in detail but you do, so I take your word
for it. I thought shape was a tuple of Python ints.
b) If you want higher
Citi, Luca wrote:
Hello Sturla,
In _median how can you, if n==2, use s[] if s is not defined?
What if n==1?
That was a typo.
Also, I think when returning an empty array, it should be of
the same type you would get in the other cases.
Currently median returns numpy.nan for empty input
V. Armando Solé wrote:
I am looking for a way to have a non contiguous array C in which the
left (1, 2000) elements point to A and the right (1, 4000)
elements point to B.
Any hint will be appreciated.
If you know in advance that A and B are going to be duplicated, you can
use
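The message is truncated here; as a minimal sketch of the approach Luca and Sturla describe (names A, B, C as in the thread): allocate C once, then make A and B views into its left and right blocks, so no data is ever duplicated.

```python
import numpy as np

# Allocate the concatenated array first, then view into it.
C = np.empty((1, 6000))
A = C[:, :2000]    # view of the left (1, 2000) block; shares C's memory
B = C[:, 2000:]    # view of the right (1, 4000) block

A[:] = 1.0         # writing through the views updates C in place
B[:] = 2.0
```

Because A and B are basic slices of C, no copy happens at any point; the trade-off is that C must be created before A and B are filled in.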
On Wed, Sep 2, 2009 at 09:38, Gökhan Sever <gokhanse...@gmail.com> wrote:
Hello,
I want to be able to parse a binary file which holds information regarding the
experiment configuration and, obviously, the data. Both configuration and data
sections are variable-length. A chunk of this data is shown below
Sebastian Haase wrote:
A mockarray is initialized with a list of nd-arrays. The result is a
mock array having one additional dimension in front.
This is important, because often in the case of 'concatenation' a real
concatenation is not needed. But then there is a common tool called
Matlab,
Gökhan Sever wrote:
What would be wisest and fastest way to tackle this issue?
Get the format, read the binary data directly, skip the ascii/regex part.
I sometimes use recarrays with formatted binary data; just constructing
a dtype and use numpy.fromfile to read. That works when the binary
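A sketch of that recipe; the record layout below is invented for illustration, not the format from the thread. The idea is to describe one fixed-width record as a structured dtype and let numpy.fromfile do all the parsing:

```python
import os
import tempfile

import numpy as np

# Hypothetical fixed-width record: a 4-byte int id, then two float64 samples.
dt = np.dtype([('id', '<i4'), ('x', '<f8'), ('y', '<f8')])

records = np.zeros(3, dtype=dt)
records['id'] = [1, 2, 3]
records['x'] = [0.5, 1.5, 2.5]

path = os.path.join(tempfile.mkdtemp(), 'data.bin')
records.tofile(path)                  # write raw binary records

back = np.fromfile(path, dtype=dt)    # read them back: one call, no parsing code
```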
Hello,
I know I am splitting hairs, but should not
np.bitwise_and.identity be -1 instead of 1?
I mean, something with all the bits set?
I am checking whether all elements of a vector 'v'
have a certain bit 'b' set:
if np.bitwise_and.reduce(v) & (1 << b):
# do something
If v is empty, the
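The message is cut off here; as a sketch, the bit test with the empty case handled explicitly rather than through the reduce identity (the function name is mine, not from the thread):

```python
import numpy as np

def all_have_bit(v, b):
    """True if every element of v has bit b set; empty input handled explicitly."""
    if len(v) == 0:
        return True   # vacuously true; pick whatever convention you need
    return bool(np.bitwise_and.reduce(v) & (1 << b))

v = np.array([0b1010, 0b1110, 0b0011])
print(all_have_bit(v, 1))   # True: bit 1 is set in every element
print(all_have_bit(v, 2))   # False: 0b1010 and 0b0011 lack bit 2
```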
On Wed, Sep 2, 2009 at 11:11, Citi, Luca <lc...@essex.ac.uk> wrote:
Hello,
I know I am splitting hairs, but should not
np.bitwise_and.identity be -1 instead of 1?
I mean, something with all the bits set?
Probably. However, the .identity parts of ufuncs were designed mostly
to support multiply
Robert Kern robert.kern at gmail.com writes:
Looks good! Where can we get the code? Can this be specialized for 1D
functions?
Re code: sure, I'll be happy to post it if anyone points me to a real test
case or two, to help me understand the envelope -- 100^2 - 500^2 grid ?
(Splines on regular
On Wed, Sep 2, 2009 at 11:33, denis bzowy <denis-bz...@t-online.de> wrote:
Robert Kern robert.kern at gmail.com writes:
Looks good! Where can we get the code? Can this be specialized for 1D
functions?
Re code: sure, I'll be happy to post it if anyone points me to a real test
case or two, to
I forgot to mention I also support transpose.
-S.
On Wed, Sep 2, 2009 at 5:23 PM, Sturla Molden <stu...@molden.no> wrote:
Sebastian Haase wrote:
A mockarray is initialized with a list of nd-arrays. The result is a
mock array having one additional dimension in front.
This is important, because
On Wed, Sep 2, 2009 at 10:11 AM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 09:38, Gökhan Sever <gokhanse...@gmail.com> wrote:
Hello,
I want to be able to parse a binary file which holds information regarding the
experiment configuration and, obviously, the data. Both
On Wed, Sep 2, 2009 at 10:34 AM, Sturla Molden stu...@molden.no wrote:
Gökhan Sever wrote:
What would be wisest and fastest way to tackle this issue?
Get the format, read the binary data directly, skip the ascii/regex part.
I sometimes use recarrays with formatted binary data; just
On Wed, 2 Sep 2009, Dag Sverre Seljebotn wrote:
Sturla Molden wrote:
Dag Sverre Seljebotn wrote:
Nitpick: This will fail on large arrays. I guess numpy.npy_intp is the
right type to use in this case?
By the way, here is a more polished version, does it look ok?
On Wed, Sep 2, 2009 at 11:53, Gökhan Sever <gokhanse...@gmail.com> wrote:
How to use recarrays with variable-length data fields as well as metadata?
You don't.
--
Robert Kern
I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt
If I understand the problem...
if you are 100% sure that ', ' only occurs between fields
and never within, you can use the 'split' method of the string
which could be faster than regexp in this simple case.
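A sketch of that suggestion; the sample line below is invented, not taken from the actual file format:

```python
# If ', ' is guaranteed to appear only between fields and never inside one,
# str.split does the job with no regular expressions at all.
line = "ProbeType, CIP, Range, 25"
fields = line.split(', ')
print(fields)   # ['ProbeType', 'CIP', 'Range', '25']
```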
___
NumPy-Discussion mailing list
Thank you, Robert, for the quick reply.
I just saw the line
#define PyUFunc_None -1
in the ufuncobject.h file.
It is always the same, you choose a sentinel thinking
that it doesn't conflict with any possible value and
you later find there is one such case.
As said it is not a big deal.
I
On Wed, Sep 2, 2009 at 12:01 PM, Citi, Luca lc...@essex.ac.uk wrote:
If I understand the problem...
if you are 100% sure that ', ' only occurs between fields
and never within, you can use the 'split' method of the string
which could be faster than regexp in this simple case.
On Wed, Sep 2, 2009 at 12:04 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 11:53, Gökhan Sever <gokhanse...@gmail.com> wrote:
How to use recarrays with variable-length data fields as well as
metadata?
You don't.
--
Robert Kern
I have come to believe that the whole
On Wed, Sep 2, 2009 at 12:27, Gökhan Sever <gokhanse...@gmail.com> wrote:
On Wed, Sep 2, 2009 at 12:01 PM, Citi, Luca lc...@essex.ac.uk wrote:
If I understand the problem...
if you are 100% sure that ', ' only occurs between fields
and never within, you can use the 'split' method of the string
On Wed, Sep 2, 2009 at 12:33, Gökhan Sever <gokhanse...@gmail.com> wrote:
How does your find suggestion work? It just returns the location of the first
occurrence.
http://docs.python.org/library/stdtypes.html#str.find
str.find(sub[, start[, end]])
Return the lowest index in the string where
On Wed, Sep 2, 2009 at 12:29 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 12:27, Gökhan Sever <gokhanse...@gmail.com> wrote:
On Wed, Sep 2, 2009 at 12:01 PM, Citi, Luca lc...@essex.ac.uk wrote:
If I understand the problem...
if you are 100% sure that ', ' only
On Wed, Sep 2, 2009 at 12:29 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 12:27, Gökhan Sever <gokhanse...@gmail.com> wrote:
On Wed, Sep 2, 2009 at 12:01 PM, Citi, Luca lc...@essex.ac.uk wrote:
If I understand the problem...
if you are 100% sure that ', ' only
On Wed, Sep 2, 2009 at 12:46 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 12:33, Gökhan Sever <gokhanse...@gmail.com> wrote:
How does your find suggestion work? It just returns the location of the first
occurrence.
http://docs.python.org/library/stdtypes.html#str.find
On Wed, Sep 2, 2009 at 13:28, Gökhan Sever <gokhanse...@gmail.com> wrote:
Put the reference manual in:
http://drop.io/1plh5rt
First few pages describe the data format they use.
Ah. The fields are *not* delimited by a fixed value. Regexes are no
help to you for pulling out the information you
On Mon, Aug 31, 2009 at 9:06 PM, Sturla Molden <stu...@molden.no> wrote:
We recently had a discussion regarding an optimization of NumPy's median
to average O(n) complexity. After some searching, I found out there is a
selection algorithm competitive in speed with Hoare's quick select. It
has
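The message is truncated here; for context, a sketch of the kind of average O(n) selection being discussed, written as a plain iterative Hoare-style quickselect (my own illustration, not the specific algorithm Sturla found):

```python
import numpy as np

def select_nth(a, n):
    """Element that would sit at index n if a were sorted; average O(len(a))."""
    a = np.asarray(a).copy()          # work on a scratch copy
    lo, hi = 0, len(a) - 1
    while lo < hi:
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:                  # Hoare partition around the pivot
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        # descend only into the side that contains index n
        if n <= j:
            hi = j
        elif n >= i:
            lo = i
        else:
            break
    return a[n]

x = np.array([3, 1, 4, 1, 5, 9, 2, 6])
print(select_nth(x, 4))   # same as np.sort(x)[4]
```

Unlike a full quicksort, only the partition containing the target index is ever visited, which is what brings the average cost down to linear.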
This one line causes Python to core dump on Linux:
numpy.lexsort([
    numpy.array(['-','-','-','-','-','-','-','-','-','-','-','-','-'])[::-1],
    numpy.array([732685., 732685., 732685., 732685., 732685., 732685.,
                 732685., 732685., 732685., 732685., 732685., 732685.,
                 732679.])[::-1]])
Here's
Here's
I am unable to build numpy on Snow Leopard. The error that I am getting is
shown below. It is a linking issue related to the change in the the default
behavior of gcc under Snow Leopard. Before it used to compile for the 32 bit
i386 architecture, now the default is the 64 bit x86_64 architecture.
On Wed, Sep 2, 2009 at 4:37 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 17:23, Jeremy Mayes <jeremy.ma...@gmail.com> wrote:
This one line causes Python to core dump on Linux:
numpy.lexsort([
Hello fellow numpy users,
I posted some questions on histograms recently [1, 2] but still couldn't
find a solution.
I am trying to create an inverse cumulative histogram [3] which shall
look like [4] but with the higher values at the left.
The classification shall follow this exemplary rule:
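The rule itself is cut off in the archive; as a generic sketch of an inverse (reversed) cumulative histogram, with made-up data:

```python
import numpy as np

data = np.array([1, 2, 2, 3, 3, 3, 4, 5])
counts, edges = np.histogram(data, bins=5, range=(1, 6))

# Reversed cumulative sum: inv_cum[i] counts the values >= edges[i],
# so the largest totals sit at the left end when plotted in bin order.
inv_cum = np.cumsum(counts[::-1])[::-1]
print(counts)    # [1 2 3 1 1]
print(inv_cum)   # [8 7 5 2 1]
```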
I experience the same problem.
A few more additional test cases:
In [1]: import numpy
In [2]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5)])
Out[2]: array([0, 1, 2, 3, 4])
In [3]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5.)])
Out[3]: array([0, 1, 2, 3, 4])
In
On Wed, Sep 2, 2009 at 1:25 PM, Chad Netzer chad.net...@gmail.com wrote:
On Mon, Aug 31, 2009 at 9:06 PM, Sturla Molden <stu...@molden.no> wrote:
We recently had a discussion regarding an optimization of NumPy's median
to average O(n) complexity. After some searching, I found out there is a
On Wed, Sep 2, 2009 at 5:19 PM, Citi, Luca lc...@essex.ac.uk wrote:
I experience the same problem.
A few more additional test cases:
In [1]: import numpy
In [2]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5)])
Out[2]: array([0, 1, 2, 3, 4])
In [3]:
On Wed, Sep 2, 2009 at 18:15, Tim Michelsen <timmichel...@gmx-topmail.de> wrote:
Hello fellow numpy users,
I posted some questions on histograms recently [1, 2] but still couldn't
find a solution.
I am trying to create an inverse cumulative histogram [3] which shall
look like [4] but with the
On Wed, Sep 2, 2009 at 5:19 PM, Citi, Luca lc...@essex.ac.uk wrote:
I experience the same problem.
A few more additional test cases:
In [1]: import numpy
In [2]: numpy.lexsort([numpy.arange(5)[::-1].copy(), numpy.arange(5)])
Out[2]: array([0, 1, 2, 3, 4])
In [3]:
On Wed, Sep 2, 2009 at 7:26 PM, Robert Kern <robert.k...@gmail.com> wrote:
On Wed, Sep 2, 2009 at 18:15, Tim Michelsen <timmichel...@gmx-topmail.de>
wrote:
Hello fellow numpy users,
I posted some questions on histograms recently [1, 2] but still couldn't
find a solution.
I am trying to create a
Hello Robert and Josef,
thanks for the quick answers! I really appreciate this.
I am trying to create an inverse cumulative histogram [3] which shall
look like [4] but with the higher values at the left.
Okay. That is completely different from what you've asked before.
You are right.
But it's
31/08/09 @ 14:37 (-0400), thus spake Pierre GM:
On Aug 31, 2009, at 2:33 PM, Ernest Adrogué wrote:
30/08/09 @ 13:19 (-0400), thus spake Pierre GM:
I can't reproduce that with a recent SVN version (r7348). What
version
of numpy are you using ?
Version 1.2.1
That must be it.
On Wed, Sep 2, 2009 at 19:11, Tim Michelsen <timmichel...@gmx-topmail.de> wrote:
Hello Robert and Josef,
thanks for the quick answers! I really appreciate this.
I am trying to create an inverse cumulative histogram [3] which shall
look like [4] but with the higher values at the left.
Okay. That
On Wed, Sep 2, 2009 at 4:23 PM, Jeremy Mayes jeremy.ma...@gmail.com wrote:
This one line causes Python to core dump on Linux:
numpy.lexsort([
    numpy.array(['-','-','-','-','-','-','-','-','-','-','-','-','-'])[::-1],
    numpy.array([732685., 732685., 732685., 732685., 732685., 732685.,
                 732685., 732685.,
On Wed, Sep 2, 2009 at 1:58 PM, Robert Kern robert.k...@gmail.com wrote:
On Wed, Sep 2, 2009 at 13:28, Gökhan Sever <gokhanse...@gmail.com> wrote:
Put the reference manual in:
http://drop.io/1plh5rt
First few pages describe the data format they use.
Ah. The fields are *not* delimited by
Chad Netzer wrote:
By the way, as far as I can tell, the above algorithm is exactly the
same idea as a non-recursive Hoare (i.e. quicksort) selection: do the
partition, then only proceed to the sub-partition that must contain
the nth element. My version is a bit more general, allowing
On Wed, Sep 2, 2009 at 23:59, Gökhan Sever <gokhanse...@gmail.com> wrote:
Robert,
You must have thrown a couple of RTFM's while replying to my emails :)
Not really. There's no manual for this. Greg Wilson's _Data Crunching_
may be a good general introduction to how to think about these
problems.
On Thu, Sep 3, 2009 at 00:09, Sturla Molden <stu...@molden.no> wrote:
Chad Netzer wrote:
I'd also like to, if possible, have a specialized 2D version, since
image media filtering is one of my interests, and the C version works
on 1D (raveled) arrays only.
I agree. NumPy (or SciPy) could have a
On Wed, Sep 2, 2009 at 10:28 PM, Robert Kern <robert.k...@gmail.com> wrote:
When he is talking about 2D, I believe he is referring to median
filtering rather than computing the median along an axis. I.e.,
replacing each pixel with the median of a specified neighborhood
around the pixel.
That's
Robert Kern wrote:
When he is talking about 2D, I believe he is referring to median
filtering rather than computing the median along an axis. I.e.,
replacing each pixel with the median of a specified neighborhood
around the pixel.
That's not something numpy's median function should be
Chad Netzer wrote:
But Charles Harris's earlier suggestion of some hard coded medians for
common filter template sizes (ie 3x3, 5x5, etc.) may be a nice
addition to scipy, especially if it can be generalized somewhat to
other filters.
For 2D images try looking into PIL :