Not sure if this is really relevant to the original message, but here is my
opinion. I think that the numpy/scipy community would greatly benefit from a
platform enabling easy sharing of code written by users. This would provide a
database of solved problems, where people could dig without
Oh, I didn't even know it existed!
Thanks for all the feedback (on the SSD too). As for the biggus library for
working on larger-than-memory arrays: this is really interesting, but
unfortunately I don't have time to test it at the moment; I will try to
have a look at it in the future. I hope to see something like that
Hi everybody, I hope this has not been discussed before, I couldn't find a
solution elsewhere.
I need to read some binary data, and I am using numpy.fromfile to do this.
Since the files are huge, and would make me run out of memory, I need to read
data skipping some records (I am reading data
This solution does not work for me since I have an offset before the data that
is not a multiple of the datatype (it's a header containing various stuff).
I'll have a look at pytables.
# Exploit the operating system's virtual memory manager to get a
# virtual copy of the entire file in memory
# (This does not
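The memory-mapping suggestion above can be sketched as follows. This is only an illustration: the file name, the 16-byte header, the float64 records, and the every-4th-record stride are all made-up assumptions, not details from the thread.

```python
import numpy as np

# Write a small stand-in binary file: a 16-byte header followed by
# float64 records (the real layout is whatever your instrument writes).
with open("data.bin", "wb") as f:
    f.write(b"\x00" * 16)
    np.arange(20, dtype=np.float64).tofile(f)

# Map the file: the OS pages data in lazily, so nothing is actually
# read from disk until a slice is accessed. np.memmap takes an
# arbitrary byte offset, so the header is not a problem here.
mm = np.memmap("data.bin", dtype=np.float64, mode="r", offset=16)

# Take every 4th record without loading the rest into memory;
# np.array() copies the strided view into an ordinary in-memory array.
subset = np.array(mm[::4])
print(subset)   # [ 0.  4.  8. 12. 16.]
```

Slicing the memmap only touches the pages that hold the selected records, which is the point of the suggestion: the skipped chunks are never read.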
On Wed, Mar 13, 2013 at 1:45 PM, Andrea Cimatoribus
<andrea.cimatori...@nioz.nl> wrote:
Hi everybody, I hope this has not been discussed before, I couldn't find a
solution elsewhere.
I need to read some binary data, and I am using numpy.fromfile to do this.
Since the files are huge
Sent: Wednesday, 13 March 2013 15:32
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] R: fast numpy.fromfile skipping data chunks
On Wed, Mar 13, 2013 at 2:18 PM, Andrea Cimatoribus
<andrea.cimatori...@nioz.nl> wrote:
This solution does not work for me since I have an offset before the data
-discussion-boun...@scipy.org] on behalf of Andrea Cimatoribus
Sent: Wednesday, 13 March 2013 15:37
To: Discussion of Numerical Python
Subject: [Numpy-discussion] R: R: fast numpy.fromfile skipping data chunks
Indeed, but that offset should be a multiple of the byte-size of dtype, as
the help says.
My mistake, sorry: even though the help says so, it seems that this is not
the case in the actual code. Still, the problem remains with the size of the
available data (which is not necessarily a multiple of dtype
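A simple way around the header problem described above is to skip the header by hand with file.seek() and then let numpy.fromfile read from the already-positioned file object. This is a sketch under assumed details: the file name, the 10-byte header (deliberately not a multiple of the 8-byte float64 itemsize), and the record values are all invented for illustration.

```python
import numpy as np

# Build a file whose header length (10 bytes) is NOT a multiple of the
# 8-byte float64 itemsize -- exactly the situation described above.
with open("odd_header.bin", "wb") as f:
    f.write(b"HEADER0123")                      # 10-byte header
    np.arange(6, dtype=np.float64).tofile(f)

with open("odd_header.bin", "rb") as f:
    f.seek(10)                                  # skip the header manually
    values = np.fromfile(f, dtype=np.float64)   # reads from current position

print(values)   # [0. 1. 2. 3. 4. 5.]
```

Since fromfile starts at the file object's current position, the byte offset of the data never has to be a multiple of the itemsize; only the region actually read must contain whole records.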
Sent: Wednesday, 13 March 2013 15:53
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] R: R: R: fast numpy.fromfile skipping data
chunks
On Wed, Mar 13, 2013 at 2:46 PM, Andrea Cimatoribus
<andrea.cimatori...@nioz.nl> wrote:
Indeed, but that offset should be a multiple of the byte-size of dtype
-discussion-boun...@scipy.org [numpy-discussion-boun...@scipy.org] on
behalf of Nathaniel Smith [n...@pobox.com]
Sent: Wednesday, 13 March 2013 16:43
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] R: R: R: R: fast numpy.fromfile skipping
data chunks
On 13 Mar 2013 15:16, Andrea Cimatoribus wrote:
Dear all,
I would like to avoid the use of a boolean array (mask) in the following
statement:
mask = (A != 0.)
B = A[mask]
in order to be able to move this bit of code into a Cython script (boolean
arrays are not yet implemented there, and they slow down execution a lot as
they can't be
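One possible workaround for the question above (a sketch, not necessarily the solution adopted in the thread) is to pass integer indices instead of a boolean mask: np.flatnonzero returns a plain intp array, which maps directly onto an ordinary typed memoryview in Cython. The array A here is a made-up example.

```python
import numpy as np

A = np.array([0.0, 1.5, 0.0, -2.0, 3.0])

# Integer indices of the nonzero entries. The result has dtype np.intp,
# which Cython can take as a typed buffer (e.g. np.intp_t[:]) instead of
# the unsupported boolean-mask fancy indexing.
idx = np.flatnonzero(A)

B = A[idx]          # same result as B = A[A != 0.]
print(idx)          # [1 3 4]
print(B)            # [ 1.5 -2.   3. ]
```

Fancy indexing with an integer array produces exactly the same B as the boolean mask, so the surrounding NumPy code is unchanged; only the intermediate passed into Cython differs.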