On Thursday 03 February 2011 15:01:28 Bartosz Telenczuk wrote:
> > Uh, no. tables.Expr only supports simple element-wise operations
> > whose output has the same shape as the operands (so `nonzero` is
> > not supported). Also, it cannot carry out operations that make use
> > of different indices.
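
Since tables.Expr cannot express it, a `nonzero` over a large on-disk
array can instead be done with a manual chunk-wise scan. A minimal
sketch, assuming a CArray node `carr` in a file `data.h5` (both names
hypothetical) and the PyTables 2.x API:

import numpy as np
import tables

h5 = tables.openFile("data.h5", mode="r")
carr = h5.root.carr  # hypothetical CArray node

# Scan block by block so only one chunk-sized slab is in memory, and
# collect the global row indices of the nonzero elements.
step = carr.chunkshape[0]
nonzero_rows = []
for start in range(0, carr.shape[0], step):
    block = carr[start:start + step]
    nonzero_rows.append(np.nonzero(block)[0] + start)
indices = np.concatenate(nonzero_rows)
h5.close()
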
Hi again,
> Choosing the slice size should not be difficult; just pick something
> that is not too large or too small (anything between 1 MB and 10 MB
> should do fine). The only thing to keep in mind is that your slices
> should not exceed your available memory. PyTables will automatically
> determine a suitable chunkshape if you do not specify one.
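
For reference, leaving chunkshape unset lets PyTables compute one; a
quick sketch to inspect what it picks (file name and shape below are
made up):

import tables

h5 = tables.openFile("scratch.h5", mode="w")
# chunkshape=None (the default) asks PyTables to derive a sensible
# chunk shape from the array's full shape and item size.
carr = h5.createCArray(h5.root, "carr", tables.Float64Atom(),
                       shape=(1000000, 100))
print(carr.chunkshape)  # e.g. (N, 100) for some row count N
h5.close()
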
On Thursday 03 February 2011 13:35:55 Bartosz Telenczuk wrote:
> Is there a way to iterate over chunks, or should I just check
> chunkshape and adjust indices appropriately?
Well, I'd say that, in general, the suggested approach should work for
most arrays. For example, if the arrays are 3-dimensional, you can
iterate over the leading axis in steps given by the chunkshape.
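
A sketch of what such a chunk-aligned copy loop could look like (the
file names, dtype, and shapes below are assumptions for illustration):

import numpy as np
import tables

# Hypothetical source file and layout.
mm = np.memmap("data.bin", dtype="float64", mode="r",
               shape=(1000000, 100))

h5 = tables.openFile("copy.h5", mode="w")
carr = h5.createCArray(h5.root, "carr", tables.Float64Atom(),
                       shape=mm.shape)

# Copy in blocks whose leading dimension matches the chunkshape, so
# each assignment writes whole chunks and only one block is in memory.
step = carr.chunkshape[0]
for start in range(0, mm.shape[0], step):
    carr[start:start + step] = mm[start:start + step]
h5.close()
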
Hi Francesc,
Thanks for your answers.
> Yes, just try loading data in chunks. For example, let's say that your
> array is two-dimensional; I think something like this should work:
>
> carray = h5file.createCArray(...)  # createCArray() is a File method
> for i, row in enumerate(your_memmap_array):
>     carray[i] = row
>
> Maybe using
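
For completeness, a self-contained version of that sketch, with the
elided createCArray() arguments filled in by placeholders (file names,
dtype, and shape are assumptions):

import numpy as np
import tables

# Placeholder memmap onto an existing binary file.
your_memmap_array = np.memmap("data.bin", dtype="float64", mode="r",
                              shape=(100000, 50))

h5file = tables.openFile("out.h5", mode="w")
carray = h5file.createCArray(h5file.root, "carray", tables.Float64Atom(),
                             shape=your_memmap_array.shape)

# Row-by-row copy: only a single row is materialized in memory at a time.
for i, row in enumerate(your_memmap_array):
    carray[i] = row
h5file.close()
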
Hey Bartosz,
On Wednesday 02 February 2011 09:37:14 Bartosz Telenczuk wrote:
> Hi,
>
> I am trying to implement efficient out-of-memory computations on
> large arrays. I have two questions:
>
> 1) My data is stored in binary files, which I read using
> numpy.memmap. Is there a way to efficiently copy from memmap to
> CArray without reading all data into memory first? I suppose I could
Hi,
I am trying to implement efficient out-of-memory computations on large arrays.
I have two questions:
1) My data is stored in binary files, which I read using numpy.memmap. Is there
a way to efficiently copy from memmap to CArray without reading all data into
memory first? I suppose I could