Dear Paulo,

On Fri, Feb 13, 2015 at 9:57 AM, Paulo van Breugel
<[email protected]> wrote:
> I guess this is because the calculations are done in-memory? Any way to
> avoid this memory problem when using large data sets (something like working
> with memmap objects?)

With memmap you still have a limit of 2 GB, I guess; you should try dask.

Dask Array implements the NumPy ndarray interface using blocked
algorithms, cutting up the large array into many small arrays. This
lets us compute on arrays larger than memory using all of our cores.
We coordinate these blocked algorithms using dask graphs.

http://dask.readthedocs.org/en/latest/array.html
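Just to give an idea, a minimal sketch of how it looks (untested on my
side; the array shape and chunk size below are only made-up examples):

    import dask.array as da

    # a large array split into 1000 x 1000 blocks; each block is a
    # small numpy array, and nothing is computed until compute()
    x = da.ones((50000, 50000), chunks=(1000, 1000))

    # ordinary numpy-style expression, evaluated block by block
    y = ((x + 1) ** 2).mean()

    print(y.compute())  # runs the blocked computation on all cores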

I haven't had a chance to try it yet, but it supports the numpy array
syntax, and since you are using quite basic functionality I think
you should be able to work with it.
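For data already on disk you could even combine it with memmap: wrap
the memmap in a dask array so only one block at a time is loaded. A
rough sketch (file name, dtype and shape are hypothetical, just to
show the idea):

    import numpy as np
    import dask.array as da

    # hypothetical: a large raster exported to a raw binary file
    mm = np.memmap('raster.dat', dtype='float32', mode='r',
                   shape=(60000, 60000))

    # wrap it in a dask array; blocks are read on demand
    d = da.from_array(mm, chunks=(1000, 1000))

    result = (d - d.mean()).std()
    print(result.compute())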

All the best

Pietro
