An alternative solution may be numpy.memmap:
https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html

If you are sure your subsequent computation on the array data has enough locality 
to avoid thrashing, I think numpy.memmap would work for you, i.e. using an 
explicit disk file as swap space.
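
For instance, a minimal sketch along those lines (the file name 'big_matrix.dat' 
is hypothetical, and the 600,000 x 600,000 float32 shape just mirrors the question 
below; the backing volume needs roughly 1.31 TiB free):

    import numpy as np

    # Back the array by an explicit file on disk instead of anonymous RAM.
    arr = np.memmap('big_matrix.dat', dtype=np.float32,
                    mode='w+', shape=(600_000, 600_000))

    # Reads/writes fault in disk pages on demand; the OS page cache keeps
    # hot regions resident in physical RAM.
    arr[0, :1000] = 1.0
    arr.flush()  # write dirty pages back to the backing file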

My environment does a lot of mmap'ing of on-disk data files from C++ (after Python 
has read the metadata), then wraps the mappings as ndarrays. That is enough to run 
out-of-core programs, as long as the data access pattern at any instant fits in 
physical RAM; then even scanning the whole dataset is okay along the time axis 
(real-world time, that is, not a time axis in the data).
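
In pure Python terms, the idea is roughly as follows (the mapping is done in C++ 
in our setup; the file name and the flat float32 layout here are only assumptions 
for illustration):

    import mmap
    import numpy as np

    # Map an existing binary data file and view it as an ndarray without copying.
    with open('chunk_0001.f32', 'rb') as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

    arr = np.frombuffer(mm, dtype=np.float32)  # zero-copy, read-only view
    print(arr.shape, arr[:5])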

Memory (address-space) fragmentation is a problem, besides the OS `nofile` limit 
(the number of file handles held open), when too many small data files are 
involved. We are switching to a solution based on a FUSE filesystem that presents 
many small files on a remote storage server as one large virtual file.

Cheers,
Compl

> On 2020-03-25, at 02:35, Stanley Seibert <sseib...@anaconda.com> wrote:
> 
> In addition to what Sebastian said about memory fragmentation and OS limits 
> about memory allocations, I do think it will be hard to work with an array 
> that close to the memory limit in NumPy regardless.  Almost any operation 
> will need to make a temporary array and exceed your memory limit.  You might 
> want to look at Dask Array for a NumPy-like API for working with chunked 
> arrays that can be staged in and out of memory:
> 
> https://docs.dask.org/en/latest/array.html
> 
> As a bonus, Dask will also let you make better use of the large number of CPU 
> cores that you likely have in your 1.9 TB RAM system.  :)
> 
> On Tue, Mar 24, 2020 at 1:00 PM Keyvis Damptey <quantkey...@gmail.com> wrote:
> Hi Numpy dev community,
> 
> I'm Keyvis, a statistical data scientist.
> 
> I'm currently using NumPy on Python 3.8.2 (64-bit) for a clustering problem, on 
> a machine with 1.9 TB of RAM. When I try using np.zeros to create a 600,000 by 
> 600,000 matrix of dtype=np.float32, it says:
> "Unable to allocate 1.31 TiB for an array with shape (600000, 600000) and 
> data type float32"
> 
> I used psutil to determine how much RAM Python thinks it has access to, and 
> it returned approximately 1.8 TB.
> 
> Is there some way I can fix numpy to create these large arrays?
> Thanks for your time and consideration

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
