On 6 February 2016 at 23:56, Elliot Hallmark <permafact...@gmail.com> wrote:
> Now, I would like to have these arrays shared between processes spawned via 
> multiprocessing (for fast interprocess communication purposes, not for 
> parallelizing work on an array).  I don't care about mapping to a file on 
> disk, and I don't want disk I/O happening.  I don't care (really) about data 
> being copied in memory on resize.  I *do* want the array to be resized "in 
> place", so that the child processes can still access the arrays from the 
> object they were initialized with.

If you are only reading in parallel, and you can afford the extra
dependency, an alternative would be to use an expandable array
(EArray) from PyTables/HDF5:

http://www.pytables.org/usersguide/libref/homogenous_storage.html#earrayclassdescr
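
Something along these lines (a minimal sketch; the file name and the
array shape are placeholders, and it assumes PyTables is installed):
you create an EArray whose first dimension is extendable, then append
rows to it whenever you need it to grow.

import numpy as np
import tables

# Create an HDF5 file holding an EArray that can grow along its first axis.
f = tables.open_file("shared.h5", mode="w")
# A 0 in the shape marks the extendable dimension.
arr = f.create_earray(f.root, "data",
                      atom=tables.Float64Atom(), shape=(0, 100))

arr.append(np.random.rand(10, 100))  # grow by 10 rows
arr.append(np.random.rand(5, 100))   # grow by 5 more
print(arr.shape)                     # (15, 100)
f.close()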

To avoid disk I/O, your file can live entirely in RAM:

http://www.pytables.org/cookbook/inmemory_hdf5_files.html
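
Roughly like this (again only a sketch, following the cookbook recipe:
the CORE driver keeps the whole file in memory, and
driver_core_backing_store=0 prevents the in-memory image from being
written to disk when the file is closed):

import numpy as np
import tables

# Open an HDF5 file backed entirely by RAM; nothing touches the disk.
f = tables.open_file("inmemory.h5", mode="w",
                     driver="H5FD_CORE",
                     driver_core_backing_store=0)
arr = f.create_earray(f.root, "data",
                      atom=tables.Float64Atom(), shape=(0, 100))
arr.append(np.random.rand(10, 100))
print(arr[:3, :5])
f.close()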