Hello,
I wanted to use PyTables in conjunction with multiprocessing for some
embarrassingly parallel tasks.
However, it seems that it is not possible. In the following (very
stupid) example, X is a CArray of size (100, 10) stored in the file
test.hdf5:
import tables
import multiprocessing
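The code is truncated at this point in the archive. As a sketch, the
failing example might have continued along these lines; test.hdf5 and
the node X come from the description above, while the column-average
computation and the pool size are assumptions:

h5file = tables.openFile('test.hdf5', mode='r')
X = h5file.root.X  # CArray of shape (100, 10)

def f(col):
    # Each worker inherits the file handle opened in the parent process;
    # sharing one HDF5 handle across processes is what goes wrong.
    return X[:, col].mean()

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)
    print(pool.map(f, range(10)))  # crashes or hangs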
On 11/07/2013 21:56, Anthony Scopatz wrote:
Hi Mathieu,
I think you should try opening a new file handle per process. The
following works for me on v3.0:
import tables
import random
import multiprocessing

# Reload the data
# Use multiprocessing to perform a simple computation (column average)
def f(filename):
    # Open a fresh handle inside the worker instead of inheriting one
    h5file = tables.openFile(filename, mode='r')
    result = h5file.root.X[:].mean(axis=0)  # column averages
    h5file.close()
    return result
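A small driver along these lines exercises f; the pool size and mapping
the one file name four times are assumptions, not part of the original
message:

if __name__ == '__main__':
    pool = multiprocessing.Pool(4)
    # Each task opens and closes its own handle on the same file
    print(pool.map(f, ['test.hdf5'] * 4))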
Hi Anthony,
Thank you very much for your answer (it works). I will try to remodel my
code around this trick, but I'm not sure it's possible because I use a
framework that needs arrays.
Can somebody explain what is going on? I was thinking that PyTables keeps
a weakref to the file for lazy loading.
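For what it's worth, each PyTables node does carry a reference back to
its hosting File instance, exposed as the _v_file attribute on Node, so
handing an array node to another process drags the file handle along
with it. A quick check, reusing test.hdf5 and X from above:

import tables

h5file = tables.openFile('test.hdf5', mode='r')
X = h5file.root.X
# Node._v_file points back to the hosting File instance
print(X._v_file is h5file)  # True
h5file.close()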