Re: [Pytables-users] PyTables and Multiprocessing

2013-07-12 Thread Anthony Scopatz
On Fri, Jul 12, 2013 at 1:51 AM, Mathieu Dubois wrote: > Hi Anthony, > > Thank you very much for your answer (it works). I will try to remodel my > code around this trick but I'm not sure it's possible because I use a > framework that needs arrays. > I think that this method still works. You ca…

Re: [Pytables-users] PyTables and Multiprocessing

2013-07-11 Thread Mathieu Dubois
Hi Anthony, Thank you very much for your answer (it works). I will try to remodel my code around this trick but I'm not sure it's possible because I use a framework that needs arrays. Can somebody explain what is going on? I was thinking that PyTables keeps a weakref to the file for lazy loading …
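Since Mathieu's framework needs plain arrays, one possible workaround (a sketch of my own, not something stated in the thread) is to copy the CArray into an in-memory NumPy array in the parent process before forking, so the workers never touch an HDF5 file handle at all. The file name `data.h5` and node path `/X` are assumptions for illustration.

```python
# Sketch: materialize the on-disk CArray into a plain NumPy array
# before any multiprocessing starts. File/node names are hypothetical.
import multiprocessing

import numpy as np
import tables

def load_as_array(filename, node="/X"):
    """Open the file, copy the whole CArray into memory, close the file."""
    with tables.open_file(filename, mode="r") as h5:
        return h5.get_node(node)[:]   # [:] copies into an ndarray

def column_mean(args):
    """Worker operating on an ordinary ndarray -- no HDF5 involved."""
    X, col = args
    return float(X[:, col].mean())
```

Workers then receive ordinary pickled ndarrays (or inherit them via fork), which a framework expecting arrays can consume directly; the trade-off is that the whole dataset must fit in memory.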

Re: [Pytables-users] PyTables and Multiprocessing

2013-07-11 Thread Anthony Scopatz
Hi Mathieu, I think you should try opening a new file handle per process. The following works for me on v3.0: import tables import random import multiprocessing # Reload the data # Use multiprocessing to perform a simple computation (column average) def f(filename): h5file = tables.openFi…
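Anthony's snippet is cut off in the archive; a self-contained reconstruction of the open-a-handle-per-process pattern he names might look like the following. It uses the PyTables 3.x `open_file` spelling (`openFile` is the older alias he typed); the file name and node path are assumptions, not from the thread.

```python
# Each worker opens (and closes) its OWN file handle, so no HDF5
# state is shared across process boundaries.
import multiprocessing

import numpy as np
import tables

def make_demo_file(filename, shape=(100, 10)):
    """Create an HDF5 file holding a CArray /X, as in the thread's example."""
    data = np.arange(np.prod(shape), dtype=np.float64).reshape(shape)
    with tables.open_file(filename, mode="w") as h5:
        h5.create_carray("/", "X", obj=data)
    return data

def column_mean(args):
    """Worker: open a fresh handle in this process, average one column."""
    filename, col = args
    with tables.open_file(filename, mode="r") as h5:
        return float(h5.root.X[:, col].mean())

if __name__ == "__main__":
    data = make_demo_file("demo.h5")
    with multiprocessing.Pool(4) as pool:
        means = pool.map(column_mean, [("demo.h5", c) for c in range(10)])
```

The key point is that `tables.open_file` is called inside `column_mean`, i.e. after the worker process exists, so each process holds its own HDF5 library state.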

Re: [Pytables-users] PyTables and Multiprocessing

2013-07-11 Thread Mathieu Dubois
On 11/07/2013 21:56, Anthony Scopatz wrote: On Thu, Jul 11, 2013 at 2:49 PM, Mathieu Dubois wrote: Hello, I wanted to use PyTables in conjunction with multiprocessing for some embarrassingly parallel tasks. However, it seems that it …

Re: [Pytables-users] PyTables and Multiprocessing

2013-07-11 Thread Anthony Scopatz
On Thu, Jul 11, 2013 at 2:49 PM, Mathieu Dubois wrote: > Hello, > > I wanted to use PyTables in conjunction with multiprocessing for some > embarrassingly parallel tasks. > > However, it seems that it is not possible. In the following (very > stupid) example, X is a CArray of size (100, 10) store…
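Mathieu's example itself is truncated, but the setup he describes (a CArray of shape (100, 10) stored on disk) can be reproduced in a few lines. The file name, the random fill, and the optional zlib compression below are assumptions of mine, since the preview is cut off before the actual code.

```python
# Sketch: write a (100, 10) CArray to an HDF5 file, roughly matching
# the setup described in the thread. Names and filters are hypothetical.
import numpy as np
import tables

def write_carray(filename, shape=(100, 10)):
    """Store a random float array as the CArray /X in `filename`."""
    data = np.random.rand(*shape)
    filters = tables.Filters(complevel=1, complib="zlib")  # optional
    with tables.open_file(filename, mode="w") as h5:
        h5.create_carray("/", "X", obj=data, filters=filters)
    return data
```

Any later reader can then slice `h5.root.X` lazily, which is the behavior the rest of the thread is troubleshooting under multiprocessing.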