def f(filename):
    # each worker process opens its own handle on the file
    name = multiprocessing.current_process().name
    column = 0  # reconstructed; the original column choice was cut off in the digest
    print('%s use column %i' % (name, column))
    h5file = tables.openFile(filename, mode='r')
    rtn = h5file.root.X[:, column].mean()
    h5file.close()
    return rtn

p = multiprocessing.Pool(2)
col_mean = p.map(f, ['test.hdf5', 'test.hdf5', 'test.hdf5'])
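Purely as a hypothetical variant (not from the thread): if each worker should
average a different column, a (filename, column) pair can be passed through the
map instead of the filename alone, still opening the file inside the worker:

def f_col(args):
    filename, column = args          # hypothetical helper, same idea as f above
    h5file = tables.openFile(filename, mode='r')
    rtn = h5file.root.X[:, column].mean()
    h5file.close()
    return rtn

p = multiprocessing.Pool(2)
col_means = p.map(f_col, [('test.hdf5', c) for c in range(10)])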
Be well
Anthony
On Thu, Jul 11, 2013 at 3:43 PM, Mathieu Dubois wrote:
On 11/07/2013 21:56, Anthony Scopatz wrote:
On Thu, Jul 11, 2013 at 2:49 PM, Mathieu Dubois <duboismathieu_g...@yahoo.fr> wrote:
Hello,
I wanted to use PyTables in conjunction with multiprocessing for some
embarrassingly parallel tasks.
However, it seems that it is not possible. In the following (very
stupid) example, X is a CArray of size (100, 10) stored in the file
test.hdf5:
import tables
import multiprocessing
#
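For reference, a minimal sketch of the kind of pattern described above, assuming
the file is opened once in the parent and the already-open CArray is then used
from the worker processes (the original code was cut off here, so this is a
reconstruction, not the actual example):

import tables
import multiprocessing

# Open the file once in the parent process
h5file = tables.openFile('test.hdf5', mode='r')
X = h5file.root.X   # CArray of shape (100, 10)

def f(column):
    # X belongs to the parent's open file handle, shared across the fork
    return X[:, column].mean()

p = multiprocessing.Pool(2)
col_mean = p.map(f, range(10))  # typically fails: an open HDF5 handle is not
                                # safe to share between processes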
On 05/07/2013 16:54, Anthony Scopatz wrote:
On Fri, Jul 5, 2013 at 8:40 AM, Francesc Alted <fal...@gmail.com> wrote:
On 7/5/13 1:33 AM, Mathieu Dubois wrote:
> tables.tableExtension.Table._createTable (tables/tableExtension.c:2181)
On 05/07/2013 00:31, Anthony Scopatz wrote:
On Thu, Jul 4, 2013 at 4:13 PM, Mathieu Dubois <duboismathieu_g...@yahoo.fr> wrote:
Hello,
I'm a beginner with PyTables.
I wanted to store a database in an HDF5 file using PyTables. The DB is
made of a CSV file (which contains the subject information) and a lot of
images (I work on MRI, so the images are 3-dimensional float32 arrays of
shape (121, 145, 121)). The relation is very
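For what it's worth, a rough sketch of one way such a layout can be stored with
PyTables, assuming one row per subject in a Table built from the CSV plus one
image per subject appended to an EArray in the same order (the class name,
column names, and values below are illustrative assumptions, not the actual
schema):

import numpy as np
import tables

class Subject(tables.IsDescription):
    # assumed CSV columns, purely for illustration
    subject_id = tables.StringCol(16)
    age = tables.Int32Col()

h5file = tables.openFile('db.hdf5', mode='w')
subjects = h5file.createTable('/', 'subjects', Subject)
images = h5file.createEArray('/', 'images', atom=tables.Float32Atom(),
                             shape=(0, 121, 145, 121),  # extendable first axis
                             expectedrows=1000)

row = subjects.row
row['subject_id'] = 'S0001'
row['age'] = 42
row.append()
images.append(np.zeros((1, 121, 145, 121), dtype='float32'))

subjects.flush()
h5file.close()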