On Friday, 10 December 2010 at 01:13:50, Leigh Orf wrote:
> I am fully convinced now that you can't run hdf5's compression code
> in parallel, where the number of cores compressing is greater than
> the number of files being written. You can only run the compression
> filters 1:1 to each compressed file. I wrote up my sequential code
> and quickly realized that while I can write it such that each core
> will compress the data, the compression itself will happen
> sequentially, not in parallel.

In case you want to make use of all your cores, you may want to use 
Blosc (http://blosc.pytables.org/trac), which lets you use any number 
of cores (the number is configurable) via multithreading.  It uses a 
static pool of threads, so it is pretty efficient (although it works 
best with relatively large chunk sizes, typically >= 1 MB).
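For illustration, here is a minimal sketch of the same idea using only the Python standard library: a pool of threads compressing independent chunks in parallel. The function name and the zlib codec are my own substitutions for the sake of a self-contained example, not Blosc's actual API (Blosc keeps its thread pool inside its C core); note that zlib releases the GIL during compression, so the threads really do run concurrently:

```python
# Sketch of a thread pool compressing independent chunks in parallel,
# analogous to what Blosc does internally with its static pool.
# compress_chunks and the choice of zlib are illustrative substitutions.
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(chunks, nthreads=4):
    """Compress each chunk on a pool of nthreads worker threads."""
    # zlib.compress releases the GIL while it works, so the workers
    # genuinely overlap on multiple cores.
    with ThreadPoolExecutor(max_workers=nthreads) as pool:
        return list(pool.map(zlib.compress, chunks))

# Example: eight 1 MB chunks (Blosc likewise works best at >= 1 MB).
chunks = [bytes(1024 * 1024) for _ in range(8)]
compressed = compress_chunks(chunks)
assert all(zlib.decompress(c) == chunks[i] for i, c in enumerate(compressed))
```

The key point is the same as with Blosc: compression parallelizes across chunks, so the chunk size (not the file count) determines how well the cores are kept busy.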

-- 
Francesc Alted

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org