Hi, 

Just to report some partial progress: I have changed my program to produce
contiguous instead of chunked Datasets on each of the many Group nodes; the
ability to resize turned out to be unnecessary once I had introduced an
in-memory cache for the data to be written.
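
For reference, the change roughly amounts to the following (a minimal C
sketch against the HDF5 1.8 API; the function name, the dataset name "data",
the rank and the element type are placeholders, not what my program actually
uses):

/* Minimal sketch of a contiguous, fixed-size Dataset written in one go;
   group handle, name, type and size below are placeholders. */
#include "hdf5.h"

herr_t write_contiguous(hid_t group, const double *buf, hsize_t n)
{
    hsize_t dims[1] = { n };
    /* fixed-size dataspace: no maxdims, so no resizing later */
    hid_t space = H5Screate_simple(1, dims, NULL);

    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    /* contiguous is the default layout; set explicitly for clarity.
       The old chunked version would have called H5Pset_chunk() here
       and passed maxdims to H5Screate_simple(). */
    H5Pset_layout(dcpl, H5D_CONTIGUOUS);

    hid_t dset = H5Dcreate2(group, "data", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);
    /* buf is the in-memory cache, flushed in a single write */
    herr_t status = H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL,
                             H5P_DEFAULT, buf);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    return status;
}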

Result: the file size decreased to about two thirds of what it was, so only
5x overhead now. Much better. Incidentally, there seems to be no difference
in write speed between chunked and contiguous Datasets, which probably means
that my write bottleneck is not raw disk speed.

cheers, Nils
