Hi Elena, 

As was pointed out, I was unfair in blaming h5py: it turns out that HDF5
itself supports unlimited dimensions only in chunked Datasets.
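For anyone following along, here is a minimal h5py sketch of that rule (file and dataset names are made up): a fixed-shape dataset defaults to contiguous storage, but as soon as maxshape contains None, the dataset has to be chunked before it can be resized.

```python
import h5py

with h5py.File("demo.h5", "w") as f:
    # Fixed-shape dataset: contiguous storage, no chunk index overhead.
    fixed = f.create_dataset("fixed", shape=(100,), dtype="f8")
    print(fixed.chunks)  # None -> contiguous

    # Unlimited first dimension: HDF5 requires chunked storage here.
    grow = f.create_dataset("grow", shape=(100,), maxshape=(None,),
                            dtype="f8", chunks=(100,))
    print(grow.chunks)   # (100,)
    grow.resize((200,))  # only possible because the dataset is chunked
```

The chunk size matters for the file-size problem in this thread: each chunk carries index overhead, so many tiny chunked datasets add up.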



> Would it be possible for you to write a C program that does the same thing
> as your Python script?
> 
No, unfortunately. I don't know enough C to do that and don't have the time
to learn it now.



> Can you send us output of "h5dump -H -p" on your file? Also, could you
> please run h5stat on the file and post that output too?
> 
Neil Fortner was already kind enough to do that by following my link to the
file in the OP; see his post:
http://hdf-forum.184993.n3.nabble.com/file-h5-tar-gz-is-20-times-smaller-what-s-wrong-tp2509949p2512318.html

Another idea I had was to make only one big Dataset which contains all of
the actual data, and then use the hierarchy of Groups in the file only to
store region references into the big Dataset. That way, I could possibly
avoid the per-dataset chunk-index overhead for the small portions of data
that are currently spread all over the nested Groups. Does that sound like
a good idea?
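In h5py, that layout could be sketched roughly like this (the names "/data" and "/index/part0" are made up for illustration): region references are created via Dataset.regionref and can be stored as attributes on the groups, then dereferenced later to read just that slice of the big Dataset.

```python
import h5py
import numpy as np

with h5py.File("regions.h5", "w") as f:
    # One big contiguous dataset holding all of the actual data.
    big = f.create_dataset("data", data=np.arange(1000, dtype="f8"))

    # The group hierarchy stores only a region reference per "piece"
    # instead of a separate small chunked dataset.
    grp = f.create_group("index/part0")
    grp.attrs["slice"] = big.regionref[10:20]

with h5py.File("regions.h5", "r") as f:
    ref = f["index/part0"].attrs["slice"]
    dset = f[ref]       # dereference back to the big dataset
    print(dset[ref])    # reads only the referenced region, values 10..19
```

Whether this actually shrinks the file would need measuring, but it does replace many small chunk indexes with one dataset plus lightweight references.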

Nils 

-- 
View this message in context: 
http://hdf-forum.184993.n3.nabble.com/file-h5-tar-gz-is-20-times-smaller-what-s-wrong-tp2509949p2525464.html
Sent from the hdf-forum mailing list archive at Nabble.com.

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
