Unfortunately, I don't think that is the right explanation.
The problematic file is 386 MB.
The correctly compressed file is 135 MB.
So they cannot be considered small datasets. Moreover, what is
strange is that the two files contain the same data, produced sequentially
in a simulation mode (with real data, the compressed file is about 10 MB).
That is why I'm talking about random or non-deterministic compression.
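For what it's worth, HDF5's gzip filter is just zlib deflate, which is deterministic for a given input and compression level. A minimal sketch to check this (the sample payload and level 6 are illustrative assumptions, not taken from the problem files):

```python
import zlib

# Compress the same byte string twice with identical settings.
# zlib deflate is deterministic, so the outputs must be identical.
payload = b"simulated sensor record\n" * 10_000

first = zlib.compress(payload, level=6)
second = zlib.compress(payload, level=6)

print(first == second)          # True: same bytes in, same bytes out
print(len(first), len(second))  # identical compressed sizes
```

So if two HDF5 files holding "the same data" compress to very different sizes, the bytes actually handed to the filter likely differ between runs (e.g. uninitialized padding, fill values, or differing chunk layout), rather than the compressor behaving randomly.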

--
View this message in context: 
http://hdf-forum.184993.n3.nabble.com/hdf5-compression-problem-tp4025575p4025587.html
Sent from the hdf-forum mailing list archive at Nabble.com.

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
