I'm trying to find the best compression settings for 2- and 3-dimensional matrices of recorded data. The values don't change much over time, and time is the first axis of each dataset. I expected that enabling the shuffle filter would improve compression, but instead the resulting files came out larger than without shuffle.

Is the shuffle filter meant to work with compound types? And is there anything I should be considering in how the axes of the dataset are organized in order to get better compression?
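To illustrate the setup I'm describing, here is a minimal sketch (this assumes h5py; the file name, dataset name, shape, chunk size, and compound dtype are just made-up placeholders, not my real data):

import numpy as np
import h5py

# Placeholder compound record type standing in for the real one.
record_dtype = np.dtype([("voltage", np.float64), ("current", np.float64)])

# Time is the first axis; e.g. 10000 time steps of a 32x32 grid (placeholder shape).
shape = (10000, 32, 32)
data = np.zeros(shape, dtype=record_dtype)  # stands in for slowly-changing recorded data

with h5py.File("recorded.h5", "w") as f:
    # Chunked dataset with shuffle + gzip; each chunk spans many time steps
    # so the slowly-varying values land in the same chunk.
    dset = f.create_dataset(
        "data",
        shape=shape,
        dtype=record_dtype,
        chunks=(1000, 32, 32),
        shuffle=True,
        compression="gzip",
        compression_opts=4,
    )
    dset[...] = data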


