Setting aside the strange sizing issues in the earlier messages for a moment...

Let's say I have a dataset dimensioned (26, 160, 1051200)
and chunked (1, 15, 240).

As I understand it, the elements within each individual chunk will be stored in row-major (C) order, with the last dimension varying fastest:
[ 0, 0, 0-239 ], [ 0, 1, 0-239 ], ... [ 0, 14, 0-239 ]
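
(If I have that right, an element's position within its chunk is just the plain row-major linearization -- a quick sketch of my understanding, nothing HDF5-specific:)

    #include <stddef.h>

    /* Sketch: row-major (C-order) offset of element (i, j, k) within its
     * chunk, for chunk dimensions (1, 15, 240).  Purely illustrative --
     * this is just how I understand HDF5 lays out elements in a chunk. */
    static size_t chunk_elem_offset(size_t i, size_t j, size_t k)
    {
        const size_t cdims[3] = {1, 15, 240};   /* chunk shape */
        size_t ci = i % cdims[0];               /* coords within the chunk */
        size_t cj = j % cdims[1];
        size_t ck = k % cdims[2];
        return (ci * cdims[1] + cj) * cdims[2] + ck;  /* last dim fastest */
    }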

and the chunks themselves will be ordered thus (the last chunk in each row starting at 1051200 - 240 = 1050960):
[ 0, 0, 0 ], [ 0, 0, 240 ] ... [ 0, 0, 1050960 ], [ 0, 15, 0 ], [ 0, 15, 240 ] ... [ 0, 15, 1050960 ]
and so on...

Is that correct?
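
(For what it's worth, if I'm reading the docs right, HDF5 1.10.5 and later have H5Dget_num_chunks / H5Dget_chunk_info, which should let me verify the ordering directly by dumping each chunk's logical offset and file address. A rough sketch -- the file and dataset names are made up, error checks omitted:)

    #include <stdio.h>
    #include "hdf5.h"

    int main(void)
    {
        /* "data.h5" and "dset" are hypothetical names. */
        hid_t file = H5Fopen("data.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
        hid_t dset = H5Dopen2(file, "dset", H5P_DEFAULT);

        hsize_t nchunks = 0;
        H5Dget_num_chunks(dset, H5S_ALL, &nchunks);

        for (hsize_t idx = 0; idx < nchunks; idx++) {
            hsize_t  offset[3];      /* logical offset of the chunk */
            unsigned filter_mask;
            haddr_t  addr;           /* byte address within the file */
            hsize_t  size;           /* stored size in bytes */
            H5Dget_chunk_info(dset, H5S_ALL, idx, offset, &filter_mask,
                              &addr, &size);
            printf("chunk %llu: [%llu, %llu, %llu] at 0x%llx (%llu bytes)\n",
                   (unsigned long long)idx,
                   (unsigned long long)offset[0],
                   (unsigned long long)offset[1],
                   (unsigned long long)offset[2],
                   (unsigned long long)addr, (unsigned long long)size);
        }

        H5Dclose(dset);
        H5Fclose(file);
        return 0;
    }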

Should I expect peak read performance by reading one chunk at a time in that order, assuming each chunk is 1 MB in size and the raw data chunk cache is also 1 MB?
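
Concretely, I'd be doing something like the sketch below: one chunk-aligned hyperslab at a time, in that order, with the chunk cache sized to 1 MB via H5Pset_chunk_cache. The dataset name and the double element type are assumptions on my part; note the edge chunks in dimension 1 need clamping since 160 isn't a multiple of 15.

    #include "hdf5.h"

    /* Sketch: read one chunk-aligned hyperslab at a time, in the order
     * the chunks should appear in the file. */
    void read_by_chunk(hid_t file)
    {
        const hsize_t chunk[3] = {1, 15, 240};
        const hsize_t dims[3]  = {26, 160, 1051200};

        /* Size the raw data chunk cache to 1 MB on this dataset's
         * access property list. */
        hid_t dapl = H5Pcreate(H5P_DATASET_ACCESS);
        H5Pset_chunk_cache(dapl, 521, 1024 * 1024,
                           H5D_CHUNK_CACHE_W0_DEFAULT);

        hid_t dset   = H5Dopen2(file, "dset", dapl);  /* name made up */
        hid_t fspace = H5Dget_space(dset);
        double buf[1 * 15 * 240];   /* holds one full chunk */

        hsize_t start[3], count[3];
        for (start[0] = 0; start[0] < dims[0]; start[0] += chunk[0])
          for (start[1] = 0; start[1] < dims[1]; start[1] += chunk[1])
            for (start[2] = 0; start[2] < dims[2]; start[2] += chunk[2]) {
                for (int d = 0; d < 3; d++)   /* clamp edge chunks */
                    count[d] = (start[d] + chunk[d] <= dims[d])
                                   ? chunk[d] : dims[d] - start[d];
                H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL,
                                    count, NULL);
                hid_t mspace = H5Screate_simple(3, count, NULL);
                H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace,
                        H5P_DEFAULT, buf);
                H5Sclose(mspace);
            }

        H5Sclose(fspace);
        H5Dclose(dset);
        H5Pclose(dapl);
    }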

I notice there are functions for examining the hit rate of the metadata cache... any chance of equivalent functions for the raw data chunk cache?
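
(The metadata cache functions I mean are H5Fget_mdc_hit_rate / H5Freset_mdc_hit_rate_stats, e.g.:)

    #include <stdio.h>
    #include "hdf5.h"

    /* Sketch: the metadata cache statistics calls I had in mind.
     * I haven't found a public equivalent for the raw data chunk cache. */
    void print_mdc_hit_rate(hid_t file)
    {
        double hit_rate = 0.0;
        H5Fget_mdc_hit_rate(file, &hit_rate);
        printf("metadata cache hit rate: %.1f%%\n", hit_rate * 100.0);
        H5Freset_mdc_hit_rate_stats(file);  /* start a fresh measurement */
    }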


