> or is it like Mark describes, you want a compressed representation on disk
> and to work with full matrices in memory?

Well, I wasn't suggesting sparse on disk but then dense in memory. But my 
response may have been confusing.

I was just trying to get around the fact that, unless the data producer writes 
additional information to the file, such as a used-blocks map, a reader has no 
way of knowing which blocks are empty except to attempt to read them and then 
check whether they contain nothing but the fill value.
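
For example, assuming a double-typed dataset, the "is it empty?" test is just 
a scan of the block you read back against the fill value. A minimal sketch 
(the helper name is mine, not part of the HDF5 API):

#include <stddef.h>

/* Return 1 if every element of buf equals fill_value, else 0.
 * Assumes a double-typed dataset; adapt the element type as needed. */
static int is_all_fill(const double *buf, size_t nelems, double fill_value)
{
    for (size_t i = 0; i < nelems; i++)
        if (buf[i] != fill_value)
            return 0;
    return 1;
}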

The downside is that you wind up instantiating fully populated blocks in 
memory even when the block didn't exist in the file. But if you do this 
piece-wise, as opposed to issuing a single H5Dread call for the entire matrix, 
*and* you design your H5Dread calls to align with one or more full blocks, you 
can build up the equivalent sparse representation in memory *without* paying 
the initial price of instantiating a full, dense matrix and then tearing it 
down to build the final sparse representation.
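
To make that concrete, here's a rough sketch of the piece-wise, block-aligned 
read using the C API. It assumes a chunked 2-D double dataset "/matrix" in 
"file.h5" (both names are placeholders), a fill value of 0.0, chunk dimensions 
that evenly divide the dataset extents, and the is_all_fill helper above; 
error checking is omitted:

#include <stdlib.h>
#include "hdf5.h"

void read_sparse_blocks(void)
{
    hid_t file   = H5Fopen("file.h5", H5F_ACC_RDONLY, H5P_DEFAULT);
    hid_t dset   = H5Dopen2(file, "/matrix", H5P_DEFAULT);
    hid_t fspace = H5Dget_space(dset);
    hid_t dcpl   = H5Dget_create_plist(dset);

    hsize_t dims[2], chunk[2];
    H5Sget_simple_extent_dims(fspace, dims, NULL);
    H5Pget_chunk(dcpl, 2, chunk);                 /* block == chunk size */

    hid_t   mspace = H5Screate_simple(2, chunk, NULL);
    double *buf    = malloc(chunk[0] * chunk[1] * sizeof(double));

    /* One H5Dread per chunk-sized block, aligned to chunk boundaries. */
    for (hsize_t r = 0; r < dims[0]; r += chunk[0]) {
        for (hsize_t c = 0; c < dims[1]; c += chunk[1]) {
            hsize_t start[2] = { r, c };
            H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL,
                                chunk, NULL);
            H5Dread(dset, H5T_NATIVE_DOUBLE, mspace, fspace,
                    H5P_DEFAULT, buf);

            if (is_all_fill(buf, (size_t)(chunk[0] * chunk[1]), 0.0))
                continue;    /* block was never written; skip it */

            /* Otherwise hand buf off to your in-memory sparse
             * structure, keyed by the block origin (r, c). */
        }
    }

    free(buf);
    H5Sclose(mspace);
    H5Pclose(dcpl);
    H5Sclose(fspace);
    H5Dclose(dset);
    H5Fclose(file);
}

The point is that the dense buffer only ever holds one block at a time, never 
the whole matrix.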

Does that make sense?

Mark
