On Monday 12 April 2010 20:43:03, Stamminger, Johannes wrote:
> > Maybe I should try something like using multiple packet tables in
> > parallel, each using a different array size ... ?
> 
> That does the trick!!! With three packet tables with optimized sizes I
> now manage to get a file of size 30M within 14s. Unfortunately, my
> previously mentioned ZIP-comparison numbers were wrong: simple zip'ing
> takes 42s and results in a 23M file.
> 
> But this approach looks promising, also considering that the HDF file
> keeps the data accessible in an improved manner. Tomorrow I will
> additionally have to maintain a packet table keeping references to the
> arrays in their original order ...
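
For what it's worth, that order-preserving scheme might look roughly like
this with the high-level packet table API (the file and table names, the
size classes and the index record layout below are only my guesses):

#include <hdf5.h>
#include <hdf5_hl.h>

/* One packet table per array-size class, plus an index table whose
 * records say (which table, which row), so that the original packet
 * order can be reconstructed on read-back. */
typedef struct { int table; hsize_t row; } index_rec_t;

int main(void)
{
    hid_t file = H5Fcreate("packets.h5", H5F_ACC_TRUNC,
                           H5P_DEFAULT, H5P_DEFAULT);

    /* Fixed-length element types for two (made up) size classes */
    hsize_t small = 64, large = 16384;
    hid_t t_small = H5Tarray_create2(H5T_NATIVE_UCHAR, 1, &small);
    hid_t t_large = H5Tarray_create2(H5T_NATIVE_UCHAR, 1, &large);

    hid_t pt_small = H5PTcreate_fl(file, "small", t_small, 512, -1);
    hid_t pt_large = H5PTcreate_fl(file, "large", t_large, 8, -1);

    /* The index table that preserves the original order */
    hid_t t_idx = H5Tcreate(H5T_COMPOUND, sizeof(index_rec_t));
    H5Tinsert(t_idx, "table", HOFFSET(index_rec_t, table), H5T_NATIVE_INT);
    H5Tinsert(t_idx, "row",   HOFFSET(index_rec_t, row),   H5T_NATIVE_HSIZE);
    hid_t pt_idx = H5PTcreate_fl(file, "order", t_idx, 1024, -1);

    /* Append one small packet and record where it went */
    unsigned char buf[64] = {0};
    H5PTappend(pt_small, 1, buf);
    index_rec_t rec = {0, 0};                  /* table 0, row 0 */
    H5PTappend(pt_idx, 1, &rec);

    H5PTclose(pt_idx); H5PTclose(pt_small); H5PTclose(pt_large);
    H5Tclose(t_idx); H5Tclose(t_small); H5Tclose(t_large);
    H5Fclose(file);
    return 0;
}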

Mmh, if you don't want the nuisance of maintaining several tables for 
keeping your data, another possibility would be to compress your data 
before injecting it into variable-length types in HDF5.  You would have 
to deal with the zlib API to do so, but that would probably be easier 
than what you are planning, and you would get better results in terms of 
efficiency too.  The drawback is that you won't be able to read your 
data with standard HDF5 tools (like HDFView, for example).
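
To be concrete, here is a minimal sketch of what I mean, assuming each
packet is a plain byte buffer (the file and dataset names are just
placeholders):

#include <stdlib.h>
#include <zlib.h>
#include <hdf5.h>

/* Deflate one packet with zlib and store the result as a single
 * element of a variable-length dataset. */
int store_compressed(const unsigned char *raw, uLong nbytes)
{
    /* zlib wants a destination of at least compressBound() bytes */
    uLongf clen = compressBound(nbytes);
    unsigned char *cbuf = malloc(clen);
    if (cbuf == NULL ||
        compress2(cbuf, &clen, raw, nbytes, Z_BEST_SPEED) != Z_OK) {
        free(cbuf);
        return -1;
    }

    hid_t file  = H5Fcreate("data.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t vltyp = H5Tvlen_create(H5T_NATIVE_UCHAR);   /* VL byte sequence */
    hsize_t dims[1] = {1};
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "packets", vltyp, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    hvl_t elem;
    elem.len = clen;                                  /* compressed size */
    elem.p   = cbuf;
    herr_t status = H5Dwrite(dset, vltyp, H5S_ALL, H5S_ALL,
                             H5P_DEFAULT, &elem);

    H5Dclose(dset); H5Sclose(space); H5Tclose(vltyp); H5Fclose(file);
    free(cbuf);
    return status < 0 ? -1 : 0;
}

On read-back you would call uncompress(), which needs to know the
original size, so in practice you would store the uncompressed length
next to each blob (in a compound type, for example).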

-- 
Francesc Alted
