I haven't looked very closely at how memory (file offsets) is allocated, but...

If chunking is used, HDF5 maintains a table of chunks whose addresses are 
available, and I believe those addresses can be generated by the VFD so that 
chunks can be placed in some special manner (I'm fairly sure I saw code 
responsible for this).
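
To be concrete, by the chunked case I mean datasets created roughly like the 
sketch below (file/dataset names and chunk sizes are just placeholders), where 
the library keeps a per-chunk address index that the VFD sees as separate 
allocations:

    #include <hdf5.h>

    int main(void)
    {
        hid_t file = H5Fcreate("chunked.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        hsize_t dims[2]  = {1024, 1024};
        hsize_t chunk[2] = {64, 64};
        hid_t space = H5Screate_simple(2, dims, NULL);

        /* Chunked layout: each 64x64 chunk gets its own file address,
           tracked in the dataset's chunk index. */
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_chunk(dcpl, 2, chunk);

        hid_t dset = H5Dcreate2(file, "chunked_data", H5T_NATIVE_INT, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }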

If chunking is not used and a 'flat file' is created (in memory), is it still 
possible for the VFD to break datasets into pieces, or must they be linear? 
Presumably if N datasets are created, the VFD can mix them around a bit to fit 
free space in the file, but I'm wondering whether a single dataset has to be 
contiguous or not.
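
For the non-chunked case I'm thinking of something like the sketch below 
(again, names are placeholders). H5Dget_offset reports a single starting 
address for a contiguous dataset, which is what makes me suspect it occupies 
one linear extent unless the VFD can do something different underneath:

    #include <hdf5.h>
    #include <stdio.h>

    int main(void)
    {
        hid_t file = H5Fcreate("flat.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);

        hsize_t dims[1] = {1000000};
        hid_t space = H5Screate_simple(1, dims, NULL);

        /* Contiguous layout, allocated at creation time so the offset
           is defined before any data is written. */
        hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
        H5Pset_layout(dcpl, H5D_CONTIGUOUS);
        H5Pset_alloc_time(dcpl, H5D_ALLOC_TIME_EARLY);

        hid_t dset = H5Dcreate2(file, "flat_data", H5T_NATIVE_DOUBLE, space,
                                H5P_DEFAULT, dcpl, H5P_DEFAULT);

        /* A contiguous dataset has one starting offset in the file;
           for a chunked dataset this returns HADDR_UNDEF instead. */
        haddr_t offset = H5Dget_offset(dset);
        printf("dataset starts at file offset %llu\n", (unsigned long long)offset);

        H5Dclose(dset);
        H5Pclose(dcpl);
        H5Sclose(space);
        H5Fclose(file);
        return 0;
    }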

thanks

JB

--
John Biddiscombe,                            email:biddisco @ cscs.ch
http://www.cscs.ch/
CSCS, Swiss National Supercomputing Centre  | Tel:  +41 (91) 610.82.07
Via Cantonale, 6928 Manno, Switzerland      | Fax:  +41 (91) 610.82.82
