Hello,

I would like to know how HDF5 handles caching or delayed writing. For
instance, if I use a Table and add data with H5TBappend_records, should I
append one record at a time, or should I buffer several records and append
them in a single call? Which is better?
Does HDF5 buffer data itself before writing it to disk?
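
To make the question concrete, here is a minimal sketch of what I mean by
buffering before appending. The record struct, the table name "data", and
the batch size are hypothetical; the question is whether one call with N
records is better than N calls with one record each:

/* Hypothetical two-field record; the table "data" is assumed to have
 * already been created at the file root with H5TBmake_table. */
#include "hdf5.h"
#include "hdf5_hl.h"

typedef struct {
    double timestamp;
    double value;
} sample_t;

void append_batch(hid_t file_id, const sample_t *buf, hsize_t nrecords)
{
    size_t offsets[2] = { HOFFSET(sample_t, timestamp),
                          HOFFSET(sample_t, value) };
    size_t sizes[2]   = { sizeof(double), sizeof(double) };

    /* One call appends the whole buffered batch ... */
    H5TBappend_records(file_id, "data", nrecords, sizeof(sample_t),
                       offsets, sizes, buf);

    /* ... versus calling this once per sample, i.e. nrecords separate calls. */
}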

Also, is the Table API efficient? I have a lot of data to append every
second, spread across many separate Tables (about 5000 samples every 50 ms,
with one sample per Table).
Is it best to use Table, or should I use a lower-level API (for instance,
keeping the Dataset open so I avoid an open/close on every write)? A rough
sketch of what I have in mind follows.
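
Here is roughly what I mean by the lower-level alternative: keep the dataset
open, extend it, and write the new records directly with H5D calls. All the
names here are hypothetical; "dset_id" is assumed to be a chunked, 1-D,
extendible dataset opened once at startup, and "memtype" the compound type
for the sample struct above:

#include "hdf5.h"

herr_t append_raw(hid_t dset_id, hid_t memtype,
                  const void *buf, hsize_t nrecords, hsize_t *cur_size)
{
    hsize_t new_size = *cur_size + nrecords;
    hsize_t start    = *cur_size;

    /* Grow the dataset, then write the new records into the extended region. */
    if (H5Dset_extent(dset_id, &new_size) < 0) return -1;

    hid_t filespace = H5Dget_space(dset_id);
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &start, NULL,
                        &nrecords, NULL);

    hid_t memspace = H5Screate_simple(1, &nrecords, NULL);
    herr_t status  = H5Dwrite(dset_id, memtype, memspace, filespace,
                              H5P_DEFAULT, buf);

    H5Sclose(memspace);
    H5Sclose(filespace);
    *cur_size = new_size;
    return status;
}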

Thank you for your input,
Regards,
Guillaume.

