I'm wondering what techniques exist for avoiding data file corruption,
whether from concurrent writes or from process/machine death.  I am
currently employing serialized copy-on-write semantics, but this only
works for infrequent writes to small files.
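For context, a minimal sketch of the copy-on-write scheme described above, in Python (my reading of it, not necessarily the original poster's exact implementation): the new contents are written to a temporary file and atomically renamed over the original, so a crash mid-write never leaves a half-written data file behind.  The function name `atomic_overwrite` is illustrative.

```python
import os
import tempfile

def atomic_overwrite(path: str, data: bytes) -> None:
    """Copy-on-write style update of a single file.

    Write the new contents to a temporary file in the same directory,
    flush and fsync it, then atomically rename it over the original.
    A crash at any point leaves either the complete old file or the
    complete new file in place, never a torn mixture of the two.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force contents to disk before the rename
        os.replace(tmp_path, path)  # atomic on POSIX; also atomic on modern Windows
    except BaseException:
        os.unlink(tmp_path)  # clean up the partial temp file on failure
        raise
```

With HDF5 the same idea means rewriting a fresh `.h5` file and renaming it into place on every update, which is why the cost scales with file size and write frequency, as noted above.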

How do people get around these problems?  Perhaps there is a mechanism
that can sit on top of HDF5 to take care of this for me?

Many thanks

--
View this message in context: 
http://hdf-forum.184993.n3.nabble.com/hdf5-resilience-tp3179027p3179027.html
Sent from the hdf-forum mailing list archive at Nabble.com.

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
