Hi all,

I have searched for an answer, but none of the messages I found was
clear enough, so here it is again.
I have an application (long term AE monitoring) like this:
- 1 writer which constantly writes (appends data only!) to a few
tables; the data rate ranges from very low to quite high.
- 1 reader (seldom more, but it shouldn't matter) which needs to
process the data at the same time; people use the words "real time",
but all I need is a reasonably small delay in seeing "new" data
- the data structure is created and fixed at start
- copying the existing data to another file is not an option

Is it possible for the reader to read the "new" data? A "flush" from
the writer is acceptable, but no other time-consuming blocking...
I tried to play with caching and other settings, but nothing was very clear.
Simply closing and reopening the file in the reader seems to work (I see
the data that was present at the time of the open), but I need to be
sure it is safe.

In a very optimistic approach, appending data would NOT create
inconsistencies in the file... the metadata would just have to be
re-written to the file in a "clever" order...

Another relatively simple solution would be to control/synchronize
metadata access (write/read) externally, to ensure that what the reader
sees is always consistent; is that possible?
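By "control/synchronize" I mean something like the following
hypothetical sketch: an advisory file lock (POSIX `fcntl.flock`, so not
portable to Windows) held around each write+flush and around each
reader open, so that a reader never opens the file mid-update. The lock
file name is made up for illustration:

```python
# Hypothetical external-synchronization sketch: serialize writer updates
# and reader opens with an advisory file lock on a separate lock file.
import fcntl
import os
import tempfile

lockpath = os.path.join(tempfile.gettempdir(), "monitor.lock")

def with_lock(fn):
    """Run fn() while holding an exclusive advisory lock on lockpath."""
    with open(lockpath, "w") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            return fn()
        finally:
            fcntl.flock(lf, fcntl.LOCK_UN)

# Writer side would do its append + flush under the lock;
# the reader side would do its (re)open under the same lock.
result = with_lock(lambda: "appended-and-flushed")
```

Whether this is sufficient depends on what the question above is really
asking: whether HDF5 itself guarantees the on-disk file is consistent
between such locked updates.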

I'm new to hdf5, so any help/hints are appreciated.

BTW, I found this statement about netCDF, which describes exactly what
I need... Since netCDF is based on HDF5, I hope there is a simple
solution.

http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Limitations.html
"Finally, for classic and 64-bit offset files, concurrent access to a
netCDF dataset is limited. One writer and multiple readers may access
data in a single dataset simultaneously, but there is no support for
multiple concurrent writers."

Regards,
Gabriel

_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
