Dear list,

We have some use cases (in robotics and medical instrumentation) in which
we want to use HDF5 to communicate compound data structures between
different sub-systems. The idea is to use the "memory driver" (H5FD_CORE)
with standard HDF5 file operations to write a compound data structure on
one device, send the raw buffer over to another device, and read the
compound data structure back from the received buffer. (Our use cases
target "fast" communication with relatively "small" data structures, at
least compared to many HDF5 applications such as HPC.)

Some questions I have in this context:
- where can we find out exactly how many bytes the in-memory HDF5 compound
  data structure occupies?
- how can we make sure that all HDF5 data (raw data + metadata) is stored
  contiguously?
- do we have to use one buffer for the raw data and one for the metadata?
- how do we make sure that sending the raw buffer over does not lead to
  problems with differences between, say, little-endian and big-endian
  systems?
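For what it's worth, a minimal sketch of the round trip using h5py (the Python bindings over the same C API) under the assumption that the HDF5 "file image" facility (H5Fget_file_image in C, exposed as get_file_image() in h5py) is acceptable: the image is a single contiguous buffer holding both raw data and metadata, its length answers the "how many bytes" question, and the stored datatype is self-describing, so byte order is converted on read. The dataset name and the "pose" record fields below are purely illustrative.

```python
import io
import numpy as np
import h5py

# Illustrative compound type; '<f8' pins the stored byte order explicitly,
# and HDF5 records the byte order in the file, so the reader converts as
# needed regardless of the receiving machine's endianness.
pose_dtype = np.dtype([('x', '<f8'), ('y', '<f8'), ('theta', '<f8')])

# --- Sender: write into an in-memory file (core driver, no disk backing) ---
with h5py.File('sender.h5', 'w', driver='core', backing_store=False) as f:
    f.create_dataset('pose', data=np.array([(1.0, 2.0, 0.5)], dtype=pose_dtype))
    f.flush()
    # One contiguous buffer containing raw data + metadata together.
    image = f.id.get_file_image()

# len(image) is the exact number of bytes to transmit.
# ... send 'image' over your transport of choice ...

# --- Receiver: reopen the received byte buffer as an HDF5 file ---
with h5py.File(io.BytesIO(image), 'r') as g:
    received = g['pose'][0]
```

The same flow in C would be H5Pset_fapl_core() on the file-access property list, then H5Fget_file_image() on the sender and H5LTopen_file_image() on the receiver.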

We have already experimented with the H5FDdsm project
 <https://hpcforge.org/projects/h5fddsm/>
but it is built on full MPI middleware, which is often too big and slow
for some of our "realtime" use cases. The examples that come with the
project "work", but the documentation is not very clear about how exactly
they solve the above-mentioned problems of ours. (I am convinced that
their code _does_ solve our problem; I just can't find how...)

Any information or pointers to code snippets are highly appreciated! Thanks!

Best regards,

Herman Bruyninckx


_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org