I had to deal with such a problem, too, but with a slight difference: I had
to store a large number of values for the same timestamp, knowing that on
later reading of the data only one or a few of the values for a given
timestamp would be needed.

I chose to store the timestamp and each value in a dedicated dataset, with
only a "loose" coupling via the dataset index: the n-th timestamp in the
timestamp dataset relates to the n-th value in every other dataset.

This reduces the I/O on reading, since with the compound solution a
complete record has to be read even when only a few of the record's
values are needed.
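A minimal sketch of that index-coupled layout, assuming h5py; the file
name and the dataset names "timestamps", "pressure" and "temperature"
are only illustrative:

```python
import h5py
import numpy as np

with h5py.File("measurements.h5", "w") as f:
    # One dataset for the timestamps...
    f.create_dataset("timestamps", data=np.array([100.0, 101.0, 102.0]))
    # ...and one dedicated dataset per value, coupled only by index:
    # the n-th entry of each value dataset belongs to the n-th timestamp.
    f.create_dataset("pressure", data=np.array([1013.2, 1012.8, 1012.5]))
    f.create_dataset("temperature", data=np.array([21.3, 21.4, 21.6]))

with h5py.File("measurements.h5", "r") as f:
    n = 1
    # Reading one value touches only that one dataset; nothing of the
    # other value datasets is read, unlike with a compound record.
    t = f["timestamps"][n]
    p = f["pressure"][n]
```

With chunked/compressed datasets the same idea applies per chunk: only
the chunks of the datasets you actually index into are read.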


_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
