Hello,
I have several MPI processes, each generating an unknown number of
values, and I want to write these values into an HDF5 file. Since I
don't know how many values each process will generate, I cannot use one
single big dataset; instead I have to use a separate chunked dataset for
each process. That is, every process needs access only to its own
dataset and doesn't care about the others. Unfortunately, I'm forced to
call operations such as H5Dcreate collectively. Is there a way to create
and write a dataset from only one process, if I know that no other
process will use it?
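
Here is roughly what I have to do now (a minimal sketch; the file name
"out.h5", the "rank_%d" dataset names, and the chunk size are just
placeholders I made up):

#include <mpi.h>
#include <hdf5.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file with the MPI-IO driver (parallel HDF5 build). */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("out.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* Chunked 1-D dataset with an unlimited maximum extent, so it
       can grow as values are generated. */
    hsize_t dims[1]    = {0};
    hsize_t maxdims[1] = {H5S_UNLIMITED};
    hsize_t chunk[1]   = {1024};
    hid_t space = H5Screate_simple(1, dims, maxdims);
    hid_t dcpl  = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 1, chunk);

    /* H5Dcreate is collective, so every rank has to take part in
       creating *every* rank's dataset, even though each rank will
       only ever write to its own. */
    for (int r = 0; r < nprocs; r++) {
        char name[32];
        snprintf(name, sizeof name, "rank_%d", r);
        hid_t dset = H5Dcreate(file, name, H5T_NATIVE_DOUBLE, space,
                               H5P_DEFAULT, dcpl, H5P_DEFAULT);
        H5Dclose(dset);
    }

    H5Pclose(dcpl);
    H5Sclose(space);
    H5Pclose(fapl);
    H5Fclose(file);
    MPI_Finalize();
    return 0;
}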
What's much worse, the H5Dset_extent() / H5Dextend() operations must be
called collectively. But each of my processes generates data
independently, so if one process needs to extend its own dataset, the
other processes don't care and don't even know about it! How can I
solve this?
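
For illustration, this is the append pattern I would like each process
to run on its own data, independently of the others (again only a
sketch; the helper name append_values is made up, and dset is the
process's own dataset opened earlier). As far as I can tell, the
H5Dset_extent() call in the middle is exactly what must be collective:

void append_values(hid_t dset, const double *vals, hsize_t n)
{
    /* Query the current extent of this rank's own 1-D dataset. */
    hid_t fspace = H5Dget_space(dset);
    hsize_t old_size;
    H5Sget_simple_extent_dims(fspace, &old_size, NULL);
    H5Sclose(fspace);

    /* Grow the dataset by n elements. This is the call that parallel
       HDF5 requires to be collective, even though no other rank ever
       touches this dataset. */
    hsize_t new_size = old_size + n;
    H5Dset_extent(dset, &new_size);

    /* Write the new values into the appended region (independent
       I/O with the default transfer property list). */
    fspace = H5Dget_space(dset);
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, &old_size, NULL, &n, NULL);
    hid_t mspace = H5Screate_simple(1, &n, NULL);
    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, H5P_DEFAULT, vals);
    H5Sclose(mspace);
    H5Sclose(fspace);
}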
Please help,
Daniel