Hi again,

I solved the issue with the data structure - it is now known to every process, so I can properly create the groups and datasets collectively in all parallel processes.
Now I've got another problem:
When I create all the datasets collectively, they initially have size 0 and an "unlimited" maximum size (so the datasets are chunked), because at that point I don't know whether a given process will have any data to write. Later, when a process knows what data it should write to a given dataset, I try to call H5Dset_extent(), but it seems this function must also be called collectively (!)... That means each process would have to know the size of the data to be written by all the other processes. Am I correct?
How can I solve this?
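The only workaround I can think of so far (just a sketch, nothing I have verified; my_count is a placeholder for the number of elements a rank intends to write, and file_id / comm / mpi_size are set up as in the code earlier in this thread) is to exchange the intended sizes up front, e.g. with MPI_Allgather(), so that every rank knows every final extent and the H5Dset_extent() calls can then be made collectively with identical arguments. Assuming one 1-D dataset per rank, named after the rank and created with size 0 and unlimited maxdims:

// Share each rank's intended element count with every other rank
// (assuming hsize_t maps to unsigned long long on this platform).
std::vector<hsize_t> counts(mpi_size);
MPI_Allgather(&my_count, 1, MPI_UNSIGNED_LONG_LONG,
              counts.data(), 1, MPI_UNSIGNED_LONG_LONG, comm);

// Every rank now resizes every dataset with the same arguments,
// so each H5Dset_extent() call is properly collective.
for (int r = 0; r < mpi_size; ++r) {
    hid_t dset_id = H5Dopen(file_id, std::to_string(r).c_str(), H5P_DEFAULT);
    hsize_t new_size[1] = { counts[r] };
    H5Dset_extent(dset_id, new_size);
    H5Dclose(dset_id);
}

But that adds an extra MPI communication step just to satisfy the collective requirement, hence my question below.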

I found a very similar issue described a long time ago (December 2012) here:
https://lists.hdfgroup.org/pipermail/hdf-forum_lists.hdfgroup.org/2012-December/006322.html

and my current problem is described precisely in this message from that thread:
https://lists.hdfgroup.org/pipermail/hdf-forum_lists.hdfgroup.org/2012-December/006337.html

Further on in that thread someone said: "This is a limitation currently in the HDF5 standard that we plan to relax in the future."
Has this been solved by now (after 5 years of development)?

Best regards,
Rafal


On 2017-09-29 at 13:07, Rafal Lichwala wrote:
Hi Jarom, Hi All,

Thank you very much for the concrete answer! But...
What about the case where a given process (which has no data to write) does not know the structure of the given dataset (no data => no information about its structure), so it cannot construct the proper datatype for the collective H5Dwrite() call - even if the "space" is properly marked with H5Sselect_none()? It seems H5Dwrite() also requires the datatype to be identical in all processes during a collective call... I cannot use H5Tcommit() to share the datatype between processes via the file, because that function is collective as well...
Can you see any solution to this problem?
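The only idea I can see myself (not verified) is to take the datatype from the dataset object itself, since every rank created or opened the dataset collectively anyway; in this sketch the dataset name "results" is just a placeholder:

// Even a rank with nothing to write can recover the exact datatype
// stored in the file instead of rebuilding it locally.
hid_t dset_id  = H5Dopen(file_id, "results", H5P_DEFAULT);
hid_t dtype_id = H5Dget_type(dset_id);
// ... pass dtype_id to the collective H5Dwrite() together with an
// empty selection (H5Sselect_none) on this rank ...
H5Tclose(dtype_id);
H5Dclose(dset_id);

Would that be a valid approach?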

Best regards,
Rafal


On 2017-09-28 at 16:10, Nelson, Jarom wrote:
herr_t H5Sselect_none(hid_t space_id);
https://support.hdfgroup.org/HDF5/hdf5-quest.html#par-nodata
https://www.hdfgroup.org/2015/08/parallel-io-with-hdf5/
https://support.hdfgroup.org/HDF5/PHDF5/
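
Roughly, a rank that has nothing to write still makes the collective call, just with an empty selection. An untested sketch of that pattern (assuming dset_id is already open on every rank and, for the sake of the example, the dataset holds native ints):

// This rank selects zero elements but still joins the collective H5Dwrite().
hid_t fspace_id = H5Dget_space(dset_id);
H5Sselect_none(fspace_id);               // nothing selected in the file
hid_t mspace_id = H5Scopy(fspace_id);
H5Sselect_none(mspace_id);               // nothing selected in memory either

hid_t xfer_id = H5Pcreate(H5P_DATASET_XFER);
H5Pset_dxpl_mpio(xfer_id, H5FD_MPIO_COLLECTIVE);

int dummy = 0;                           // never dereferenced: zero elements selected
H5Dwrite(dset_id, H5T_NATIVE_INT, mspace_id, fspace_id, xfer_id, &dummy);

H5Pclose(xfer_id);
H5Sclose(mspace_id);
H5Sclose(fspace_id);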

Jarom

-----Original Message-----
From: Hdf-forum [mailto:hdf-forum-boun...@lists.hdfgroup.org] On Behalf Of Rafal Lichwala
Sent: Thursday, September 28, 2017 6:52 AM
To: hdf-forum@lists.hdfgroup.org
Subject: Re: [Hdf-forum] high level API for parallel version of HDF5 library

Hi,

Thank you for the answer and the example code.
Creating the metadata (groups, datasets) is clear now and works fine, but I have one last doubt: what if I'm running 4 MPI processes but only 3 of them have data to write to a given dataset?
Since the H5Dwrite() call is collective, my program hangs...
How can I solve this?

Regards,
Rafal



On 2017-09-27 at 22:50, Nelson, Jarom wrote:
Calls that affect the metadata need to be collective so that each
process has a consistent view of what the file metadata should be.

https://support.hdfgroup.org/HDF5/doc/RM/CollectiveCalls.html

Something like this (or the attached):

plist_id = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(plist_id, comm, info);
H5Pset_all_coll_metadata_ops(plist_id, true);

file_id = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
H5Pclose(plist_id);

// Every rank creates every group (not just its own), so the collective
// metadata calls see identical arguments on all processes.
for (int procid = 0; procid < mpi_size; ++procid) {
    hid_t gr_id = H5Gcreate(file_id, std::to_string(procid).c_str(),
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    H5Gclose(gr_id);
}

H5Fclose(file_id);

-----Original Message-----
From: Hdf-forum [mailto:hdf-forum-boun...@lists.hdfgroup.org] On Behalf Of Rafal Lichwala
Sent: Wednesday, September 27, 2017 12:32 AM
To: hdf-forum@lists.hdfgroup.org
Subject: Re: [Hdf-forum] high level API for parallel version of HDF5 library

Hi Barbara, Hi All,

Thank you for your answer. The H5TBmake_table() call is clear now, but...

H5Gcreate() is not a high-level API call, is it?

So why can't I use it in parallel processes?

Maybe I'm just doing something wrong, so could you please show me a short example of how to create a set of groups (one named after each process number) when running 4 parallel MPI processes? You can limit the example code to just the sequence of HDF5 calls...

My current code works fine for a single process, but when I run it with 2 (or more) parallel processes the resulting file is corrupted:

plist_id = H5Pcreate(H5P_FILE_ACCESS);
H5Pset_fapl_mpio(plist_id, comm, info);
H5Pset_all_coll_metadata_ops(plist_id, true);
file_id = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
H5Pclose(plist_id);
hid_t gr_id = H5Gcreate(file_id, std::to_string(procid).c_str(),
                        H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
H5Gclose(gr_id);
H5Fclose(file_id);

Best regards,

Rafal

On 2017-09-25 at 22:20, Barbara Jones wrote:

> Hi Rafal,
>
> No, the HDF5 High Level APIs are not supported in the parallel version of HDF5.
>
> -Barbara
> h...@hdfgroup.org
>
> -----Original Message-----
> From: Hdf-forum [mailto:hdf-forum-boun...@lists.hdfgroup.org] On Behalf Of Rafal Lichwala
> Sent: Monday, September 18, 2017 8:53 AM
> To: hdf-forum@lists.hdfgroup.org
> Subject: [Hdf-forum] high level API for parallel version of HDF5 library
>
> Hi,
>
> Can I use high-level API function calls (H5TBmake_table(...)) in the parallel version of the HDF5 library?
> There are no property list parameters for those function calls...
>
> Regards,
> Rafal

_______________________________________________
Hdf-forum is for HDF software users discussion.
Hdf-forum@lists.hdfgroup.org
http://lists.hdfgroup.org/mailman/listinfo/hdf-forum_lists.hdfgroup.org
Twitter: https://twitter.com/hdf5
