Hi,
Using HDF5 1.8.5 and 1.8.6-pre2 with Open MPI 1.4.3 on Linux (RHEL 4 and RHEL 5).
This is a case where the HDF5 operations are not using MPI, but each MPI
job/process builds an .h5 file exclusive to itself:
The create:
currentFileID = H5Fcreate(filePath.c_str(), H5F_ACC_TRUNC, H5P_DEFAULT,
H5P_DEFAULT);
succeeds, as do many subsequent file operations through the high-level (HL)
APIs: packet tables, tables, datasets, and so on.
Then, near the end of each process,
H5Fclose(currentFileID);
is called but never returns. A check for open objects reports that only the
one file object is open; no groups, datasets, or anything else. No other
software or process is acting on this .h5 file; it is named exclusively for
the one job it is associated with.
This is not an attempt at parallel HDF5 under MPI. In a separate scenario,
collective parallel HDF5 works just fine. The current issue is for people
who don't have (or don't want) a parallel file system, so I built a
coarse-grained MPI driver that runs independent jobs for them. Each job has
its own .h5 file opened with H5Fcreate(filePath.c_str(), H5F_ACC_TRUNC,
H5P_DEFAULT, H5P_DEFAULT);
Where should I look?
I'll try to make a small example test case for show and tell.
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org