Hello,
I’m trying to do a parallel write to an HDF5 file using HDF5 version 1.8.6 with OpenMPI version 1.4.3. A simple Fortran program that demonstrates the problem is attached. The example program attempts to write the “phim” array to an external file. The “phim” array is a function of position (IM, JM, and KM) and energy (IGM). The space domain is divided between two processors along the K-plane: processor 0 receives K=[1,4], and processor 1 receives K=[5,8]. The intention is to write a single file containing the combined space domains, with the dimensions ordered IM, JM, KM, IGM – the IM index changing most rapidly and the IGM index changing least rapidly.

The attached example code doesn’t appear to produce any errors, but the output is clearly incorrect. At IG=34, processor 0 correctly writes its data, but processor 1 does not, and beyond IG=34 neither processor writes the desired data. I suspect that I’m doing something wrong with my hyperslab selection; in particular, I wonder whether my specification of the “block” and “cnt” arrays is correct. My (non-example) code seems to work properly with a single processor, but fails with errors when I try splitting the space domain between two or more processors.

Any suggestions? Thanks!

Greg
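For reference, the index arithmetic I’m aiming for can be sketched as follows. This is a minimal Python sketch (not the attached Fortran program, which isn’t reproduced here); the function name `k_slab` and the concrete dimension values are made up for illustration. Note that in HDF5 hyperslab terms, if the per-rank block size equals the local extent, the count should be 1 in every dimension (or, equivalently, pass the local extents as count and omit block) – which is exactly the block/cnt distinction I suspect I’m getting wrong:

```python
def k_slab(rank, nranks, im, jm, km, igm):
    """Zero-based hyperslab offset and count for a split along K.

    Dimensions are in Fortran order (IM, JM, KM, IGM); each rank owns a
    contiguous band of K-planes and the full IM, JM, and IGM extents.
    """
    assert km % nranks == 0, "assumes K divides evenly across ranks"
    k_local = km // nranks
    offset = (0, 0, rank * k_local, 0)  # only the K offset differs per rank
    count = (im, jm, k_local, igm)      # local extents; HDF5 'count' would be
                                        # all 1s if these were passed as 'block'
    return offset, count

# Two ranks splitting the 8 K-planes from the example
# (IM=10, JM=10, IGM=50 chosen arbitrarily for illustration):
print(k_slab(0, 2, 10, 10, 8, 50))  # rank 0: K offset 0, 4 planes
print(k_slab(1, 2, 10, 10, 8, 50))  # rank 1: K offset 4, 4 planes
```

The key point is that only the K component of the offset varies across ranks; every other offset is zero and every count covers the full global extent in that dimension.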
hdfproblem.f90
_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org
