Hi Suman,

In Parallel HDF5, operations that modify file metadata -- including H5Dcreate -- must be collective: every process has to make the same call with the same arguments. Since each of your ranks calls H5Dcreate with a different dataset name, the behavior is undefined, and in practice only one process (out of many) ends up performing any write I/O, with whichever dataset name and data values that process happened to supply.

You could work around this by completely serializing the writes per process -- each rank in turn creates/reopens the file, writes its dataset, and closes the file -- but that would defeat the purpose of parallel I/O. The usual approach instead is to have all ranks collectively create every dataset, and then let each rank write independently to its own dataset.
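Something along these lines should work (an untested sketch, not your exact code: the file name, dataset sizes, and "/node%d" naming are illustrative; every rank collectively creates all datasets, then writes only its own):

```c
/* Sketch: collective dataset creation, independent per-rank writes.
 * Compile with e.g.:  h5pcc sketch.c -o sketch  and run under mpiexec. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <hdf5.h>

int main(int argc, char *argv[])
{
    int rank, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Open the file with the MPI-IO driver (a collective operation). */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("nodes.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);

    hsize_t dims[1] = { 4 };  /* per-rank dataset size (illustrative) */
    hid_t space = H5Screate_simple(1, dims, NULL);

    /* Metadata operations must be collective:
     * ALL ranks create ALL datasets, in the same order. */
    hid_t *dsets = malloc(nprocs * sizeof(hid_t));
    for (int i = 0; i < nprocs; i++) {
        char name[32];
        sprintf(name, "/node%d", i);
        dsets[i] = H5Dcreate(file, name, H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
    }

    /* Raw-data writes may be independent (the default transfer mode):
     * each rank writes only to its own dataset. */
    int data[4];
    for (int i = 0; i < 4; i++)
        data[i] = rank;
    H5Dwrite(dsets[rank], H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
             H5P_DEFAULT, data);

    /* Close everything collectively. */
    for (int i = 0; i < nprocs; i++)
        H5Dclose(dsets[i]);
    free(dsets);
    H5Sclose(space);
    H5Fclose(file);
    MPI_Finalize();
    return 0;
}
```

Note that if the dataset size differs per rank, every rank needs to know every other rank's size before the collective H5Dcreate loop -- exchanging the sizes with MPI_Allgather first is one way to do that.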

Jonathan

On 12/11/2012 12:49 AM, Suman Vajjala wrote:
Hello All,

I am trying to write an HDF5 file in parallel in a specific format. Each rank has variable-length data, and each process's data has to be written as a separate dataset, e.g. /data_i (where i is the rank). I have used the following code, but without success:

/////Code
    data = (int *) malloc(sizeof(int) * dimsf[0] * dimsf[1]);
    for (i = 0; i < dimsf[0] * dimsf[1]; i++) {
        data[i] = mpi_rank;
    }

    plist_id = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(plist_id, comm, info);

    file_id = H5Fcreate(H5FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT, plist_id);
    H5Pclose(plist_id);

    plist_id = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(plist_id, H5FD_MPIO_INDEPENDENT);

    /*
     * Create the dataspace for the dataset.
     */
    filespace = H5Screate_simple(RANK, dimsf, NULL);

    i = mpi_rank;
    sprintf(name, "/node%d", i);
    dset_id = H5Dcreate(file_id, name, H5T_NATIVE_INT, filespace,
                        H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    status = H5Dwrite(dset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL,
                      H5P_DEFAULT, data);
    H5Dclose(dset_id);

  Can you please shed light on how to do it?

Regards
Suman Vajjala




_______________________________________________
Hdf-forum is for HDF software users discussion.
[email protected]
http://mail.hdfgroup.org/mailman/listinfo/hdf-forum_hdfgroup.org

