Hello!

Here is my context: I compute velocities on a 3D mesh. The mesh is divided into subdomains (cubes), one per MPI job. I can dump the whole domain thanks to the parallel HDF5 routines.
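
For reference, here is a minimal sketch of the kind of collective dump that works for me on the whole domain (the file name, the block sizes and the simple 1D decomposition along z are only placeholders to keep the example short; my real code splits the domain into cubes):

program dump_domain
  use mpi
  use hdf5
  implicit none

  ! Local block size and 1D decomposition along z: placeholders for the example
  integer, parameter :: nx = 8, ny = 8, nzloc = 4
  integer :: comm, info, myrank, nprocs, mpierr, hdferr
  integer(hid_t) :: fapl, file_id, filespace, memspace, dset_id, dxpl
  integer(hsize_t) :: dims_glob(3), dims_loc(3), offset(3)
  real(kind=8) :: vel(nx, ny, nzloc)

  call MPI_Init(mpierr)
  comm = MPI_COMM_WORLD
  info = MPI_INFO_NULL
  call MPI_Comm_rank(comm, myrank, mpierr)
  call MPI_Comm_size(comm, nprocs, mpierr)
  vel = real(myrank, kind=8)                     ! dummy velocity field

  call h5open_f(hdferr)

  ! Create the file collectively with the MPI-IO driver
  call h5pcreate_f(H5P_FILE_ACCESS_F, fapl, hdferr)
  call h5pset_fapl_mpio_f(fapl, comm, info, hdferr)
  call h5fcreate_f("domain.h5", H5F_ACC_TRUNC_F, file_id, hdferr, access_prp=fapl)
  call h5pclose_f(fapl, hdferr)

  ! One 3D dataset for the whole domain
  dims_glob = (/ int(nx, hsize_t), int(ny, hsize_t), int(nzloc, hsize_t)*nprocs /)
  call h5screate_simple_f(3, dims_glob, filespace, hdferr)
  call h5dcreate_f(file_id, "velocity", H5T_NATIVE_DOUBLE, filespace, dset_id, hdferr)

  ! Each rank selects its own block in the file dataspace
  dims_loc = (/ int(nx, hsize_t), int(ny, hsize_t), int(nzloc, hsize_t) /)
  offset   = (/ 0_hsize_t, 0_hsize_t, int(nzloc, hsize_t)*myrank /)
  call h5sselect_hyperslab_f(filespace, H5S_SELECT_SET_F, offset, dims_loc, hdferr)
  call h5screate_simple_f(3, dims_loc, memspace, hdferr)

  ! Collective write: every rank has a non-empty piece here, so this works
  call h5pcreate_f(H5P_DATASET_XFER_F, dxpl, hdferr)
  call h5pset_dxpl_mpio_f(dxpl, H5FD_MPIO_COLLECTIVE_F, hdferr)
  call h5dwrite_f(dset_id, H5T_NATIVE_DOUBLE, vel, dims_loc, hdferr, &
                  file_space_id=filespace, mem_space_id=memspace, xfer_prp=dxpl)

  call h5pclose_f(dxpl, hdferr)
  call h5sclose_f(memspace, hdferr)
  call h5sclose_f(filespace, hdferr)
  call h5dclose_f(dset_id, hdferr)
  call h5fclose_f(file_id, hdferr)
  call h5close_f(hdferr)
  call MPI_Finalize(mpierr)
end program dump_domain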

But I would also like to dump a slice of this domain via parallel HDF5 (I mean, with a collective write). In this case the slice intersects only a few subdomains, so for the other subdomains/MPI jobs the dimensions of their hyperslab part are zero.

For a collective I/O, every MPI job must go through the CALL h5dwrite_f(...,...,data,dims,...). Some MPI jobs (in fact, a lot!) have data and dims equal to zero.

My code crashes! HDF5 complains about

"H5Screate_simple(): zero sized dimension for non-unlimited dimension"

How can I handle this without creating a smaller communicator?

I can post my full code if it would help!
Thanks!

Stephane
  