On 10/28/2014 04:28 AM, Angel de Vicente wrote:
Hi Rob,

Rob Latham <[email protected]> writes:

On 10/27/2014 08:21 AM, Angel de Vicente wrote:
Hi,

is anyone aware of troubles with PHDF5 and Intel MPI? A test code that
reads an HDF5 file in parallel has trouble scaling when I run it with
Intel MPI, but no trouble if I run it, for example, with POE.

The curie web site says "Global File System" and "Lustre", so I don't know which
one you're using.

If it's lustre, maybe this will help you:

https://press3.mcs.anl.gov/romio/2014/06/12/romio-and-intel-mpi/

Thanks, but this issue is not happening on CURIE, but on MareNostrum,
which uses GPFS.


Good to know. While Intel MPI does not include any GPFS optimizations, there's really only one optimization that matters for GPFS writes: aligning ROMIO file domains to file system block boundaries.

Set the MPI-IO hint "striping_unit" to the GPFS block size.
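
At the plain MPI-IO level that looks roughly like the sketch below (not from this thread; the 4 MiB value and the file name are just placeholders, substitute your file system's actual block size):

    /* Rough sketch: pass the striping_unit hint through an MPI_Info object.
     * 4194304 (4 MiB) and "data.bin" are placeholder values. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Info info;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_unit", "4194304"); /* GPFS block size in bytes */

        MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY, info, &fh);
        /* ... collective reads/writes here ... */
        MPI_File_close(&fh);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }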

Setting MPI-IO hints through HDF5 requires property lists and some other gyrations. Here's a good example, except you would set different hints:

https://wickie.hlrs.de/platforms/index.php/MPI-IO#Adapting_HDF5.27s_MPI_I.2FO_parameters_to_prevent_locking_on_Lustre
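
A minimal sketch of that route (again with a placeholder block size and file name): attach the MPI_Info hints to a file access property list with H5Pset_fapl_mpio, then open the file with that list:

    /* Sketch only: hints via an HDF5 file access property list.
     * The block size and "test.h5" are assumptions, not values from the thread. */
    #include <hdf5.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        hid_t fapl, file;

        MPI_Init(&argc, &argv);

        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_unit", "4194304"); /* GPFS block size in bytes */

        fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);   /* MPI-IO driver + hints */

        file = H5Fopen("test.h5", H5F_ACC_RDONLY, fapl);
        /* ... H5Dread with a collective transfer property list ... */
        H5Fclose(file);

        H5Pclose(fapl);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }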

Determining the GPFS block size, if you don't know it already, is as simple as 'stat -f <mount point>' and reading off the "Block size" field.

==rob



--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA
