Kor,

> I am trying to incorporate support for parallel I/O in my HPX program. I
> make use of HDF5, which in turn uses MPI I/O for parallel I/O. To use
> HDF5 for parallel I/O, one thing I need to do is to pass the MPI
> communicator. I have trouble obtaining this communicator.
>
> I found hpx::util::mpi_environment::communicator(), which seems exactly
> what I need, but asserting hpx::util::mpi_environment::enabled() fails.
> It seems MPI is not initialized (yet?).
>
> The command line looks like this:
>
> srun \
>      --ntasks $nr_tasks \
>      --cpus-per-task=$nr_cpus_per_task \
>      --cores-per-socket=$nr_cores_per_socket \
>      --kill-on-bad-exit \
>      $my_command $my_arguments \
>          --hpx:ini="hpx.parcel.mpi.enable=1" \
>          --hpx:ini="hpx.os_threads=$nr_cores_per_socket" \
>          --hpx:bind=$cpu_binding
>
> My questions are:
> - Can I assume that MPI is initialized after the HPX runtime is?

Generally yes - provided you have enabled the MPI parcelport at configuration
time and you launch the executable through a driver that sets up MPI (either
mpiexec/mpirun or - if your SLURM installation is configured for it - srun).
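For instance, you can verify this from inside hpx_main(), i.e. once the
runtime is up. A minimal sketch (the mpi_environment header path is an
assumption and may differ in your HPX version):

    #include <hpx/hpx_init.hpp>
    // Header path is an assumption - adjust to wherever mpi_environment
    // lives in your HPX version:
    #include <hpx/plugins/parcelport/mpi/mpi_environment.hpp>

    #include <iostream>

    int hpx_main(int argc, char* argv[])
    {
        // The HPX runtime is running here; with the MPI parcelport enabled
        // the MPI environment should have been initialized as well.
        if (hpx::util::mpi_environment::enabled())
            std::cout << "MPI parcelport is active\n";
        else
            std::cout << "MPI parcelport is not active\n";

        return hpx::finalize();
    }

    int main(int argc, char* argv[])
    {
        return hpx::init(argc, argv);
    }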

> - How can I obtain the MPI communicator HPX uses? Or should I maybe manage
> a communicator specifically for I/O myself?

hpx::util::mpi_environment::communicator() gives you the communicator HPX
uses. I would, however, suggest that you duplicate it and use your own
communicator for I/O to avoid interference with HPX.
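For example (a sketch only - the function name create_parallel_file is made
up and the mpi_environment header path is an assumption; the HDF5 calls are
the standard parallel HDF5 API):

    // Header path is an assumption - adjust for your HPX version:
    #include <hpx/plugins/parcelport/mpi/mpi_environment.hpp>

    #include <hdf5.h>
    #include <mpi.h>

    hid_t create_parallel_file(char const* filename)
    {
        // Duplicate the communicator HPX uses rather than passing it to
        // HDF5 directly, so the I/O traffic stays off the parcelport's
        // communicator.
        MPI_Comm io_comm = MPI_COMM_NULL;
        MPI_Comm_dup(hpx::util::mpi_environment::communicator(), &io_comm);

        // File access property list selecting MPI-IO on the duplicate.
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, io_comm, MPI_INFO_NULL);

        // Collective file creation across all ranks in io_comm.
        hid_t file = H5Fcreate(filename, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Pclose(fapl);
        // HDF5 duplicates the communicator internally, so the local
        // duplicate can be released once the property list is set up.
        MPI_Comm_free(&io_comm);

        return file;
    }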

> Thanks for any info / hints / remarks / examples!

I would try doing something like:

salloc -p <partition> -N <nodes> -n <localities> -c <cores> \
    mpirun <your_executable>

HTH
Regards Hartmut
---------------
https://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx
