Dear all,

I am trying to add support for parallel I/O to my HPX program. I use 
HDF5, which in turn relies on MPI I/O. To use HDF5 in parallel mode, 
one thing I need to do is pass it an MPI communicator, and I am having 
trouble obtaining one.

I found hpx::util::mpi_environment::communicator(), which seems to be 
exactly what I need, but asserting hpx::util::mpi_environment::enabled() 
fails. It seems MPI is not initialized (yet?).
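For reference, this is roughly what I tried (a sketch; the header name is from memory and may differ between HPX versions):

```cpp
#include <hpx/hpx_main.hpp>
// Header location is a guess; mpi_environment has moved between releases.
#include <hpx/modules/mpi_base.hpp>

#include <cassert>

int main()
{
    // This assertion fires: MPI does not appear to be initialized here.
    assert(hpx::util::mpi_environment::enabled());

    // This is what I would like to hand to HDF5 later on:
    MPI_Comm comm = hpx::util::mpi_environment::communicator();

    // ... set up HDF5 file access with comm ...
    return 0;
}
```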

The command line looks like this:

srun \
     --ntasks $nr_tasks \
     --cpus-per-task=$nr_cpus_per_task \
     --cores-per-socket=$nr_cores_per_socket \
     --kill-on-bad-exit \
     $my_command $my_arguments \
         --hpx:ini="hpx.parcel.mpi.enable=1" \
         --hpx:ini="hpx.os_threads=$nr_cores_per_socket" \
         --hpx:bind=$cpu_binding

My questions are:
- Can I assume that MPI is initialized after the HPX runtime is?
- How can I obtain the MPI communicator HPX uses? Or should I maybe 
manage a communicator specifically for I/O myself?
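In case managing a communicator myself is the way to go, I imagine something along these lines (a sketch only, assuming MPI is already initialized at this point; H5Pset_fapl_mpio is the HDF5 call that takes the communicator):

```cpp
#include <hdf5.h>
#include <mpi.h>

// Sketch: duplicate a communicator dedicated to I/O from MPI_COMM_WORLD,
// so HDF5's collective operations do not interfere with other traffic,
// and create a file with it.
hid_t open_parallel_file(char const* name)
{
    MPI_Comm io_comm;
    MPI_Comm_dup(MPI_COMM_WORLD, &io_comm);

    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, io_comm, MPI_INFO_NULL);

    hid_t file = H5Fcreate(name, H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
    H5Pclose(fapl);
    return file;
}
```

Would that be safe, or does it conflict with HPX's own use of MPI?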

Thanks for any info / hints / remarks / examples!

Kor
_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
