On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet <gil...@rist.or.jp>
wrote:

> FWIW,
>
> there has been one attempt to set the OMP_* environment variables within
> Open MPI, but it was abandoned because it caused crashes with a prominent
> commercial compiler.
>
> also, I'd like to clarify that Open MPI does bind MPI tasks (i.e.
> processes), and it is up to the OpenMP runtime to bind the OpenMP threads
> to the resources made available by Open MPI to the MPI task.
>
> in this case, that means Open MPI will bind an MPI task to 7 cores (for
> example, cores 7 to 13), and it is up to the OpenMP runtime to bind each
> of the 7 OpenMP threads to one of the cores previously allocated by Open
> MPI (for example, OMP thread 0 to core 7, OMP thread 1 to core 8, ...)
>

Indeed. Hybrid programming is a two-step tango. The harder task (in some
ways) is placing the MPI processes where I want them. With omplace I could
just force things (though probably not with Open MPI... I haven't tried it
yet), but I'd rather have a more "formulaic" way to place processes, since
that can be scripted. Now that I know about the ppr: syntax, I can see it'll
be quite useful!
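
For the curious, I'm picturing an invocation along these lines (a
hypothetical layout: one rank per socket with 7 cores each; the ppr:
numbers would need adjusting to the actual node topology, and I haven't
tested this myself yet):

    mpirun -np 2 --map-by ppr:1:socket:pe=7 --bind-to core \
        --report-bindings ./hybrid_hello

And here's a minimal sketch of a hybrid hello I'd use to verify where
everything lands (assumes Linux for sched_getcpu; built with something
like "mpicc -fopenmp hybrid_hello.c -o hybrid_hello"):

    /* hybrid_hello.c: report which core each MPI rank / OpenMP thread
     * is running on, to check the two levels of binding. */
    #define _GNU_SOURCE
    #include <sched.h>    /* sched_getcpu() */
    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            /* each thread prints the core it is currently running on */
            printf("rank %d thread %d of %d on cpu %d\n",
                   rank, omp_get_thread_num(), omp_get_num_threads(),
                   sched_getcpu());
        }

        MPI_Finalize();
        return 0;
    }

If the bindings are right, each rank's 7 threads should stay inside the 7
cores that --report-bindings shows for that rank.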

The other task is to get the OpenMP threads placed the "right way". I was
pretty sure KMP_AFFINITY=compact was correct (it worked once... and, yeah,
I'm using Intel at present; I figured I'd start there, then expand to
figure out GCC and PGI). I'll do some experimenting with the OMP_*
versions, as a more widely respected standard is always a good thing.
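
For the record, my reading of the OpenMP 4.0 spec (worth double-checking
against each vendor's docs) is that the rough OMP_* equivalent of
KMP_AFFINITY=compact for the 7-thread case above would be:

    export OMP_NUM_THREADS=7
    export OMP_PLACES=cores
    export OMP_PROC_BIND=close

That is, one thread per core, packed into consecutive places, which should
keep each rank's threads on the cores Open MPI hands it.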

For others looking into this, I highly recommend this page, which I found
after my query was answered here:

https://www.olcf.ornl.gov/kb_articles/parallel-job-execution-on-commodity-clusters/

At this point, I'm thinking I should start up an MPI+OpenMP wiki to map all
the combinations of compiler + MPI stack.

Or pray the MPI Forum and the OpenMP ARB merge so I can just look it all up
in a single Standard. :D

Thanks,
Matt
-- 
Matt Thompson

Man Among Men
Fulcrum of History
