On Wed, Jan 6, 2016 at 4:36 PM, Matt Thompson <fort...@gmail.com> wrote:

> On Wed, Jan 6, 2016 at 7:20 PM, Gilles Gouaillardet <gil...@rist.or.jp>
> wrote:
>
>> FWIW,
>>
>> there has been one attempt to set the OMP_* environment variables within
>> Open MPI, and it was aborted
>> because it caused crashes with a prominent commercial compiler.
>>
>> Also, I'd like to clarify that Open MPI does bind MPI tasks (i.e.,
>> processes), and it is up to the OpenMP runtime to bind the OpenMP threads
>> to the resources made available by Open MPI to each MPI task.
>>
>> In this case, that means Open MPI will bind an MPI task to 7 cores (for
>> example, cores 7 to 13), and it is up to the OpenMP runtime to bind each
>> of the 7 OpenMP threads to one of the cores previously allocated by Open
>> MPI (for example, OMP thread 0 to core 7, OMP thread 1 to core 8, ...)
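>>
>> As a minimal illustration (assuming a recent Open MPI; the pe= modifier
>> and its exact syntax vary a bit between versions), something like
>>
>>     mpirun -np 4 --map-by slot:pe=7 ./a.out
>>
>> should give each of the 4 MPI tasks its own set of 7 cores.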
>>
>
> Indeed. Hybrid programming is a two-step tango. The harder task (in some
> ways) is placing the MPI processes where I want them. With omplace I could
> just force things (though probably not with Open MPI... I haven't tried it
> yet), but I'd rather have a more "formulaic" way to place processes, since
> then you can script it. Now that I know about the ppr: syntax, I can see
> it'll be quite useful!
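>
> As a sketch (assuming a recent Open MPI, and I still need to verify the
> exact syntax), I'm imagining something like
>
>     mpirun --map-by ppr:2:socket:pe=7 ./hybrid.x
>
> to get 2 processes per socket, each bound to its own 7 cores.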
>
> The other task is to get the OpenMP threads placed the "right way". I was
> pretty sure KMP_AFFINITY=compact was correct (it worked once... and, yeah,
> I'm using Intel at present; I figured I'd start there, then expand to
> figure out GCC and PGI). I'll do some experimenting with the OMP_*
> variables, as a more widely respected standard is always a good thing.
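>
> (From what I've read so far, the portable equivalent of
> KMP_AFFINITY=compact should be something like
>
>     export OMP_NUM_THREADS=7
>     export OMP_PROC_BIND=close
>     export OMP_PLACES=cores
>
> though that's exactly what I plan to verify across compilers.)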
>
> For others with inquiries into this, I highly recommend this page I found
> after my query was answered here:
>
>
> https://www.olcf.ornl.gov/kb_articles/parallel-job-execution-on-commodity-clusters/
>
> At this point, I'm thinking I should start up an MPI+OpenMP wiki to map
> out all the combinations of compiler + MPI stack.
>
>
Just use Intel compilers, OpenMP, and MPI.  Problem solved :-)

(I work for Intel and the previous statement should be interpreted as a
joke, although Intel OpenMP and MPI interoperate as well as any
implementations of which I am aware.)


> Or pray that the MPI Forum and OpenMP combine so I can just look in one
> standard. :D
>
>
echo "" > $OPENMP_STANDARD # critical step
cat $MPI_STANDARD $OPENMP_STANDARD > $HPC_STANDARD

More seriously, hybrid programming sucks.  Just use MPI-3 and exploit your
coherence domain via MPI_Win_allocate_shared.  That way, you won't have to
mix runtimes, suffer mercilessly because of opaque race conditions in
thread-unsafe libraries, or reason about a bolt-on pseudo-language that
replicates features found in ISO languages without a well-defined
interoperability model.  For example, what is the interoperability between
OpenMP 4.5 threads/atomics and C++11 threads/atomics, C11 threads/atomics,
or Fortran 2008 concurrency features (e.g. coarrays)?  Nobody knows outside
of "don't do that".  How about OpenMP parallel regions inside code that
runs in a POSIX, C11, or C++11 thread?  Good luck.  I've been trying to
solve the latter problem for years and have made very little progress as
far as the spec goes.
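
To make that concrete, here is a minimal sketch of the MPI-3 shared-memory
idiom I mean (illustrative only: error checking omitted, and the array size
and fill pattern are made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* One communicator per coherence domain (i.e., per node). */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);

    int nrank, nsize;
    MPI_Comm_rank(nodecomm, &nrank);
    MPI_Comm_size(nodecomm, &nsize);

    /* Rank 0 allocates the whole array; the others contribute 0 bytes. */
    const MPI_Aint n = 1000000;  /* made-up size */
    double *base;
    MPI_Win win;
    MPI_Win_allocate_shared((nrank == 0) ? n * (MPI_Aint)sizeof(double) : 0,
                            sizeof(double), MPI_INFO_NULL, nodecomm,
                            &base, &win);

    /* Everyone gets a load/store pointer to rank 0's segment. */
    MPI_Aint qsize;
    int qdisp;
    MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base);

    /* Each rank touches its own slice directly, thread-style. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    for (MPI_Aint i = nrank; i < n; i += nsize)
        base[i] = (double)i;
    MPI_Win_sync(win);       /* make local stores visible */
    MPI_Barrier(nodecomm);   /* wait for everyone's stores */
    MPI_Win_unlock_all(win);

    if (nrank == 0) printf("base[42] = %f\n", base[42]);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}

No OpenMP runtime anywhere, and every rank still gets plain load/store
access to the same node-local memory.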

Related work:
- http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf
- http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf

Do not feed the trolls ;-)

Jeff

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
