On Wed, Feb 14, 2018 at 5:36 AM, Mark Adams <mfad...@lbl.gov> wrote:

>> > We have been tracking down what look like compiler bugs, and we have
>> > only looked at peak performance to make sure we are not wasting our
>> > time with threads.
>>
>>    You are wasting your time. There are better ways to deal with global
>> metadata than with threads.
> OK, while I agree with Barry, let me just add this for Baky's benefit if
> nothing else.
> You can write efficient code with thread programming models (data shared
> by default), but a thread PM does not help in developing the good data
> models that are required for efficient programs. And you can write crappy
> code with MPI shared memory: while a good start, just putting your shared
> data in an MPI shared memory window will not make your code faster.
> Experience indicates that, in terms of programmer resources, thread
> models are generally less efficient. Threads are a pain in the long run.
> While this experience (PETSc/hypre fails when going from -O1 to -O2 on
> KNL with --with-openmp=1, even on flat MPI runs) is only anecdotal, and
> HPC is going to involve pain no matter what you do, this may be an
> example of threads biting you.
> It is easier for everyone, compiler writers and programmers alike, to
> reason about a program where threads live in their own address space.
> You need to decompose your data at a fine level to get good performance
> anyway, and you can use MPI shared memory when you really need it. I
> wish Chombo would get rid of OpenMP, but that is not likely to happen
> any time soon.

Your point about data decomposition is a good one. Even if you want to run
with threads, you must decompose your data intelligently to get good
performance. Can't you do the MPI shared-memory work and still pass it off
as work necessary for threading anyway?


> Mark
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
