On May 10 2010, Kawashima wrote:

Because MPI_THREAD_FUNNELED/SERIALIZED doesn't restrict the other threads
from calling functions other than those of the MPI library, the code below
is not thread-safe if malloc is not thread-safe and MPI_Allreduce calls
malloc.

   int is_master;

   #pragma omp parallel private(is_master)
   {
       MPI_Is_thread_main(&is_master);  /* nonzero only on the main thread */
       if (is_master) {        /* master (main) thread */
           MPI_Allreduce(...);
       } else {                /* other threads */
           /* work that calls malloc */
       }
   }

That's one case.  It also applies to ordinary memory accesses, where the
other threads might touch data that overlaps the buffers used by the
MPI_Allreduce.  That's probably impossible for the blocking MPI calls, but
it isn't for the one-sided communication ones, and probably not for the
non-blocking ones.  Mixing non-blocking (and, worse, one-sided)
communication and threading is a nightmare area.
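
To make the non-blocking case concrete, here is the sort of thing I mean
(a sketch only; the buffer, count, tag and source are made up).  Even if
only the main thread ever touches MPI, a receive that is still in flight
races with the ordinary stores of the other threads into the same buffer:

   #include <mpi.h>

   #define N 1024

   void overlap_hazard(void)
   {
       int buf[N];              /* shared by every thread in the region */
       MPI_Request req;

       #pragma omp parallel
       {
           int is_main;
           MPI_Is_thread_main(&is_main);
           if (is_main) {
               MPI_Irecv(buf, N, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &req);
               /* ... other work ... */
               MPI_Wait(&req, MPI_STATUS_IGNORE);
           } else {
               buf[0] = 42;     /* may overlap the receive: a data race */
           }
       }
   }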

On other systems, a completely different (and seriously incompatible, at
both the syntactic and semantic levels) set of libraries is used.  E.g. AIX.

Sorry, I don't know these issues well.
Do you mean the case I wrote above about malloc?

No.  You have to compile using a different path if you want threading,
and that generates incompatible code (in a very few ways, but ones
that hit my users).

In C, the omp parallel region ends with the block that follows the
directive.  So I think that code would call MPI_Allreduce once per process.
# In Fortran, an omp end parallel directive may be needed to end the
# parallel region.  But I don't know Fortran well, sorry.

You're right.  I had got myself confused - what I was saying is true
for Fortran coarrays but not for OpenMP (in either C or Fortran)!

I can't comment on that, though I doubt it's quite that simple.  There's
a big difference between MPI_THREAD_FUNNELED and MPI_THREAD_SERIALIZED
in implementation impact.

I can't imagine a difference between those two, unless the MPI library
uses something thread-local.  Ah, there may be something on OSes that I
don't know....

See my other message.  It's evil.  It's also rare, and I don't know how
widespread it is today.
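
For what it's worth, the user-visible part of the difference is easy to
show (a sketch only, with made-up reduction arguments).  FUNNELED promises
that only the thread that initialised MPI will make MPI calls; SERIALIZED
allows any thread to make them, one at a time, so an implementation can
no longer simply keep its state in thread-local storage, for instance.

   #include <mpi.h>

   void levels_sketch(int *in, int *out)
   {
       /* Legal under MPI_THREAD_FUNNELED: only the master (main)
          thread ever calls MPI. */
       #pragma omp parallel
       {
           #pragma omp master
           MPI_Allreduce(in, out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
       }

       /* Needs MPI_THREAD_SERIALIZED: whichever thread reaches the
          single construct makes the call, and it need not be the
          main thread. */
       #pragma omp parallel
       {
           #pragma omp single
           MPI_Allreduce(in, out, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
       }
   }

That only shows the user side, of course; the implementation impact I was
referring to is a separate matter.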


Regards,
Nick Maclaren.
