I am also not sure MPI_THREAD_MULTIPLE works anyway. I attempted an alternative approach: protecting all calls to objects with a shared communicator by a mutex, and I ran into an odd issue: the TrilinosWrappers::MPI::Vector::Vector() constructor seems to be causing segfaults -- I do not see how this could be the user's fault since there are no input parameters. This happens even when I run with only one thread, unless I switch back to MPI_THREAD_SERIALIZED. I was able to verify that my MPI implementation can provide MPI_THREAD_MULTIPLE. As I am somewhat new to MPI, I think I am at the end of what I can try.

I don't know about the segfault -- a backtrace would be useful.

But I will say that the approach with the mutex isn't going to work. The mutex only ensures that a single thread executes your code at any given time on one machine; it does not ensure that two operations happen in the same order on two machines. And even if you could ensure that, I think you would still have to use a parallel mutex (of which we have one in namespace Utilities::MPI).

In any case, the right way to deal with multiple threads communicating on the same communicator is to ensure that each message is matched on both sides via separate 'tag' values in all MPI communications. The analogy to consider is that an MPI communicator is a postal service that only knows about street addresses. If multiple people are sending letters from the same address at the same time, to multiple people at another address, each letter needs to include a name and/or reference number; otherwise you can't expect the receiver to be able to match an incoming letter to a specific open issue (e.g., an invoice). That's what the 'tag' is there for.

The problem, of course, is that you can't select the tags when you use packages like deal.II, PETSc, or Trilinos -- they have all hardcoded these tags in their calls to MPI somewhere deep down. As a consequence, you really cannot hope to use multiple threads reliably on the same communicator unless you handle all of the communication yourself.

Best
 W.

--
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 [email protected]
                           www: http://www.math.colostate.edu/~bangerth/


--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dealii/4a6783ee-4723-4483-a505-c7eb30ccd4f3%40colostate.edu.