Sorry about the top-posted reply; Outlook sucks.

Anyway, you're right that we should enable MULTI_THREADS by default.  What I
was saying is that while we discussed flipping other switches that would
result in those locks actually being used, I don't think we should do that at
this time.  Long term, the more I think about it, the more convinced I am that
we'll have to move the locks out of OPAL to get any real performance (i.e.,
upper-layer users will have to provide the right protection around their own
free lists).  But not today.
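
To make the free-list point concrete, here is a hypothetical sketch of what I
mean by moving the protection into the upper layer.  The names below are made
up for illustration and are not actual ompi code:

    /* Hypothetical sketch only -- none of these names exist in the tree.
     * Today the free list takes its own lock on every get/return.  The
     * idea is to drop that internal lock and have the owning upper layer
     * (e.g. a PML) serialize only the paths that can actually race with
     * async BTL callbacks. */
    static opal_mutex_t pml_frag_lock;      /* owned by the upper layer */
    static my_free_list_t pml_frag_list;    /* hypothetical, no internal lock */

    static my_frag_t *pml_get_frag(bool may_race_with_callback)
    {
        my_frag_t *frag;
        if (may_race_with_callback) {
            opal_mutex_lock(&pml_frag_lock);
            frag = my_free_list_get(&pml_frag_list);
            opal_mutex_unlock(&pml_frag_lock);
        } else {
            /* single-threaded / no-async path: no locking cost at all */
            frag = my_free_list_get(&pml_frag_list);
        }
        return frag;
    }

The point is that the caller knows which paths can race, so the lock cost only
shows up where it is actually needed.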

Brian

--
  Brian W. Barrett
  Scalable System Software Group
  Sandia National Laboratories
________________________________________
From: devel-boun...@open-mpi.org [devel-boun...@open-mpi.org] on behalf of 
Ralph Castain [r...@open-mpi.org]
Sent: Monday, December 10, 2012 11:49 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] [EXTERNAL] RFC: Enable thread support by default

On Dec 10, 2012, at 10:35 AM, "Barrett, Brian W" <bwba...@sandia.gov> wrote:

> On 12/10/12 11:25 AM, "Ralph Castain" <r...@open-mpi.org> wrote:
>
>>
>> On Dec 10, 2012, at 10:15 AM, "Barrett, Brian W" <bwba...@sandia.gov>
>> wrote:
>>
>>> On 12/8/12 7:59 PM, "Ralph Castain" <r...@open-mpi.org> wrote:
>>>
>>>> WHAT:    Enable both OPAL and libevent thread support by default
>>>>
>>>> WHY:      We need to support threaded operations for MPI-3, and for
>>>> MPI_THREAD_MULTIPLE.
>>>>               Enabling thread support by default is the only way to
>>>> ensure we fix all the problems.
>>>>
>>>> WHEN:   COB, Thurs Dec 13
>>>>
>>>>
>>>> This was a decision reached at the OMPI Developers meeting, so the RFC
>>>> is mostly just a "heads up" to everyone that this will happen. We spent
>>>> some time recently profiling the impact on performance and found it to
>>>> be significant: 100ns added to shared memory latency, and a similar
>>>> number for TCP message latency. However, without setting the support
>>>> "on" by default, we will never address those problems. Thus, the group
>>>> decided that we would enable support by default and begin a concerted
>>>> effort to reduce and/or remove the performance impact.
>>>
>>> Thinking about this on the way home Friday, I'm not sure we need to go
>>> quite that far.  I think we do want to enable MPI_THREAD_MULTIPLE by
>>> default so that all the locks are "on" by default.  I'm not sure we need
>>> to enable progress threads at this point.  The question is whether we
>>> take a top-down approach, where we turn on the locks all the time for
>>> everything (expensive) and then pare down what actually needs locking
>>> for async BTL callbacks, or whether we leave all the locking off by
>>> default (when the thread count == 1) and only turn on unconditional
>>> locks for the code paths that have to deal with async callbacks from
>>> the BTLs.  I'm split on the issue.
>>
>> I viewed this in a different light. The question of thread_multiple is a
>> separate one. From my perspective, if we say we are going to support
>> MPI-3's async progress, then I don't see how we avoid the OPAL thread
>> support being "on" all the time.
>>
>> Likewise, if the ORTE wireup methods have to support async behavior, then
>> we have to build the event lib with thread support.
>>
>> So it seems to me that the best path forward is to turn both "on" by
>> default, then learn how to live with that situation.
>
> It depends on what you mean by "on".  Thread support is always "on" these
> days, meaning that opal_mutex_lock does, in fact, have a mutex that
> locks/unlocks.  The question is what the value of opal_using_threads() is
> (i.e., is OPAL_THREAD_LOCK a lock or not?).  In some ways, it doesn't need
> to be (i.e., attributes still don't require the big attribute lock in
> MPI_THREAD_SINGLE).  The problem is that because we protect many of the
> base data structures internally (like free lists) instead of externally,
> it's hard to be thread safe for the small portions of the PML, OSC, and
> runtime components that need thread safety for progress without enabling
> thread safety in a whole lot of other places.


Ummm.... I'm afraid that OPAL_THREAD_LOCK defines to a no-op unless you
configure with --enable-opal-multi-threads. And since a lot of the code uses
the macro instead of the direct function call, that means thread support isn't
"on" these days in most places.

I'd suggest turning it all on by default, and then let's define and clean up
the thread support overall.
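
For reference, the distinction Brian and I are each pointing at looks roughly
like this (a paraphrased sketch, not the exact opal source):

    /* Paraphrased sketch of the current behavior; not the exact opal
     * source.  opal_mutex_lock() always takes the mutex, however the tree
     * was configured.  OPAL_THREAD_LOCK() compiles away entirely unless
     * you configure --enable-opal-multi-threads, and even then it only
     * locks when opal_using_threads() returns true at run time. */
    #if OPAL_ENABLE_MULTI_THREADS
    #define OPAL_THREAD_LOCK(mutex)             \
        do {                                    \
            if (opal_using_threads()) {         \
                opal_mutex_lock(mutex);         \
            }                                   \
        } while (0)
    #else
    #define OPAL_THREAD_LOCK(mutex)   /* compiled out: never locks */
    #endif

So enabling the configure option by default is what turns the macro back into
a real (run-time conditional) lock everywhere the code uses it.
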
>
> Brian
>
> --
>  Brian W. Barrett
>  Scalable System Software Group
>  Sandia National Laboratories
>


_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel


