On Jan 23, 2013, at 7:24 AM, "Barrett, Brian W" <bwba...@sandia.gov> wrote:

> That's not entirely true; there's some state that's required to be held by
> the RTE framework (the ompi_process_info structure), but it's minimal and
> does not scale with the number of peers in a job.

Sorry - I guess I don't consider that "state", since it doesn't change over the 
app's lifecycle. However, it is true that some amount of info is required - a 
review of orte/util/proc_info.h will show exactly what that is.
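
For anyone who doesn't want to open the header, the flavor of that info is 
roughly the following (an illustrative sketch only - the field names here are 
hypothetical, and orte/util/proc_info.h remains the authoritative list):

    /* Illustrative sketch -- see orte/util/proc_info.h for the real fields */
    struct ompi_process_info_t {
        ompi_process_name_t my_name;    /* this proc's identity (jobid/vpid) */
        pid_t               pid;        /* local OS process id */
        uint32_t            num_procs;  /* number of procs in the job */
        char               *nodename;   /* node this proc is running on */
    };

Note that it is a fixed-size, per-process blob - nothing in it grows with the 
number of peers.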

Again, the primary point is that this is a thin wrapper over ORTE, so the 
components have to provide the same functionality but are free to do so however 
they choose.
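
As a concrete (and purely hypothetical) illustration of what "thin wrapper" 
means here, the ORTE component largely reduces to renames of this sort, while 
other components back the same symbols with their own code:

    /* Hypothetical sketch of the ORTE component's mapping -- the framework
     * symbols simply alias the existing ORTE entry points */
    typedef orte_process_name_t ompi_process_name_t;
    #define ompi_process_info      orte_process_info
    #define ompi_rte_abort(...)    orte_errmgr.abort(__VA_ARGS__)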

> 
> In terms of interface, there are now three MPI frameworks which encompass
> the set of functionality the MPI layer needs: rte, pubsub, and dpm (the
> last two are the dynamic process stuff).  The RTE framework is a fairly
> small set of functions, probably 20?  I'm hoping we can shrink it slightly
> over time, but it's going to require some thought and changes to the OMPI
> layer, so I didn't want to do it all in one go.

FWIW: we know those interfaces are going to change (a) when the BTLs move to 
the OPAL layer, and (b) as we reduce/remove the modex requirement. As we 
discussed on the call yesterday, the trunk is going to undergo a LOT of change 
over roughly the next six months, so these interfaces are by no means set in 
concrete.
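
To give a feel for the granularity Rich was asking about, the surface looks 
roughly like this (names illustrative - the branch is authoritative):

    /* Rough shape of the ompi/mca/rte interface: on the order of 20 calls
     * covering init/finalize, process naming, error handling, and the
     * modex.  Illustrative only -- see the headers in the branch. */
    int   ompi_rte_init(int *argc, char ***argv);
    int   ompi_rte_finalize(void);
    void  ompi_rte_abort(int error_code, char *fmt, ...);
    int   ompi_rte_modex(void);    /* endpoint info exchange */
    char *ompi_rte_print_process_name(const ompi_process_name_t *name);

In the ORTE component each of those maps one-to-one onto an existing ORTE 
entry point.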


> 
> Brian
> 
> On 1/23/13 8:03 AM, "Ralph Castain" <r...@open-mpi.org> wrote:
> 
>> I'm not entirely sure what you're asking here. There is no state at all
>> in the MPI layer - just a set of function calls. Each component in the
>> ompi/mca/rte framework is required to map those function calls to their
>> own implementation. The function calls themselves are just a rename of
>> the current ORTE calls, so the implementations must provide the same
>> functionality - they are simply free to do so however they choose.
>> 
>> 
>> On Jan 22, 2013, at 11:31 PM, Richard Graham <richa...@mellanox.com>
>> wrote:
>> 
>>> Brian,
>>> First - thanks.  I am very happy this is proceeding.
>>> General question here - do you have any idea how much global state
>>> sits behind the current implementation?  What I am trying to gauge is
>>> at what level of granularity one can bring in additional capabilities.
>>> I have not looked in detail yet, but will in the near future.
>>> 
>>> Thanks,
>>> Rich
>>> 
>>> -----Original Message-----
>>> From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On
>>> Behalf Of Barrett, Brian W
>>> Sent: Monday, January 21, 2013 9:31 PM
>>> To: Open MPI Developers
>>> Subject: [OMPI devel] RFC: RTE Framework
>>> 
>>> Hi all -
>>> 
>>> As discussed at the December developer's meeting, a number of us have
>>> been working on a framework in OMPI to encompass the RTE resources
>>> (typically provided by ORTE).  This follows on work Oak Ridge did on the
>>> ORCA layer, which ended up having a number of technical challenges and
>>> was dropped for a simpler approach.
>>> 
>>> The interface is still a work in progress, designed around the concept
>>> that the ORTE component is a thin renaming of ORTE itself (this was one
>>> of the points the ORTE developers felt strongly about).  We think it's
>>> ready for comments and for coming into the trunk, so we are trying to
>>> get it looked at by a broader community.  The Mercurial repository is
>>> available at:
>>> 
>>> https://bitbucket.org/rhc/ompi-trunk
>>> 
>>> This work is focused only on the creation of a framework to encompass
>>> the RTE interface between OMPI and ORTE.  There are currently two
>>> components: the ORTE component and a test component implemented over
>>> PMI.  The PMI component is only really useful if ORTE is disabled at
>>> autogen time via the --no-orte option.  Future work to build against
>>> an external OMPI (in progress, on a different branch) will make using
>>> non-orte components slightly more useful.
>>> 
>>> Anyway, if there aren't any major comments, I'll plan on bringing this
>>> work to the trunk this weekend (Jan 26/27).
>>> 
>>> Brian
>>> 
>>> --
>>> Brian W. Barrett
>>> Scalable System Software Group
>>> Sandia National Laboratories
>>> 
>> 
> 
> 
> --
>  Brian W. Barrett
>  Scalable System Software Group
>  Sandia National Laboratories
> 

