Hi,

I created a pull request to add persistent collective communication
requests to Open MPI. Though it is incomplete and will not be merged
into Open MPI soon, you can try out your collective algorithms on top
of my work.

  https://github.com/open-mpi/ompi/pull/2758
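
For reference, usage follows the usual persistent-request pattern.
A minimal sketch, assuming MPIX_Bcast_init takes the MPI_Bcast
arguments plus an info and a request (check the PR for the exact API):

  #include <mpi.h>
  #include <mpi-ext.h>   /* MPIX_ extensions */

  int buf[4];
  MPI_Request req;
  /* create the persistent collective request once ... */
  MPIX_Bcast_init(buf, 4, MPI_INT, 0, MPI_COMM_WORLD,
                  MPI_INFO_NULL, &req);
  for (int i = 0; i < 100; i++) {
      /* ... root refills buf here ... */
      MPI_Start(&req);                    /* start one iteration */
      MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it */
  }
  MPI_Request_free(&req);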

Takahiro Kawashima,
MPI development team,
Fujitsu

> Bradley,
> 
> 
> Good to hear that!
> 
> 
> What Jeff meant in his previous email is that since persistent
> collectives are not (yet) part of the standard, the user-visible
> functions (Pbcast_init, Pcoll_start, ...) should be part of an
> extension (e.g. ompi/mpiext/pcoll) and should be named with the
> MPIX_ prefix (e.g. MPIX_Pbcast_init).
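> 
> For example, the extension's public header (say
> ompi/mpiext/pcoll/c/mpiext_pcoll_c.h; the path and names here are
> only an illustration) would declare something like
> 
>   OMPI_DECLSPEC int MPIX_Pbcast_init(void *buf, int count,
>                                      MPI_Datatype datatype, int root,
>                                      MPI_Comm comm, MPI_Info info,
>                                      MPI_Request *request);
>   OMPI_DECLSPEC int MPIX_Pcoll_start(MPI_Request *request);
> 
> so users only ever see MPIX_ names until the standard settles.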
> 
> 
> If you can make your source code available (e.g. GitHub, Bitbucket, 
> email, ...), then we'll have more chances to review it and guide you.
> 
> 
> Cheers,
> 
> 
> Gilles
> 
> 
> On 8/1/2016 12:41 PM, Bradley Morgan wrote:
> >
> > Gilles, Nathan, Jeff, George, and the OMPI Developer Community,
> >
> > Thank you all for your kind and helpful responses.
> >
> > I have been gathering your advice and trying to put the various pieces 
> > together.
> >
> > Currently, I have managed to graft a new function MPI_LIBPNBC_Start at 
> > the MPI level with a corresponding pointer into 
> > mca->coll->libpnbc->mca_coll_libpnbc_start(), and I can get it to fire 
> > from my test code.  This required a good deal of hacking on some of the 
> > core files in trunk/ompi/mpi/c/… and trunk/ompi/mca/coll/…  Not ideal, 
> > I’m sure, but for my purposes (and level of familiarity) just getting 
> > this to fire is a breakthrough.
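> >
> > In rough outline (the function and field names below are mine, not 
> > upstream ones), the graft looks like this:
> >
> >   /* hacked-in MPI-level entry point, modeled on ompi/mpi/c/start.c */
> >   int MPI_LIBPNBC_Start(MPI_Request *request)
> >   {
> >       ompi_request_t *req = *request;
> >       /* the request remembers which communicator it belongs to */
> >       ompi_communicator_t *comm = req->req_mpi_object.comm;
> >       /* jump through the coll framework into my libpnbc component;
> >        * coll_libpnbc_start is my own pointer, not an upstream one */
> >       return comm->c_coll.coll_libpnbc_start(req);
> >   }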
> >
> > I will delve into some of the cleaner-looking methods that you all 
> > have provided; I still need much more familiarity with the codebase, as 
> > I often find myself way out in the woods :)
> >
> > Thanks again to all of you for your help.  It is nice to find a 
> > welcoming community of developers.  I hope to be in touch soon with 
> > some more useful findings for you.
> >
> >
> > Best Regards,
> >
> > -Bradley
> >
> >
> >
> >
> >> On Jul 31, 2016, at 5:28 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
> >>
> >> Bradley,
> >>
> >> We had similar needs in one of our projects, and as a quick hack we 
> >> extended the GRequest interface to support persistent requests. There 
> >> are cleaner ways, but we decided that hijacking OMPI_REQUEST_GEN was 
> >> good enough for a proof-of-concept. Add a start member to 
> >> ompi_grequest_t in request/grequest.h, then do what Nathan suggested: 
> >> extend the switch in ompi/mpi/c/start.c (and startall) and directly 
> >> call your own start function.
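> >>
> >> A minimal sketch of that hack (the greq_start member is illustrative, 
> >> not the exact name we used):
> >>
> >>   /* request/grequest.h: add a start callback to the struct */
> >>   struct ompi_grequest_t {
> >>       ompi_request_t greq_base;
> >>       /* ... existing members ... */
> >>       int (*greq_start)(struct ompi_grequest_t *greq);   /* new */
> >>   };
> >>
> >>   /* ompi/mpi/c/start.c: extend the dispatch on the request type */
> >>   ompi_request_t *req = *request;
> >>   switch (req->req_type) {
> >>   case OMPI_REQUEST_PML:
> >>       /* existing point-to-point start path */
> >>       break;
> >>   case OMPI_REQUEST_GEN:   /* hijacked for persistent collectives */
> >>       rc = ((ompi_grequest_t *)req)->greq_start((ompi_grequest_t *)req);
> >>       break;
> >>   }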
> >>
> >> George.
> >>
> >>
> >> On Sat, Jul 30, 2016 at 6:29 PM, Jeff Squyres (jsquyres) 
> >> <jsquy...@cisco.com> wrote:
> >>
> >>     Also be aware of the Open MPI Extensions framework, explicitly
> >>     intended for adding new/experimental APIs to mpi.h and the
> >>     Fortran equivalents.  See ompi/mpiext.
> >>
> >>
> >>     > On Jul 29, 2016, at 11:16 PM, Gilles Gouaillardet
> >>     <gilles.gouaillar...@gmail.com> wrote:
> >>     >
> >>     > For a proof-of-concept, I'd rather suggest you add
> >>     MPI_Pcoll_start() and add a pointer in mca_coll_base_comm_coll_t.
> >>     > If you add MCA_PML_REQUEST_COLL, then you have to update all
> >>     pml components (tedious); if you update start.c (quite
> >>     simple), then you also need to update start_all.c (less trivial).
> >>     > If the future standard mandates the use of MPI_Start and
> >>     MPI_Startall, then we will reconsider this.
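> >>     >
> >>     > As a sketch (the coll_pcoll_start names are illustrative only),
> >>     that would mean something like
> >>     >
> >>     >   /* ompi/mca/coll/coll.h: new entry in the per-comm struct */
> >>     >   struct mca_coll_base_comm_coll_t {
> >>     >       /* ... existing coll function pointers ... */
> >>     >       int (*coll_pcoll_start)(struct ompi_request_t *request,
> >>     >                               mca_coll_base_module_t *module);
> >>     >       mca_coll_base_module_t *coll_pcoll_start_module;
> >>     >   };
> >>     >
> >>     >   /* ompi/mpi/c/pcoll_start.c: thin MPI-level wrapper */
> >>     >   int MPI_Pcoll_start(MPI_Request *request)
> >>     >   {
> >>     >       ompi_communicator_t *comm =
> >>     >           (*request)->req_mpi_object.comm;
> >>     >       return comm->c_coll.coll_pcoll_start(request,
> >>     >                  comm->c_coll.coll_pcoll_start_module);
> >>     >   }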
> >>     >
> >>     > From a performance point of view, that should not change much.
> >>     > IMHO, non-blocking collectives come with a lot of overhead, so
> >>     shaving a few nanoseconds here and there is unlikely to change the
> >>     big picture.
> >>     >
> >>     > If I oversimplify libnbc, it basically schedules MPI_Isend,
> >>     MPI_Irecv and MPI_Wait (well, MPI_Test, since this is non-blocking,
> >>     but let's keep it simple).
> >>     > My intuition is that your libpnbc will post MPI_Send_init and
> >>     MPI_Recv_init, and schedule MPI_Start and MPI_Wait.
> >>     > Because of the overhead, I would only expect a marginal
> >>     performance improvement, if any.
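> >>     >
> >>     > Roughly, in sketch form (placeholder arguments, not the actual
> >>     libnbc code), the contrast between the two schedules is
> >>     >
> >>     >   /* libnbc: posts fresh requests on every collective call */
> >>     >   MPI_Isend(/* ... */);   MPI_Irecv(/* ... */);   /* per round */
> >>     >   /* ... then progressed via MPI_Test until the round completes */
> >>     >
> >>     >   /* libpnbc: match everything once, at *_init time */
> >>     >   MPI_Send_init(/* ... */);   MPI_Recv_init(/* ... */);
> >>     >   /* each start of the collective then only needs */
> >>     >   MPI_Start(&round_req);   /* plus MPI_Test to progress */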
> >>     >
> >>     > Cheers,
> >>     >
> >>     > Gilles
> >>     >
> >>     > On Saturday, July 30, 2016, Bradley Morgan <morg...@auburn.edu> wrote:
> >>     >
> >>     > Hello Gilles,
> >>     >
> >>     > Thank you very much for your response.
> >>     >
> >>     > My understanding is yes, this might be part of the future
> >>     standard, but probably not from my work alone.  I’m currently just
> >>     trying to get a proof-of-concept and some performance metrics.
> >>     >
> >>     > I have item one of your list completed, but not the others.  I
> >>     will look into adding the MCA_PML_REQUEST_COLL case to
> >>     mca_pml_ob1_start.
> >>     >
> >>     > Would it also be feasible to create a new function and pointer
> >>     in the mca_coll_base_comm_coll_t struct (i.e.
> >>     mca_coll_base_module_istart, etc.) just to get a proof of
> >>     concept?  Do you think this would be a fairly accurate
> >>     representation (in regard to performance, not necessarily
> >>     semantics) of how the standard/official implementation would work?
> >>     >
> >>     >
> >>     > Once again, thanks very much for this information!
> >>     >
> >>     >
> >>     > Regards,
> >>     >
> >>     > -Bradley
> >>     >
> >>     >
> >>     >
> >>     >> On Jul 29, 2016, at 10:54 AM, Gilles Gouaillardet
> >>     <gilles.gouaillar...@gmail.com> wrote:
> >>     >>
> >>     >> Out of curiosity, is MPI_Ibcast_init (and friends) something
> >>     that will/might be part of the future standard?
> >>     >>
> >>     >> If you want to implement this as an MCA component, then you
> >>     have (at least) to
> >>     >> - add an Ibcast_init field (function pointer) to the
> >>     mca_coll_base_comm_coll_t struct
> >>     >> - add a 'case MCA_PML_REQUEST_COLL:' in mca_pml_ob1_start
> >>     (sketched below)
> >>     >> - ensure these requests are progressed
> >>     >> - ensure these requests work with
> >>     MPI_{Wait,Test,Probe,Request_free,Cancel} and friends
> >>     >>
> >>     >> note that all coll components must initialize the new ibcast_init
> >>     field to NULL,
> >>     >> and all pml components should handle MCA_PML_REQUEST_COLL.
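> >>     >>
> >>     >> as a sketch (the coll request fields are illustrative), the
> >>     mca_pml_ob1_start change would look something like
> >>     >>
> >>     >>   /* ompi/mca/pml/ob1/pml_ob1_start.c */
> >>     >>   switch (pml_request->req_type) {
> >>     >>   case MCA_PML_REQUEST_SEND:
> >>     >>   case MCA_PML_REQUEST_RECV:
> >>     >>       /* existing point-to-point restart paths */
> >>     >>       break;
> >>     >>   case MCA_PML_REQUEST_COLL:   /* new */
> >>     >>       /* hand the request back to the coll component that
> >>     >>        * created it, so it can (re)start its schedule */
> >>     >>       rc = coll_req->coll_start(coll_req);
> >>     >>       break;
> >>     >>   }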
> >>     >>
> >>     >>
> >>     >> Cheers,
> >>     >>
> >>     >> Gilles
> >>     >>
> >>     >>
> >>     >> On Saturday, July 30, 2016, Bradley Morgan <morg...@auburn.edu> wrote:
> >>     >> Hello OpenMPI Developers,
> >>     >>
> >>     >> (I am new to the community, so please forgive me if I violate
> >>     any protocols or seem naive here…)
> >>     >>
> >>     >>
> >>     >> I am currently working on a prototype component for persistent
> >>     nonblocking collectives (ompi->mca->coll->libpnbc).
> >>     >>
> >>     >> I have integrated my new component and mapped MPI_Ibcast to my
> >>     own _init function, which initiates a request but does not start
> >>     it.  Next, I would like to create a function pointer for
> >>     MPI_Start to kick off these requests.  However, the pointer(s)
> >>     for MPI_Start live in the pml (point-to-point) framework, and its
> >>     implementation is opaque from the MCA side.  I was able to trace
> >>     the default mapping of MPI_Start for my build to
> >>     pml->ob1->pml_ob1_start.c->mca_pml_ob1_start(), but I can’t seem
> >>     to translate the magic in that path to my own component.
> >>     >>
> >>     >> Alternatively, if trying to map MPI_Start is too difficult, I
> >>     think I could also create a custom function like LIBPNBC_Start
> >>     just to get past this and begin testing, but I have not yet found
> >>     a clean way to make a custom component function visible and
> >>     usable at the MPI level.
> >>     >>
> >>     >>
> >>     >> If anyone has any advice or can direct me to any applicable
> >>     resources (mostly regarding function mapping / publishing for MPI
> >>     components) it would be greatly appreciated!
> >>     >>
> >>     >>
> >>     >> Thanks very much!