Kewl - I'll add it in the next wave. In the meantime, we can revert this one.

Thanks!
Ralph

On Feb 6, 2014, at 9:18 AM, Joshua Ladd <josh...@mellanox.com> wrote:

> It’s been CMRed, but scheduled for 1.7.5
>  
> https://svn.open-mpi.org/trac/ompi/ticket/4185
>  
> From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Mike Dubman
> Sent: Thursday, February 06, 2014 12:17 PM
> To: Open MPI Developers
> Subject: Re: [OMPI devel] [OMPI svn] svn:open-mpi r30571 - trunk/ompi/runtime
>  
> It seems that similar code is not in the v1.7 tree.
>  
> 
> On Thu, Feb 6, 2014 at 2:40 PM, George Bosilca <bosi...@icl.utk.edu> wrote:
> This commit is unnecessary. The call to del_procs is already there, a few
> lines above your own patch. It was introduced on Jan 26, 2014 with commit
> https://svn.open-mpi.org/trac/ompi/changeset/30430.
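> 
> For reference, the sequence r30430 already added earlier in
> ompi_mpi_finalize() looks essentially like the hunk below (a sketch, not
> the verbatim trunk source; note it uses the same variables and calls as
> the new patch, which is exactly why the patch is redundant):
> 
>     ompi_proc_t **procs;
>     size_t nprocs;
> 
>     /* fetch the array of all procs known to this job */
>     if (NULL == (procs = ompi_proc_world(&nprocs))) {
>         return OMPI_ERROR;
>     }
> 
>     /* ask the PML to release its per-peer state before finalize
>        proceeds; repeating this a few lines later adds nothing */
>     if (OMPI_SUCCESS != (ret = MCA_PML_CALL(del_procs(procs, nprocs)))) {
>         free(procs);
>         return ret;
>     }
>     free(procs);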
> 
>   George.
> 
> 
> 
> On Feb 6, 2014, at 09:38, svn-commit-mai...@open-mpi.org wrote:
> 
> > Author: miked (Mike Dubman)
> > Date: 2014-02-06 03:38:32 EST (Thu, 06 Feb 2014)
> > New Revision: 30571
> > URL: https://svn.open-mpi.org/trac/ompi/changeset/30571
> >
> > Log:
> > OMPI: add call to del_procs
> >
> > fixed by AlexM, reviewed by miked
> > cmr=v1.7.5:reviewer=ompi-rm1.7
> >
> > Text files modified:
> >   trunk/ompi/runtime/ompi_mpi_finalize.c |    15 +++++++++++++++
> >   1 files changed, 15 insertions(+), 0 deletions(-)
> >
> > Modified: trunk/ompi/runtime/ompi_mpi_finalize.c
> > ==============================================================================
> > --- trunk/ompi/runtime/ompi_mpi_finalize.c    Wed Feb  5 17:49:26 2014      (r30570)
> > +++ trunk/ompi/runtime/ompi_mpi_finalize.c    2014-02-06 03:38:32 EST (Thu, 06 Feb 2014)      (r30571)
> > @@ -94,6 +94,9 @@
> >     opal_list_item_t *item;
> >     struct timeval ompistart, ompistop;
> >     ompi_rte_collective_t *coll;
> > +    ompi_proc_t** procs;
> > +    size_t nprocs;
> > +
> >
> >     /* Be a bit social if an erroneous program calls MPI_FINALIZE in
> >        two different threads, otherwise we may deadlock in
> > @@ -150,6 +153,18 @@
> >        MPI lifetime, to get better latency when not using TCP */
> >     opal_progress_event_users_increment();
> >
> > +
> > +    if (NULL == (procs = ompi_proc_world(&nprocs))) {
> > +        return OMPI_ERROR;
> > +    }
> > +
> > +    if (OMPI_SUCCESS != (ret = MCA_PML_CALL(del_procs(procs, nprocs)))) {
> > +        free(procs);
> > +        return ret;
> > +    }
> > +    free(procs);
> > +
> > +
> >     /* check to see if we want timing information */
> >     if (ompi_enable_timing != 0 && 0 == OMPI_PROC_MY_NAME->vpid) {
> >         gettimeofday(&ompistart, NULL);
> > _______________________________________________
> > svn mailing list
> > s...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/svn
> 
> _______________________________________________
> devel mailing list
> de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
