So first, there's an error in the patch (e-mail with details coming
shortly, as there are many errors in the patch): there's no need for both
isends (the new one and the one that was already there).

Second, this code is a crutch around the real issue, which is that for a
very small class of applications, the way wireup occurs with InfiniBand
makes startup time consuming when the application is very asynchronous
(one process does a single send, while the other process doesn't enter
the MPI library for many minutes).  It's off by default and not
recommended for most uses.

The goal is not to have a barrier, but to have every process fully
establish at least one channel for MPI communication to every other
process.  The barrier is a side effect.  The MPI barrier isn't used
precisely because it doesn't cause every process to talk to every other
process.  The rotating-ring algorithm was chosen because we're also
trying as hard as possible to reduce single-point contention; when
everyone tried to connect at once, it caused failures either in the OOB
fabric (which I think I fixed a couple of months ago) or in the IB layer
(which seemed to be in the nature of IB).

This is not new code, and given the tiny number of users (now that the OOB
is fixed, one app that I know of at LANL), I'm not really concerned about
scalability.

Brian


> If you really want a fully featured barrier, why not use the
> collective barrier?  This double-ring barrier has really bad
> performance, and it will become a real scalability issue.
>
> Or do we really need to force this particular connection shape (left
> & right) ?
>
>    george.
>
> Modified: trunk/ompi/runtime/ompi_mpi_preconnect.c
> ==============================================================================
> --- trunk/ompi/runtime/ompi_mpi_preconnect.c  (original)
> +++ trunk/ompi/runtime/ompi_mpi_preconnect.c  2007-07-17 21:15:59 EDT
> (Tue, 17 Jul 2007)
> @@ -78,6 +78,22 @@
>
>           ret = ompi_request_wait_all(2, requests, MPI_STATUSES_IGNORE);
>           if (OMPI_SUCCESS != ret) return ret;
> +
> +        ret = MCA_PML_CALL(isend(outbuf, 1, MPI_CHAR,
> +                                 next, 1,
> +                                 MCA_PML_BASE_SEND_COMPLETE,
> +                                 MPI_COMM_WORLD,
> +                                 &requests[1]));
> +        if (OMPI_SUCCESS != ret) return ret;
> +
> +        ret = MCA_PML_CALL(irecv(inbuf, 1, MPI_CHAR,
> +                                 prev, 1,
> +                                 MPI_COMM_WORLD,
> +                                 &requests[0]));
> +        if(OMPI_SUCCESS != ret) return ret;
> +
> +        ret = ompi_request_wait_all(2, requests, MPI_STATUSES_IGNORE);
> +        if (OMPI_SUCCESS != ret) return ret;
>       }
>
>       return ret;
>
>
> On Jul 17, 2007, at 9:16 PM, jsquy...@osl.iu.edu wrote:
>
>> Author: jsquyres
>> Date: 2007-07-17 21:15:59 EDT (Tue, 17 Jul 2007)
>> New Revision: 15474
>> URL: https://svn.open-mpi.org/trac/ompi/changeset/15474
>
> _______________________________________________
> devel mailing list
> de...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>
