On Wed, 2006-02-22 at 14:54, Fabian Tillier wrote:
> On 2/22/06, amith rajith mamidala <[EMAIL PROTECTED]> wrote:
> >
> > I have a question related to the mixture of HCA bandwidths
> > in the fabric. For an upper layer like MVAPICH, "negotiating"
> > for a rate so that all the ports are "involved" can be quite
> > expensive, especially if the code falls in the critical path.
> > Can any additional support be provided by the underlying
> > SA interface, so that the upper protocol layers can do the job
> > in minimum time? This kind of support could be used not only
> > for MPI but for other stacks as well.
>
> All the rate information can be figured out at process launch time.
> If at launch time each MPI process used the SA to query for path
> information to connect to all the other processes, the application
> could also look at the returned set of paths, determine the slowest
> rate in the fabric, and use that rate for multicast traffic. Note
> that point-to-point traffic, whether reliable connected or
> unreliable datagram, would use the rate in the path record returned
> by the SA. Only multicast traffic must be aware of the rate
> limitations of the whole job.
You can also get rate information on multicast groups from the SA.

-- Hal

> Hopefully that helps.
>
> - Fab
> _______________________________________________
> openib-general mailing list
> [email protected]
> http://openib.org/mailman/listinfo/openib-general
>
> To unsubscribe, please visit http://openib.org/mailman/listinfo/openib-general
