Dave,
I think I know the reason, or at least part of the reason, for these
spikes. As an example, when we select among the different protocols used
to exchange a message between peers, we rely only on predefined length
thresholds and completely disregard buffer alignment.
I was planning to address this.
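To make the point concrete, here is a minimal sketch of that selection
logic as I describe it above; it is my own illustration, not the actual
Open MPI code path, and the names and threshold values (select_protocol,
EAGER_LIMIT, RNDV_LIMIT) are made up for the example:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

typedef enum { PROTO_EAGER, PROTO_SEND_PIPELINE, PROTO_RDMA } proto_t;

/* Hypothetical thresholds, stand-ins for the real eager/rendezvous limits. */
#define EAGER_LIMIT (12 * 1024)
#define RNDV_LIMIT  (128 * 1024)

/* The protocol is chosen from the length alone; the buffer pointer (and
 * therefore its alignment) is never consulted. */
static proto_t select_protocol(const void *buf, size_t len)
{
    (void)buf;
    if (len <= EAGER_LIMIT) return PROTO_EAGER;
    if (len <= RNDV_LIMIT)  return PROTO_SEND_PIPELINE;
    return PROTO_RDMA;
}

/* Natural alignment of a pointer: its lowest set bit. */
static size_t alignment_of(const void *buf)
{
    uintptr_t p = (uintptr_t)buf;
    return (size_t)(p & -p);
}

int main(void)
{
    char *base = malloc(64 * 1024 + 64);
    /* Same length, different alignment: both take the same protocol. */
    printf("aligned:   proto=%d align=%zu\n",
           select_protocol(base, 64 * 1024), alignment_of(base));
    printf("unaligned: proto=%d align=%zu\n",
           select_protocol(base + 3, 64 * 1024), alignment_of(base + 3));
    free(base);
    return 0;
}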
David,
Are you using the OB1 PML or one of our IB-enabled MTLs (UCX or MXM)? I
have access to similar cards, and I can't replicate your results. I do see
a performance loss, but nowhere near what you are seeing (it goes down
to 47 Gbps instead of 50 Gbps).
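In case it helps to pin down the code path, you can force each of them
explicitly and rerun. The component names below are the usual ones for an
IB build, but the exact list depends on how your Open MPI was configured,
so treat them as an assumption:

mpirun --mca pml ob1 --mca btl openib,vader,self -np 2 ./NPmpi
mpirun --mca pml cm --mca mtl mxm -np 2 ./NPmpi

and, if your build ships the UCX support, --mca pml ucx.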
George.
On Tue, May 2, 2017 at 4:40 PM,
I've used my NetPIPE communication benchmark (http://netpipe.cs.ksu.edu)
to measure the performance of OpenMPI and other implementations on
Comet at SDSC (FDR IB, graph attached, same results measured elsewhere too).
The uni-directional performance is good at 50 Gbps, the bi-directional
performance
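For reference, the shape of the measurement is essentially the following:
a minimal uni- vs bi-directional exchange between two ranks. This is my
own illustration rather than NetPIPE's actual code, and the message size
and repetition count are arbitrary:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NBYTES (4 * 1024 * 1024)   /* assumed message size */
#define REPS   100

int main(int argc, char **argv)
{
    int rank, peer;
    char *sbuf, *rbuf;
    double t0, uni, bi;
    MPI_Request req[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;                 /* run with exactly 2 ranks */
    sbuf = malloc(NBYTES);
    rbuf = malloc(NBYTES);

    /* Uni-directional: classic ping-pong, halve the round-trip time. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(sbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(rbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(rbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(sbuf, NBYTES, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
        }
    }
    uni = (MPI_Wtime() - t0) / (2.0 * REPS);

    /* Bi-directional: both ranks send and receive at the same time. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        MPI_Irecv(rbuf, NBYTES, MPI_BYTE, peer, 1, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(sbuf, NBYTES, MPI_BYTE, peer, 1, MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    }
    bi = (MPI_Wtime() - t0) / REPS;

    if (rank == 0)
        printf("uni: %.2f Gbps   bi (aggregate): %.2f Gbps\n",
               8.0 * NBYTES / uni / 1e9,
               2.0 * 8.0 * NBYTES / bi / 1e9);

    free(sbuf);
    free(rbuf);
    MPI_Finalize();
    return 0;
}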