Hi Joshua,
Thank you very much for your help.
I'm replying so late because I wanted to confirm the suggested solution
with MXM first.
Unfortunately, I haven't been able to get MXM working yet (the two servers
cannot communicate over MXM).
I'll tell you if MXM solves the problem after I
Hi,
There is a known issue in ConnectX-4 which impacts RDMA_READ bandwidth with
a single QP. The overhead in the HCA of processing a single RDMA_READ
response packet is too high due to the need to lock the QP. With a small
MTU (as is the case with Ethernet packets), the impact is magnified.
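One way to see whether the single-QP bottleneck described above is what limits your bandwidth is to compare RDMA_READ throughput with one QP versus several QPs using the perftest tools. This is only a sketch; the device name (mlx5_0) and server hostname are placeholders for your setup:

```shell
# On the server side (device name is an assumption; check with ibv_devices):
ib_read_bw -d mlx5_0

# On the client side, first with a single QP:
ib_read_bw -d mlx5_0 <server-hostname>

# Then with multiple QPs (-q sets the number of QPs).
# If bandwidth scales up noticeably, the single-QP RDMA_READ
# response-processing overhead is likely the limiter:
ib_read_bw -d mlx5_0 -q 4 <server-hostname>
```

If the multi-QP run reaches close to line rate while the single-QP run does not, that points at the per-QP locking overhead rather than the fabric.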
Hi all,
Sorry for resubmitting this problem; I noticed I forgot to add a subject
to my last email.
I encountered a performance problem when testing Open MPI over 100 Gbps
RoCE.
I have two servers, each with a Mellanox ConnectX-4 100 Gbps RoCE NIC,
connected to each other.
I used the Intel MPI Benchmarks
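For reference, a typical way to run the Intel MPI Benchmarks between the two servers with Open MPI over RoCE looks like the sketch below. The hostnames, the HCA port selector (mlx5_0:1), and the use of the openib BTL are assumptions about this setup, not taken from the original message:

```shell
# Run IMB PingPong across the two nodes over the RoCE interface.
# Hostnames and device/port are placeholders for the actual cluster.
mpirun -np 2 -H server1,server2 \
       --mca btl openib,self,sm \
       --mca btl_openib_if_include mlx5_0:1 \
       ./IMB-MPI1 PingPong
```

The reported bandwidth for large messages can then be compared against the raw ib_read_bw/ib_write_bw numbers to separate MPI-level overhead from HCA-level limits.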