Ido Shamai wrote:
> I will check the issue now.

Thinking a little further on this and talking with some colleagues, the question came up whether the issue could be related to IB credits/starvation and/or protocol/interoperability with switches.
In an attempt to eliminate that, I repeated the test, this time in loopback, i.e. I ran ib_send_lat once with mtu=256 and once with mtu=2048, with both client and server running on the same node/HCA. The result is the same: mtu=256 produces much better latency for large messages, 1k and onward. For example, at msg size=8k the latency is ~7us with mtu=256 and ~9us with mtu=2k.

I recalled that on the 2nd generation HCA (Tavor), if mtu=2048 is used, the bandwidth is severely damaged (hence the OFED tavor quirk and friends). Can this problem, which I see on the 4th generation HCA, be somehow related?

Or.
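P.S. For reference, the loopback repro was roughly of this form, using perftest's usual -d/-m/-s flags; mlx4_0 is only a placeholder for the actual device name on the node:

    # terminal 1 - server side, path MTU 256, 8KB messages
    ib_send_lat -d mlx4_0 -m 256 -s 8192

    # terminal 2 - client side, pointed back at the same node for loopback
    ib_send_lat -d mlx4_0 -m 256 -s 8192 localhost

    # repeat both sides with -m 2048 to compare the large-MTU latency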
