Michael S. Tsirkin wrote:
Quoting Or Gerlitz <[EMAIL PROTECTED]>:
Subject: Re: [Bug 506] IPoIB IPv4 multicast throughput is poor

Michael S. Tsirkin wrote:
The low throughput is a major issue, though.  Shouldn't the IP multicast
throughput be similar to the UDP unicast throughput?
Is the send side a send-only member of the multicast group, or a full member?
The join state (full member / send-only non-member / non-member) is something communicated from the ULP through the ib_sa module to the IB SA.
I don't see how the host ib driver becomes aware of it.

The current ipoib implementation for send-only joins is to join as a full member but not to attach its UD QP to that group.
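To make that distinction concrete, here is a small standalone C sketch of the behavior described above, assuming the standard IBA MCMemberRecord JoinState bit encoding (bit 0 full member, bit 1 non-member, bit 2 send-only non-member); the helper name is hypothetical, not the actual ipoib code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* JoinState bits from the IBA MCMemberRecord definition. */
enum {
	IB_JOIN_FULL_MEMBER         = 1 << 0,
	IB_JOIN_NON_MEMBER          = 1 << 1,
	IB_JOIN_SENDONLY_NON_MEMBER = 1 << 2,
};

/*
 * Hypothetical helper modeling the ipoib behavior described above:
 * both kinds of join ask the SA for full membership; only the
 * QP-attach step differs.
 */
static void ipoib_model_join(bool send_only)
{
	uint8_t join_state = IB_JOIN_FULL_MEMBER;   /* even for send-only */

	printf("SA join request, JoinState = 0x%x\n", join_state);

	if (!send_only)
		printf("attach UD QP to the MGID (receive traffic)\n");
	else
		printf("skip attach: TX only, no RX from this group\n");
}

int main(void)
{
	ipoib_model_join(false);  /* receiver: full join + attach */
	ipoib_model_join(true);   /* sender-only: full join, no attach */
	return 0;
}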

I think so too. So what does the test do? Is it a send-only join?

On the client side, when running iperf with -cu ipv4-multicast-address, iperf just sends packets to that destination. My understanding is that the ipoib xmit routine calls ipoib_mcast_send(), which senses that it is a send-only join, etc.
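For reference, a deliberately simplified userspace model of that TX-side decision; the names and fields below are illustrative, not the actual ipoib source:

#include <stdbool.h>
#include <stdio.h>

struct mcast_group {
	bool joined;       /* SA join completed? */
	bool attached;     /* UD QP attached (receivers only) */
};

/* Hypothetical model of the decision ipoib_mcast_send() makes. */
static void ipoib_mcast_send_model(struct mcast_group *grp)
{
	if (!grp->joined) {
		/* First TX to this MGID: kick off a send-only join
		 * and queue the packet until the SA answers. */
		printf("queue skb, start send-only join\n");
		grp->joined = true;    /* pretend the join completed */
		return;
	}
	/* Joined: post the send to the UD QP using the group's
	 * address handle; no attach is needed just to transmit. */
	printf("post send to the MGID (attached=%d)\n", grp->attached);
}

int main(void)
{
	struct mcast_group grp = { 0 };

	ipoib_mcast_send_model(&grp);  /* triggers the send-only join */
	ipoib_mcast_send_model(&grp);  /* subsequent packets just send */
	return 0;
}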

On the server side, when running iperf with -suB ipv4-multicast-address, iperf issues an IP_ADD_MEMBERSHIP setsockopt call on its socket. The kernel uses ip_ib_mc_map() to compute the L2 multicast address and then calls the ipoib device's set_multicast_list function, which initiates a full join plus an attach to this MGID.
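For illustration, here is a userspace sketch of that address mapping as I read the IPoIB spec (MGID = ff1S:401b:<P_Key>::<lower 28 bits of the group>, prefixed by the 4-byte multicast QPN); the byte layout is my reconstruction of the mapping, not a copy of the kernel's ip_ib_mc_map(), and the scope nibble and P_Key below are example values:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define INFINIBAND_ALEN 20  /* 1 flags byte + 3 QPN bytes + 16 GID bytes */

static void ipv4_mc_to_ipoib(uint32_t group_host_order, uint16_t pkey,
			     uint8_t scope, uint8_t buf[INFINIBAND_ALEN])
{
	memset(buf, 0, INFINIBAND_ALEN);

	buf[1] = buf[2] = buf[3] = 0xff;      /* multicast QPN 0xffffff */

	buf[4] = 0xff;                        /* GID: multicast prefix */
	buf[5] = 0x10 | (scope & 0xf);        /* transient flag + scope */
	buf[6] = 0x40;                        /* IPv4 signature 0x401b */
	buf[7] = 0x1b;
	buf[8] = pkey >> 8;
	buf[9] = pkey & 0xff;
	/* bytes 10..15 stay zero */
	buf[16] = (group_host_order >> 24) & 0x0f; /* lower 28 bits only */
	buf[17] = (group_host_order >> 16) & 0xff;
	buf[18] = (group_host_order >> 8) & 0xff;
	buf[19] = group_host_order & 0xff;
}

int main(void)
{
	uint8_t hw[INFINIBAND_ALEN];
	int i;

	/* e.g. 239.1.2.3 on the default full-membership P_Key 0xffff */
	ipv4_mc_to_ipoib(ntohl(inet_addr("239.1.2.3")), 0xffff, 0x2, hw);

	for (i = 0; i < INFINIBAND_ALEN; i++)
		printf("%02x%s", hw[i], i == INFINIBAND_ALEN - 1 ? "\n" : ":");
	return 0;
}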

If it's a full join, the HCA creates extra loopback traffic which then has
to be discarded, and which might explain the performance degradation.

Can you explain what --is-- the trigger for the "loopback channel" creation? My thinking is that it should be conditioned on having some QP attached to this MGID, which does not seem to be the case in this scenario.

That's what I'd expect.

Is this documented anywhere (e.g. the Mellanox PRM)?

Or.



