Correct. But multicast packets are dropped at the QP receive level if the
app does not post enough receive buffers to absorb the data stream. The
buffers can easily be overrun if one does not code carefully, given that
the maximum number of receive WRs is around 16K. These drops occur silently.
Currently there is no accounting for these drops in the upstream kernel.
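For reference, the usual way to avoid these silent drops is to pre-post the receive queue to its full depth and repost each buffer as soon as its completion is polled. A minimal libibverbs sketch of the repost path (helper names, depths and buffer layout are my own assumptions; a real UD QP, CQ and registered MR must already be set up, and this cannot run without RDMA hardware):

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>

#define RQ_DEPTH 1024   /* keep below the HCA's max_qp_wr (~16K) */
#define BUF_SIZE 4096   /* >= path MTU + 40-byte GRH for UD QPs  */

/* Repost a single receive buffer identified by wr_id
 * (hypothetical helper; buffers assumed contiguous at `base`). */
static int repost_recv(struct ibv_qp *qp, struct ibv_mr *mr,
                       char *base, uint64_t wr_id)
{
        struct ibv_sge sge = {
                .addr   = (uintptr_t)(base + wr_id * BUF_SIZE),
                .length = BUF_SIZE,
                .lkey   = mr->lkey,
        };
        struct ibv_recv_wr wr = {
                .wr_id   = wr_id,
                .sg_list = &sge,
                .num_sge = 1,
        };
        struct ibv_recv_wr *bad;
        return ibv_post_recv(qp, &wr, &bad);
}

/* Poll completions and immediately repost each consumed buffer
 * so the receive queue never runs dry. */
static void rx_loop(struct ibv_cq *cq, struct ibv_qp *qp,
                    struct ibv_mr *mr, char *base)
{
        struct ibv_wc wc[16];
        for (;;) {
                int n = ibv_poll_cq(cq, 16, wc);
                for (int i = 0; i < n; i++) {
                        if (wc[i].status != IBV_WC_SUCCESS)
                                fprintf(stderr, "recv failed: %s\n",
                                        ibv_wc_status_str(wc[i].status));
                        /* payload starts after the 40-byte GRH:
                         * base + wc[i].wr_id * BUF_SIZE + 40 */
                        repost_recv(qp, mr, base, wc[i].wr_id);
                }
        }
}
```

Note there is still a window between the HCA consuming a WR and the repost; applications that cannot tolerate any loss typically post the full RQ_DEPTH up front and size it for the worst-case burst.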
How about when one of the destinations in the multicast group has an overloaded link to the switch (because it subscribes to many multicast groups whose combined bandwidth momentarily exceeds the bandwidth of that link)? Are the messages destined for that endpoint dropped at the switch, or is traffic to the entire multicast group delayed?
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html