On 09/20/2016 11:00 AM, Mintz, Yuval wrote:
> The question I raised was whether it actually makes a difference under
> such circumstances whether the device would actually filter those
> multicast addresses or be completely multicast promiscuous;
> e.g., whether it's significant to be filtering out multicast ingress
> traffic when you're already allowing 1/2 of all random multicast
> packets to be classified for the interface.
Agreed, I think this is the more interesting question here. I thought that we
would want to make sure we are using most of the bins before falling back to
multicast promiscuous. The reason being that even if it's more expensive for the
NIC to do the filtering than the multicast promiscuous mode, that cost would be
more than made up for by not having to drop the traffic higher up the stack. So
I think if we can determine the percentage of the bins that we want to use, we
can then back into the average number of filters required to get there. As I
said, I thought we would want to make sure we filled basically all the bins
(with a high probability, that is) before falling back to multicast
promiscuous, and so I threw out 2,048.
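To make the "back into the average number of filters" step concrete, here is a
minimal sketch of the expected-occupancy math, assuming filters hash uniformly
and independently into the bins (the function name and the 256-bin table size
are illustrative assumptions, not anything from the thread):

```python
import math

def filters_for_bin_fill(num_bins: int, target_fraction: float) -> int:
    """Average number of uniformly hashed multicast filters needed so the
    expected fraction of occupied hash bins reaches target_fraction.

    With n filters over B bins, the expected occupied fraction is
    1 - (1 - 1/B)**n; solving for n gives the formula below.
    """
    return math.ceil(math.log(1 - target_fraction) / math.log(1 - 1 / num_bins))

# With a 256-bin hash table (an assumption for illustration), reaching
# ~99% expected occupancy already takes over a thousand filters:
print(filters_for_bin_fill(256, 0.99))  # → 1177
```

This is consistent with throwing out a number like 2,048: well past the point
where, on average, essentially every bin is already occupied.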
> AFAIK configuring multiple filters doesn't incur any performance penalty
> on the adapter side.
> And I agree that from an 'offloading' perspective it's probably better to
> filter in HW even if the gain is negligible.
> So for the upper limit - there's not much of a reason for it; the only gain
> would be preventing the driver from temporarily allocating lots-and-lots
> of memory for an unnecessary configuration.
Ok. We already have an upper limit to an extent with
/proc/sys/net/ipv4/igmp_max_memberships. And as posted I didn't include
one because of the higher-level limits already in place.
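For reference, that existing limit can be inspected and tuned via sysctl; this
is a sketch only (the value 2048 below is just the number floated earlier in
the thread, not a recommendation):

```shell
# Current per-socket multicast group membership limit
# (mainline kernels default this to 20):
cat /proc/sys/net/ipv4/igmp_max_memberships

# Raise it, e.g. to match a larger hardware filter table (requires root):
sysctl -w net.ipv4.igmp_max_memberships=2048
```

Note this caps group memberships per socket, not filters per interface, which
is why it is only an upper limit "to an extent".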