On 2020-12-14 18:03, David Ahern wrote:
> On 12/14/20 3:42 AM, Thomas Karlsson wrote:
>> This patch allows the user to set and retrieve the
>> IFLA_MACVLAN_BC_QUEUE_LEN parameter via the bcqueuelen
>> command line argument.
>>
>> This parameter controls the requested size of the queue for
>> broadcast and multicast packets in the macvlan driver.
>>
>> If not specified, the driver default (1000) will be used.
>>
>> Note: The request is per macvlan, but the queue length actually
>> used per port is the maximum of any request from any macvlan
>> connected to the same port.
>>
>> For this reason, the used queue length IFLA_MACVLAN_BC_QUEUE_LEN_USED
>> is also retrieved and displayed in order to aid understanding of
>> the setting. It can, of course, not be set directly.
>>
>> Signed-off-by: Thomas Karlsson <[email protected]>
>> ---
>>
>> Note: This patch controls the parameter added in net-next
>> with commit d4bff72c8401e6f56194ecf455db70ebc22929e2
>>
>> v2 Rebased on origin/main
>> v1 Initial version
>>
>>  ip/iplink_macvlan.c   | 33 +++++++++++++++++++++++++++++++--
>>  man/man8/ip-link.8.in | 33 +++++++++++++++++++++++++++++++++
>>  2 files changed, 64 insertions(+), 2 deletions(-)
>>
>> diff --git a/ip/iplink_macvlan.c b/ip/iplink_macvlan.c
>> index b966a615..302a3748 100644
>> --- a/ip/iplink_macvlan.c
>> +++ b/ip/iplink_macvlan.c
>> @@ -30,12 +30,13 @@
>>  static void print_explain(struct link_util *lu, FILE *f)
>>  {
>>  	fprintf(f,
>> -		"Usage: ... %s mode MODE [flag MODE_FLAG] MODE_OPTS\n"
>> +		"Usage: ... %s mode MODE [flag MODE_FLAG] MODE_OPTS [bcqueuelen BC_QUEUE_LEN]\n"
>> 		"\n"
>> 		"MODE: private | vepa | bridge | passthru | source\n"
>> 		"MODE_FLAG: null | nopromisc\n"
>> 		"MODE_OPTS: for mode \"source\":\n"
>> -		"\tmacaddr { { add | del } <macaddr> | set [ <macaddr> [ <macaddr> ... ] ] | flush }\n",
>> +		"\tmacaddr { { add | del } <macaddr> | set [ <macaddr> [ <macaddr> ... ] ] | flush }\n"
>> +		"BC_QUEUE_LEN: Length of the rx queue for broadcast/multicast: [0-4294967295]\n",
>
> Are we really allowing a BC queue up to 4G? Seems a bit much. Is a u16
> and 64k not more than sufficient?
>
No, 64k is not sufficient when you have a very high packet rate and very small packets. I can easily see myself receiving more than 300 kpps, and 65k is then only around 200 ms of buffer headroom, which I don't consider much of a safety margin at all, especially if the incoming data is not perfectly spaced out but rather arrives in bursts. If you start adding 10 Gbps cards to the mix you could get 10 times that rate, and the buffer would be down to only 20 ms. Then we would be back in the situation where unicast works fine but multicast does not. So a larger maximum than 64k is definitely needed.

The reason I didn't add an upper limit besides the u32 was that I didn't want to pick an arbitrary limit that works for me now but might not support someone else's use case later. I'm now looking at net.core.netdev_max_backlog for inspiration, which, at least on my system, seems to have a limit of 2147483647 (signed 32-bit int). Perhaps this setting could be aligned with that number instead of using the full u32 range, since the two settings are somewhat related, if you prefer that?
