On Tue, Mar 30, 2021 at 11:31:35AM -0500, Ansis wrote:
> On Mon, Mar 29, 2021 at 6:21 PM Ben Pfaff <[email protected]> wrote:
> >
> > On Mon, Mar 29, 2021 at 05:26:34PM -0500, Ansis Atteka wrote:
> > > Under high load I observed that the Netlink buffer constantly
> > > fills up for daemons querying a Conntrack Table that has a
> > > lot of entries in it:
> > >
> > > netlink_notifier|WARN|netlink receive buffer overflowed
> > >
> > > This patch mitigates the problem by increasing the socket
> > > receive buffer size.  Ideally we would calculate the required
> > > buffer size, but that would be a more sophisticated solution
> > > than simply increasing the buffer size.
> > >
> > > Signed-off-by: Ansis Atteka <[email protected]>
> > > VMware-BZ: #2724821
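
For anyone following along, the change boils down to enlarging the
receive buffer on the Netlink socket, along these lines (a sketch only;
the helper name and any particular size are illustrative, not what the
patch actually uses):

    #include <sys/socket.h>

    /* Illustrative sketch: enlarge a netlink socket's receive buffer.
     * SO_RCVBUFFORCE lets a privileged (CAP_NET_ADMIN) process exceed
     * the net.core.rmem_max sysctl; fall back to plain SO_RCVBUF for
     * unprivileged callers. */
    static int
    set_rcvbuf(int fd, int bufsize)
    {
        if (!setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE,
                        &bufsize, sizeof bufsize)) {
            return 0;
        }
        return setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                          &bufsize, sizeof bufsize);
    }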
> >
> > Are you sure that it's queries that cause overflows?  Queries retrieve
> > individual records, which would not overflow the 1 MB receive buffer
> > that was used before.  Dumps can retrieve large numbers of records,
> > but the kernel doesn't write them all out in one go; it writes them
> > gradually, as userspace reads them.
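
In other words, a dump consumer only ever needs one recv()'s worth of
buffer at a time.  Roughly, the usual read loop looks like this (a
sketch, not OVS's actual code; assume a request with NLM_F_DUMP set
has already been sent on "fd"):

    #include <linux/netlink.h>
    #include <sys/socket.h>

    /* Sketch: drain a netlink dump.  The kernel hands over at most one
     * recv()'s worth of records at a time, so a huge conntrack table
     * never has to fit in the socket buffer all at once.  Error
     * handling is omitted. */
    static void
    drain_dump(int fd)
    {
        char buf[65536];
        ssize_t n;

        while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
            struct nlmsghdr *nlh;

            for (nlh = (struct nlmsghdr *) buf; NLMSG_OK(nlh, n);
                 nlh = NLMSG_NEXT(nlh, n)) {
                if (nlh->nlmsg_type == NLMSG_DONE) {
                    return;         /* End of dump. */
                }
                /* ...process one record... */
            }
        }
    }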
> >
> > Usually, an overflow happens because there's a Netlink socket
> > that's subscribed to receive updates from some particular subsystem
> > (I guess that's conntrack in this case).  Those updates arrive
> > asynchronously and can overflow the buffer if they arrive faster
> > than userspace can retrieve and process them.
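
For conntrack, that kind of subscription looks something like the
following (a hedged sketch using the legacy nl_groups bitmask from
<linux/netfilter/nfnetlink_compat.h>; illustrative, not OVS code):

    #include <linux/netlink.h>
    #include <linux/netfilter/nfnetlink.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Sketch: subscribe to conntrack events.  Every new, updated, or
     * destroyed connection produces an asynchronous notification, so
     * a reader that falls behind eventually gets ENOBUFS from recv()
     * on this socket. */
    static int
    open_ct_events(void)
    {
        struct sockaddr_nl addr;
        int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER);

        if (fd < 0) {
            return -1;
        }
        memset(&addr, 0, sizeof addr);
        addr.nl_family = AF_NETLINK;
        addr.nl_groups = NF_NETLINK_CONNTRACK_NEW
                         | NF_NETLINK_CONNTRACK_UPDATE
                         | NF_NETLINK_CONNTRACK_DESTROY;
        return bind(fd, (struct sockaddr *) &addr, sizeof addr) ? -1 : fd;
    }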
> >
> > I can believe that's the problem here.  Is it?  If so, then the
> > solution is probably right, but the description of the problem is
> > slightly wrong.
> 
> Thanks, Ben.  Would this amendment to the commit message better
> describe the situation:
> 
> Under high load I observed that the Netlink socket buffer constantly
> fills up for daemons listening for Conntrack Table notifications:

If that's what you observed, then that's perfect, thanks.