On Mon, Jan 22, 2018 at 11:20:53PM +0100, Matteo Croce wrote:
> When using the kernel datapath, OVS allocates a pool of sockets to handle
> netlink events. The number of sockets is: ports * n-handler-threads.
> n-handler-threads is user configurable but defaults to the number of cores.
>
> On machines with lots of CPUs and ports, the number of sockets easily hits
> the process file descriptor limit, which is 65536, and will cause
> ovs-vswitchd to abort.
>
> Change the number of allocated sockets to just n-handler-threads which,
> if set slightly greater than the number of cores, is enough to handle the
> netlink events.
>
> Replace the struct dpif_channel array with a single dpif_channel instance
> and edit the code accordingly. By doing so, the port idx information is lost
> in the upcall event code, so the call to report_loss() must be commented out,
> as it already is in the Windows code path.
>
> Signed-off-by: Matteo Croce <[email protected]>
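
To put rough numbers on the scaling problem described above, here is a
back-of-the-envelope sketch.  This is not code from the patch, and the
port and thread counts are made-up examples:

/* Sketch only: made-up port/thread counts, not taken from the patch,
 * showing how fast ports * n-handler-threads reaches the fd limit. */
#include <stdio.h>

int main(void)
{
    unsigned int ports = 1024;            /* hypothetical number of ports */
    unsigned int n_handler_threads = 64;  /* defaults to the number of cores */
    unsigned int fd_limit = 65536;

    unsigned long per_port = (unsigned long) ports * n_handler_threads;

    printf("per-port channels:   %lu sockets (fd limit %u)\n",
           per_port, fd_limit);
    printf("per-thread channels: %u sockets\n", n_handler_threads);
    return 0;
}

With 1024 ports and 64 handler threads the per-port scheme already needs
65536 sockets, i.e. the whole limit quoted above, before ovs-vswitchd
opens any other descriptor.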
Per-port sockets help OVS to improve fairness, so that a single busy port
can't monopolize all slow-path resources. We can't just throw that away.
If there's a problem with too many netlink sockets, let's come up with a
solution, but it can't be to eliminate fairness entirely.
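
To illustrate the fairness point, a toy model follows.  It is not OVS
code; the buffer size and traffic mix are invented.  Each "socket" is a
bounded buffer, and an upcall that does not fit is dropped, which is
roughly what happens when a netlink socket's receive buffer overflows:

/* Toy model, not OVS code: port 0 floods, port 1 sends a trickle.
 * With one shared socket the flood fills the buffer and port 1's
 * upcalls are dropped too; with per-port sockets the damage stays
 * confined to port 0. */
#include <stdio.h>

#define SOCK_CAPACITY 100   /* invented receive-buffer size, in upcalls */

struct sock {
    int queued;
    int dropped;
};

/* Returns 1 if the upcall was dropped because the buffer was full. */
static int upcall(struct sock *s)
{
    if (s->queued < SOCK_CAPACITY) {
        s->queued++;
        return 0;
    }
    s->dropped++;
    return 1;
}

int main(void)
{
    struct sock shared = { 0, 0 };                    /* single channel */
    struct sock per_port[2] = { { 0, 0 }, { 0, 0 } }; /* one per port */
    int shared_port1_drops = 0;

    for (int i = 0; i < 1000; i++) {   /* port 0 floods */
        upcall(&shared);
        upcall(&per_port[0]);
    }
    for (int i = 0; i < 10; i++) {     /* port 1 sends a trickle */
        shared_port1_drops += upcall(&shared);
        upcall(&per_port[1]);
    }

    printf("shared socket:    port 1 lost %d of its 10 upcalls\n",
           shared_port1_drops);
    printf("per-port sockets: port 1 lost %d of its 10 upcalls\n",
           per_port[1].dropped);
    return 0;
}

In the shared case port 1 loses all 10 of its upcalls to port 0's flood;
with per-port sockets it loses none.  That is the isolation the current
per-port channel array buys us.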
