On 12 Nov 2018, at 3:44, Eugene Grosbein wrote:

My additions are mostly for Wolfgang,

12.11.2018 6:23, Wolfgang Zenker wrote:

on a jail with quite a lot of somewhat bursty network traffic I have
recently been getting warnings from netdata about packets being dropped
because net.route.netisr_maxqlen is too small. Before I set this value
to some random number, I'd like to find out what it actually means.
A search for documentation turned up nothing; a look at the sources
found that it is used for the size of a "software interrupt queue" in
epair(4). But what does it mean? And does this give me enough
information to find a good value to set for this sysctl?

netisr packet queues hold packets that have been received by an interface
but not yet processed by the destined subsystem or userland application,
which may be short of CPU cycles or blocked for some reason.
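To see where you currently stand, you can query the queue length and its
hard cap in one go (a minimal check; the exact set of *netisr_maxqlen
sysctls varies a bit by release):

# current queue length and the ceiling it may not exceed
sysctl net.route.netisr_maxqlen net.isr.maxqlimit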

First, the system won't allow you to raise net.route.netisr_maxqlen above the limit net.isr.maxqlimit.
The limit itself can be changed via /boot/loader.conf and a reboot; its default value is 10240.
I generally raise the limit up to 102400 for hosts with heavy/bursty traffic. Note that actually
increasing net.route.netisr_maxqlen somewhat increases kernel memory usage, and that can matter
for a 32-bit kernel and/or a system with a very small amount of RAM.
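For example, in /boot/loader.conf (102400 is the value I use on busy hosts;
treat it as an illustration, not a universal recommendation):

# raise the ceiling that all netisr_maxqlen values are checked against
net.isr.maxqlimit="102400"

followed by a reboot.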

There may be several netisr packet queues in the system, and raising net.route.netisr_maxqlen allows all of them to grow. epair(4) has its own setting, net.link.epair.netisr_maxqlen, which defaults to net.route.netisr_maxqlen if unset, so you may want to start experimenting with net.link.epair.netisr_maxqlen first, instead of the system-global net.route.netisr_maxqlen.
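A minimal sketch of that approach (2048 is an arbitrary example value; if
the sysctl happens to be read-only at runtime on your release, put the same
assignment into /boot/loader.conf instead):

# inspect the epair-specific queue length, then raise only it
sysctl net.link.epair.netisr_maxqlen
sysctl net.link.epair.netisr_maxqlen=2048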

Don't set net.route.netisr_maxqlen to a random value; instead, double its current value and see whether that is enough. If not, double it again. If 4096 appears not to be enough, you should check your applications to find out why they can't keep up with the incoming traffic rate.
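For instance, assuming the usual netisr default of 256, the doubling
sequence would be 512, 1024, 2048, 4096. Each step is one line in
/boot/loader.conf (this tunable is typically not writable at runtime),
e.g. for the first step:

net.route.netisr_maxqlen="512"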

Also, if you have multiple epair interface pairs and a multi-core CPU, you might want to experiment with these settings in loader.conf (they never became the default, I think):

net.isr.bindthreads=1
net.isr.maxthreads=256 # net.isr will always reduce it to mp_cpus

which should help balance the load across the cores. Note: these changes also affect all other traffic going through the netisr framework.
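After a reboot you can check how many netisr threads were actually started
and whether binding took effect (net.isr.numthreads is read-only and reports
the effective count):

sysctl net.isr.numthreads net.isr.bindthreads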

The netisr(9) man page documents some of these fields, but not everything. The source code does have a lot of comments, and if someone wanted to improve the man page, that might be a good start:
https://svnweb.freebsd.org/base/head/sys/net/netisr.c?annotate=326272#l159


netstat -Q is also a good tool for monitoring and diagnostics.
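For example (the exact layout varies by release; the per-workstream "Drops"
column is the one that shows whether a queue limit is being hit):

# netisr configuration plus per-protocol queue and drop statistics
netstat -Q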


/bz
