On 2021-03-02 16:50, Pedro David Marco wrote:
Tried both and with/without cache...
I think it's a glibc problem, and if it is, it could be solved with EDNS0
in the local DNS, or by forcing TCP for packet sizes over 512 bytes.
Per https://bobcares.com/blog/bind-edns/, the default EDNS0 buffer is now 4096, but
sometimes its
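If the stub resolver is the one stuck at 512 bytes, glibc can be told to use EDNS0 via resolv.conf. A minimal sketch, assuming a local caching resolver on 127.0.0.1 (adjust the nameserver to your setup):

```shell
# /etc/resolv.conf -- assumed local caching resolver on 127.0.0.1
nameserver 127.0.0.1
# Advertise EDNS0 so UDP answers larger than 512 bytes are accepted
options edns0
```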
I have set buffers to 20MB per core and the results are great:
# sysctl -w net.core.rmem_default=20971520
0% packet loss... with the default value of 200KB, packet loss easily went above 30%.
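To keep that buffer increase across reboots, a sysctl drop-in along these lines should work (the 20MB value is the one from the post; the filename is arbitrary, and raising rmem_max as well is my assumption so that explicitly requested buffers aren't capped below it):

```shell
# Persist the larger UDP receive buffers (20MB, as in the post)
cat > /etc/sysctl.d/90-udp-buffers.conf <<'EOF'
net.core.rmem_default = 20971520
net.core.rmem_max = 20971520
EOF
# Apply now and verify (requires root)
sysctl -p /etc/sysctl.d/90-udp-buffers.conf
sysctl net.core.rmem_default net.core.rmem_max
```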
You can check whether you have this problem with:
# netstat -suna
and look for errors in the UDP section.
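The same counters can also be read straight from /proc/net/snmp, which is where netstat gets them. A small sketch that pairs counter names with values; RcvbufErrors is the one that climbs when datagrams are dropped because the receive buffer overflowed:

```shell
# The first "Udp:" line in /proc/net/snmp holds counter names,
# the second holds the numbers; print them side by side.
awk '/^Udp:/ { if (!seen) { split($0, hdr); seen = 1 }
               else { for (i = 2; i <= NF; i++) print hdr[i] "=" $i } }' /proc/net/snmp
```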
--Pedro.
On Tuesday, March 2, 2021, 04:46:08 PM GMT+1, Matus UHLAR - fantomas wrote:
On 02.03.21 15:26, Pedro David Marco wrote:
>Just in case someone has this issue...
>Short version:
>In heavy load environments, SA produces more UDP traffic (especially if
>answers are big, which typically happens with TXT queries) than the Linux
>kernel can handle with default buffers (tested in Debian Buster).
On 2021-03-02 16:26, Pedro David Marco wrote:
Correct kernel UDP tuning solves the problem!
SOLVED!
Just in case someone has this issue...
Short version:
In heavy load environments, SA produces more UDP traffic (especially if answers
are big, which typically happens with TXT queries) than the Linux kernel can
handle with default buffers (tested in Debian Buster), so many SA queries never get an