Charles Amstutz ([email protected]) on 2019.01.30 23:16:17 +0000:
> Hello
>
> We are running into an issue with a lot of dropped packets where pf is
> failing to create states. We have noticed that it coincides with a fair
> amount of congestion, around 10-15/s according to 'pfctl -si'.
>
> We finally tried disabling our carp interfaces (we are using carp for
> failover) and the problem seems to go away completely. We have 53 carp
> interfaces on these two boxes and are looking for input on what might be
> causing an issue like this, where having the carp interfaces enabled causes
> such high congestion.
>
> We are running OpenBSD 6.4.
>
> Thanks,
Set sysctl net.inet.carp.log=7 (and activate carp again).
What does it show (in /var/log/messages)?
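If it helps, a rough sketch of doing that and keeping an eye on the log
(persisting the value in /etc/sysctl.conf and the grep filter are just
suggestions, not required):

sysctl net.inet.carp.log=7
echo 'net.inet.carp.log=7' >> /etc/sysctl.conf   # keep the setting across reboots
tail -f /var/log/messages | grep -i carp         # watch new carp messages as they arrive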
Also, what's the output of
sysctl net.inet.ip.ifq.drops
sysctl net.inet6.ip6.ifq.drops
netstat -m
pfctl -vsi
?
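For convenience, something like this should collect all of the above into
one file you can paste back (the output path is arbitrary):

{
    sysctl net.inet.ip.ifq.drops
    sysctl net.inet6.ip6.ifq.drops
    netstat -m
    pfctl -vsi
} > /tmp/carp-debug.out 2>&1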
Hello, here are the results:
/var/log/messages
With the logging enabled, we see what look like typical ARP 'add entry' attempts.
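Lines like that can be pulled out of the log with something along these
lines (the pattern is only a rough filter):

grep -i arp /var/log/messages | tail -n 50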
sysctl net.inet.ip.ifq.drops
net.inet.ip.ifq.drops=0
sysctl net.inet6.ip6.ifq.drops
net.inet6.ip6.ifq.drops=0
netstat -m
297 mbufs in use:
200 mbufs allocated to data
4 mbufs allocated to packet headers
93 mbufs allocated to socket names and addresses
17/104 mbuf 2048 byte clusters in use (current/peak)
99/555 mbuf 2112 byte clusters in use (current/peak)
0/40 mbuf 4096 byte clusters in use (current/peak)
0/56 mbuf 8192 byte clusters in use (current/peak)
0/14 mbuf 9216 byte clusters in use (current/peak)
0/30 mbuf 12288 byte clusters in use (current/peak)
0/24 mbuf 16384 byte clusters in use (current/peak)
0/48 mbuf 65536 byte clusters in use (current/peak)
5236/6856/524288 Kbytes allocated to network (current/peak/max)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
pfctl -vsi
Status: Enabled for 1 days 20:18:23 Debug: err
Hostid: 0x30e5b38f
Checksum: 0x0930fa9e7e5a8c4562c3c5b488715989
State Table Total Rate
current entries 7400
half-open tcp 136
searches 486306276 3048.9/s
inserts 21891932 137.3/s
removals 21884532 137.2/s
Source Tracking Table
current entries 0
searches 0 0.0/s
inserts 0 0.0/s
removals 0 0.0/s
Counters
match 39904360 250.2/s
bad-offset 0 0.0/s
fragment 0 0.0/s
short 4 0.0/s
normalize 1 0.0/s
memory 0 0.0/s
bad-timestamp 0 0.0/s
congestion 1777154 11.1/s
ip-option 0 0.0/s
proto-cksum 0 0.0/s
state-mismatch 4185 0.0/s
state-insert 0 0.0/s
state-limit 0 0.0/s
src-limit 0 0.0/s
synproxy 0 0.0/s
translate 0 0.0/s
no-route 0 0.0/s
Limit Counters
max states per rule 0 0.0/s
max-src-states 0 0.0/s
max-src-nodes 0 0.0/s
max-src-conn 0 0.0/s
max-src-conn-rate 0 0.0/s
overload table insertion 0 0.0/s
overload flush states 0 0.0/s
synfloods detected 0 0.0/s
syncookies sent 0 0.0/s
syncookies validated 0 0.0/s
Adaptive Syncookies Watermarks
start 25000 states
end 12500 states
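If it is useful, the congestion rate above can be sampled over time with a
simple loop like this (the 10 second interval is arbitrary):

while :; do
    date
    pfctl -si | grep congestion
    sleep 10
done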