On 2014/07/20 22:15, Darryl Wisneski wrote:
> > How is fragmentation being handled? In OpenVPN or relying on the kernel
> > to do it? Or are you using small mtu anyway to avoid frags?
> 
> We are not tuning for fragmentation, nor are we setting mtu on
> the endpoint.

Doing that might be worth a try, i.e. avoid sending UDP packets that need
extra kernel work (fragmentation), since OpenVPN can handle fragmentation
itself.
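
If you want to experiment, something along these lines in the OpenVPN
config should keep the encapsulated UDP datagrams under the path MTU
(the values are only a starting point, not something I've verified on
your link):

    tun-mtu 1500
    fragment 1400   # OpenVPN-internal fragmentation; set on both ends
    mssfix 1400     # clamp TCP MSS for traffic inside the tunnel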

> Counters
>   match                            3349507           23.3/s
> 
> [snip]
> 
> everything else 0.0/s

I was really after the absolute numbers for any non-zero counters, not
just the rates.
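
If those came from pf, a quick way to see whether a particular counter is
still moving is to snapshot "pfctl -si" twice and diff the output (the
interval is arbitrary):

    pfctl -si > /tmp/pf-counters.1
    sleep 300
    pfctl -si > /tmp/pf-counters.2
    diff /tmp/pf-counters.1 /tmp/pf-counters.2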

> We have toggled net.inet.udp.sendspace and net.inet.udp.recvspace between
> 131028 and 262144 with no improvements.  Anything higher and we get a
> hosed system...

Breaking for >256K is expected.
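
For what it's worth, once you settle on values they can be persisted in
/etc/sysctl.conf so they survive a reboot (these are just the figures you
already quoted):

    net.inet.udp.recvspace=262144
    net.inet.udp.sendspace=262144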

> > > net.inet.ip.ifq.maxlen=1536
> > 
> > Monitor net.inet.ip.ifq.drops, is there an increase?
> 
> No increases in net.inet.ip.ifq.drops through time.
> 
> > This is already a fairly large buffer though (especially as I think you
> > mentioned 100Mb). How did you choose 1536?
> 
> google and trial and error.

Is that "1536 is the lowest value that avoid an increase in ifq.drops"
or something else?
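
A crude way to keep an eye on it over time, if it isn't being graphed
already (the interval is arbitrary):

    # sample the drop counter once a minute
    while :; do date; sysctl net.inet.ip.ifq.drops; sleep 60; done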

> > > kern.bufcachepercent=90         # kernel buffer cache memory percentage
> > 
> > This won't help OpenVPN. Is this box also doing other things?
> 
> This box is running IPSEC
> 
> It's got four openvpn tunnels terminated on it.
> 
> We are running collectd, symon, dhcpd.  
> 
> The load lives between 2 - 4.

Presumably a lot of disk i/o from the rrd writes then. Hmm..
Pity symon doesn't do rrdcached yet. Are you at least using rrdcached
for collectd?
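
If not, a rough sketch of what I mean, assuming rrdcached is already
running and with the socket and data paths adjusted to suit your setup:

    LoadPlugin rrdcached
    <Plugin rrdcached>
        DaemonAddress "unix:/var/run/rrdcached.sock"
        DataDir "/var/db/collectd/rrd"
        CreateFiles true
    </Plugin>

That batches the rrd updates in rrdcached instead of hitting the disk on
every collection interval.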
