Berk D. Demir wrote:
Per-Olov Sjöholm wrote:

I have read that people have tested with *very* high load with success...

I am not the best expert... but you don't say anything about the OpenBSD config. At high load you probably have to change net.inet.ip.ifq.maxlen, kern.maxclusters, net.inet.tcp.recvspace, net.inet.tcp.sendspace, net.inet.tcp.rfc1323, and kern.somaxconn, among other things... If you run out of mbuf clusters ("maxclusters"), for example, the server will freeze (as you mentioned)... Try the OpenBSD FAQ ;-)
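For what it's worth, here is a minimal sketch of what that tuning might look like in /etc/sysctl.conf. The values are illustrative assumptions, not recommendations from this thread, so measure before and after on your own traffic:

    # /etc/sysctl.conf -- illustrative values only
    net.inet.ip.ifq.maxlen=512     # IP input queue depth; absorbs forwarding bursts
    kern.maxclusters=8192          # ceiling on mbuf clusters (packet buffer memory)
    net.inet.tcp.recvspace=65536   # TCP receive buffer (host traffic, not forwarding)
    net.inet.tcp.sendspace=65536   # TCP send buffer (host traffic, not forwarding)
    net.inet.tcp.rfc1323=1         # RFC 1323 window scaling/timestamps (on by default)
    kern.somaxconn=1024            # listen(2) backlog limit (host traffic only)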

Again, net.inet.tcp.recvspace, net.inet.tcp.sendspace, rfc1323 sliding-window support, and kern.somaxconn have nothing to do with routing performance; they only affect TCP connections terminated on the host, not forwarded packets.

kern.maxclusters specifies the maximum number of mbuf clusters, and I've never seen a system freeze because of exhausted mbuf clusters. Recent kernels are intelligent enough to report that they're out of mbuf clusters via kernel messages, which can easily be traced in /var/log/messages.
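A minimal way to check this, using only tools in base (assuming a reasonably recent release):

    $ netstat -m                 # current mbuf/cluster usage vs. the kern.maxclusters limit
    $ dmesg | grep -i mbuf       # kernel complaints about exhausted clusters
    $ tail -f /var/log/messages  # the same messages, as they arrive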

BTW, the FAQ's section 6.6 has no recommendations about tuning routing performance.


After 3.8, major performance gains regarding CPU/interrupt load have been made in the em driver, among many other fixes in the driver. I can't see that you even mention which OpenBSD version you use...

And another thing... an SMP kernel won't help if you run PF (though you say you don't), as PF can't make use of SMP. Maybe it could even be worse... but I don't know for sure.

MP makes it possible to use I/O APICs, which offloads the interrupt load from the CPU. It can be a big plus.

Makes it possible? Erm, by magic? Will running that kernel ... well.
Um, I'd like to buy another clue please, Vanna.
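Sarcasm aside, you can at least observe where the interrupt load actually goes; both of these tools are in base:

    $ vmstat -i        # per-device interrupt counts and rates
    $ systat vmstat    # live view of interrupts, CPU time, and memory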

When it comes to the NICs, many on the list will probably tell you the Marvell chip is a good one. But you probably know that if you read the list. It probably won't help you if you don't tune the server right anyway...

There aren't many user-configurable knobs for achieving high forwarding and packet-handling performance, unless you come up with ultra-secret kernel patches.
Any hints on who to go to for the ultra secrets?


I am currently trying to connect to DCs over a leased GigaMAN connection. I am getting only 41 MB/s on the BSD routers without IPsec, and 7 MB/s with IPsec running. These are Sun Fire X2100s running 3.9 i386 kernels...

So far I have only found Henning's paper on performance tuning, and it seems to tell me that I am very CPU-bound when running IPsec. I can buy accelerator cards for crypto, but the performance is nowhere near what I would expect just machine to machine on a crossover cable, or through a switch, between the Broadcom cards.
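For a rough before/after comparison, something like the following works. This is only a sketch assuming iperf from ports (ttcp or netperf would do just as well), and the host name is a placeholder:

    # check whether a supported crypto accelerator attached at boot
    $ dmesg | grep -i crypto

    # on the far router (or a host behind it)
    $ iperf -s

    # on the near side: once over the plain path, once through the IPsec tunnel
    $ iperf -c remote-host -t 30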
