Per-Olov Sjöholm wrote:

> I have read that people have tested with *very* high load with success...

> I am not the best expert... but you don't say anything about the OpenBSD config. At high load you probably have to change net.inet.ip.ifq.maxlen, kern.maxclusters, net.inet.tcp.recvspace, net.inet.tcp.sendspace, net.inet.tcp.rfc1323 and kern.somaxconn, among other things... If you for example run out of "maxclusters" the server will freeze (as you mentioned)... Try the OpenBSD FAQ ;-)
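
For reference, these sysctls can be changed at runtime with sysctl(8) or set at boot in /etc/sysctl.conf; the values below are only illustrative examples, not recommendations:

    # /etc/sysctl.conf -- example values only
    net.inet.ip.ifq.maxlen=512    # IP input queue length
    kern.maxclusters=8192         # mbuf cluster limit
    kern.somaxconn=1024           # listen(2) backlog limit

    # or at runtime:
    sysctl net.inet.ip.ifq.maxlen=512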

Again, net.inet.tcp.recvspace, net.inet.tcp.sendspace, RFC 1323 window scaling support and kern.somaxconn have nothing to do with routing performance.

kern.maxclusters specifies the maximum number of mbuf clusters, and I've never seen a system freeze because of exhausted mbuf clusters. Recent kernels are intelligent enough to report that they're out of mbuf clusters via kernel messages, which can easily be traced in /var/log/messages.
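
A quick way to check, assuming a stock syslogd setup:

    netstat -m                         # current mbuf/cluster usage vs. the limit
    grep -i mclpool /var/log/messages  # the kernel's warning about exhausted clusters

The warning usually reads something like "WARNING: mclpool limit reached; increase kern.maxclusters".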

BTW, section 6.6 of the FAQ has no recommendations about tuning routing performance.

> After 3.8, major performance gains regarding CPU/interrupt load have been made in the em driver, among many other fixes in that driver. I can't see that you even mention the OpenBSD version you use...
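
Both are trivial to check and worth including in any report:

    sysctl kern.version    # exact OpenBSD version and build date
    dmesg | grep '^em'     # em(4) interfaces and how they attached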

> And another thing... an SMP kernel won't help if you run PF (which you don't), as PF can't make use of SMP. Maybe it could even be worse... but I don't know for sure.

MP makes it possible to use the I/O APICs and so offloads the interrupt load from the CPU. It can be a big plus.
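
With the MP kernel (bsd.mp) booted, you can see whether the I/O APICs were found and how the interrupt load is distributed:

    dmesg | grep -i ioapic  # I/O APIC(s) detected at boot
    vmstat -i               # per-device interrupt counts and rates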

> When it comes to the NICs, many on the list will probably tell you the Marvell chip is a good one. But you probably know that if you read the list. It probably won't help you if you don't tune the server right anyway...

There aren't many user-configurable knobs for achieving high forwarding and packet-handling performance, unless you come up with ultra-secret kernel patches.
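
The one knob that is directly relevant to forwarding is the IP input queue; assuming your release exposes net.inet.ip.ifq.*, a minimal check looks like:

    sysctl net.inet.ip.ifq.drops         # non-zero and growing = the queue is overflowing
    sysctl net.inet.ip.ifq.maxlen=512    # if so, try a larger queue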
