On Fri, Apr 07, 2006 at 12:17:58AM +0200, Per-Olov Sjöholm wrote:
> On Thursday 06 April 2006 23.08, Claudio Jeker wrote:
> > On Thu, Apr 06, 2006 at 11:47:16PM +0300, Claudiu Pruna wrote:
> > > Hi there list,
> > >
> > > I got into a situation at work where I have an OpenBSD 3.9 amd64 router
> > > acting as a BGP and OSPF router. It has to cope with 100Mbps and
> > > approx. 15,000 packets per second, but it can't: at about 10k pps I see
> > > around 70% CPU utilisation on interrupt, and all the traffic becomes an
> > > extreme sport. It is an Intel P4 3GHz em64 with 512MB of RAM and 2 Intel
> > > Pro/100 (fxp) network cards.
> > >
> > > Any idea if/how I can "jump" over the 10k barrier?
> > >
> > > P.S.: Claudio, thanks for the advice about the 3.9 bgpd version and
> > > additive communities; it works smoothly.
> > >
> > > Thanks for any suggestion or advice.
> >
> > Switch to i386. amd64 has some interrupt problems; the amd64 box I tested
> > once maxed out at 80kpps but did 450kpps in i386 mode.
>
> Hi Claudio
>
> What CPU, network cards and pf ruleset size did you use during the test when
> the server handled 450kpps?
>
CPU (actually two CPUs on the board):

cpu0 at mainbus0: (uniprocessor)
cpu0: AMD Engineering Sample, 2592.68 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,NXE,MMXX,FFXSR,LONG,3DNOW2,3DNOW
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 1MB 64b/line 16-way L2 cache
cpu0: ITLB 32 4KB entries fully associative, 8 4MB entries fully associative
cpu0: DTLB 32 4KB entries fully associative, 8 4MB entries fully associative

Network cards:

bge0 at pci2 dev 9 function 0 "Broadcom BCM5704C" rev 0x03, BCM5704 A3 (0x2003): irq 10, address 00:e0:81:27:e0:a9
brgphy0 at bge0 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0
bge1 at pci2 dev 9 function 1 "Broadcom BCM5704C" rev 0x03, BCM5704 A3 (0x2003): irq 5, address 00:e0:81:27:e0:aa
brgphy1 at bge1 phy 1: BCM5704 10/100/1000baseT PHY, rev. 0

PF was disabled (enabling PF with 10 or 20 rules (no states) resulted in a
20-30% drop).

At the time we measured, em(4) was slower (300-350kpps), but fixes went in
afterwards to remove the bottlenecks in the em(4) driver.

-- 
:wq Claudio
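A stateless test ruleset of the kind mentioned above might look like the sketch below. Interface names are assumptions, not from the actual test setup; note that in current pf `keep state` is the default, so `no state` must be written explicitly, whereas in the 3.9 era rules were stateless unless `keep state` was given.

```
# pf.conf sketch: small stateless ruleset, similar in spirit to the
# 10-20 rule test above (interfaces bge0/bge1 are assumptions)
pass in  on bge0 all no state
pass out on bge0 all no state
pass in  on bge1 all no state
pass out on bge1 all no state
```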
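For anyone wanting to put a rough number on their own box, pps can be estimated by sampling an interface's input-packet counter twice (e.g. the Ipkts column of `netstat -I bge0`) and dividing the delta by the interval. A minimal sketch; the counter values below are made-up placeholders, not measurements from this thread:

```shell
# Estimate packets per second from two counter samples.
# pkts_t0 and pkts_t1 stand in for Ipkts readings taken
# `interval` seconds apart (placeholder values, not real data).
pkts_t0=1234567          # first sample
pkts_t1=1334567          # second sample, ten seconds later
interval=10              # seconds between the two samples
pps=$(( (pkts_t1 - pkts_t0) / interval ))
echo "${pps} pps"        # prints "10000 pps" for these placeholders
```

In practice you would take the two samples with `netstat -I <if>` (or watch `systat ifstat` live) and plug the real numbers in.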

