Re: PF problems with many connections.
Should I file a bug report if I'm sure this is a PF problem? Could you recommend another network performance test tool that can open _many_ connections at a time? Thank you.
PF problems with many connections.
Hello. I am trying to run OpenBSD under high load and have problems with PF. When there are very many connections to the server, at some point further connections simply fail. I use a simple test application that creates 1000 connections to the server, for 1000 iterations. The maximum iteration count I have observed with PF enabled was 12, but with `pfctl -d` the whole cycle completes successfully (1000). I see the same thing when testing with 'ab' from the apache2 distribution: 'ab -c 100 -n 100' reaches at most 9 iterations with PF enabled and 100 without.

With 'keep state' enabled, connections are closed instantly. With 'keep state' disabled the behaviour is different: at some moment the program waits for a reply, never receives it, and the connection is closed by timeout. I see no problems in the tcpdump output, and no blocked packets appeared on the pflog0 interface (with a 'block log all' rule). I am sure the state limit is not exceeded. I currently have:

set limit states 50
set limit src-nodes 5
set limit frags 32000

and `pfctl -si` shows normal values. The 'antispoof' and 'scrub' options make no difference, and 'set optimization' makes things worse. I see the same behaviour in real use: when there are many connections, at some point they are simply closed. Any help will be appreciated. Many thanks.

P.S. Sorry for my bad English.
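The test application itself did not survive in the archive. A minimal sketch of the kind of burst-connect loop described above (in Python rather than the original source; the host, port, and counts are placeholders) could look like this:

```python
import socket

def burst_connect(host, port, count):
    """Open `count` TCP connections to host:port and return how many succeeded.

    All sockets are held open until the burst finishes, then closed, so the
    firewall sees `count` simultaneous states per iteration.
    """
    socks, ok = [], 0
    for _ in range(count):
        try:
            s = socket.create_connection((host, port), timeout=5)
            socks.append(s)
            ok += 1
        except OSError:
            break  # a refused or timed-out connect ends this iteration early
    for s in socks:
        s.close()
    return ok
```

A driver would call `burst_connect()` for each of the 1000 iterations and report the first iteration where fewer than `count` connections succeed, which is where PF reportedly starts failing connections.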
Re: PF problems with many connections.
2007/7/13, Adriaan [EMAIL PROTECTED]:
> On 7/13/07, TuxR [EMAIL PROTECTED] wrote:
> > Hello. I trying to use OpenBSD under high load and have problems
> > with PF. When there is very many connections to server in some point
> > other connections just failes. [...]
> Study the excellent 3-part series by the OpenBSD developer at
> http://undeadly.org/cgi?action=article&sid=20060927091645&mode=expanded
>
> If, after following his advice, your firewall still does not perform
> adequately, come back here with a posting of:
> 1) dmesg, to see what kind of hardware you are using
> 2) vmstat -i output, to show the interrupt rate of the NICs
>    (using 'systat vmstat' will give you a 'live' view of the interrupt
>    rate and other resources)
> 3) netstat -m output, to see the mbuf stats
> 4) your pf.conf
> Others may have additional suggestions of course ;)
> =Adriaan=

Adriaan, thank you for the reply. I believe this is not a hardware problem; the system is not under high CPU load during the tests. And of course I have read Daniel Hartmeier's excellent articles.

It runs on a Fujitsu Siemens PRIMERGY RX200 1U server, 1 GB RAM, 2x Intel Xeon 3.00 GHz (but for now we are using a non-SMP kernel).

# dmesg
OpenBSD 4.1 (GENERIC) #1435: Sat Mar 10 19:07:45 MST 2007
    [EMAIL PROTECTED]:/usr/src/sys/arch/i386/compile/GENERIC
cpu0: Intel(R) Xeon(TM) CPU 3.00GHz (GenuineIntel 686-class) 3 GHz
cpu0: FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,SBF,SSE3,MWAIT,DS-CPL,VMX,EST,CNXT-ID,CX16,xTPR
real mem  = 1072652288 (1047512K)
avail mem = 971362304 (948596K)
using 4278 buffers containing 53764096 bytes (52504K) of memory
mainbus0 (root)
bios0 at mainbus0: AT/286+ BIOS, date 10/20/06, BIOS32 rev. 0 @ 0xfd66a, SMBIOS rev. 2.34 @ 0x3fee8000 (67 entries)
bios0: FUJITSU SIEMENS PRIMERGY RX200 S3
pcibios0 at bios0: rev 2.1 @ 0xfd590/0xa70
pcibios0: PCI IRQ Routing Table rev 1.0 @ 0xfde30/432 (25 entries)
pcibios0: PCI Interrupt Router at 000:31:0 (Intel 82371FB ISA rev 0x00)
pcibios0: PCI bus #12 is the last bus
bios0: ROM list: 0xc/0x8000 0xc8000/0x5800 0xe2800/0x1400!
acpi at mainbus0 not configured
ipmi0 at mainbus0: version 1.5 interface KCS iobase 0xca2/2 spacing 1
cpu0 at mainbus0
cpu0: Enhanced SpeedStep disabled by BIOS
pci0 at mainbus0 bus 0: configuration mode 1 (no bios)
pchb0 at pci0 dev 0 function 0 vendor Intel, unknown product 0x25d8 rev 0x92
ppb0 at pci0 dev 2 function 0 Intel 5000 PCIE rev 0x92
pci1 at ppb0 bus 1
ppb1 at pci1 dev 0 function 0 Intel 6321ESB PCIE rev 0x01
pci2 at ppb1 bus 2
ppb2 at pci2 dev 0 function 0 Intel 6321ESB PCIE rev 0x01
pci3 at ppb2 bus 3
ppb3 at pci2 dev 1 function 0 Intel 6321ESB PCIE rev 0x01
pci4 at ppb3 bus 4
ppb4 at pci1 dev 0 function 3 Intel 6321ESB PCIE-PCIX rev 0x01
pci5 at ppb4 bus 5
mpi0 at pci5 dev 5 function 0 Symbios Logic SAS1068 rev 0x01: irq 11
scsibus0 at mpi0: 63 targets
sd0 at scsibus0 targ 0 lun 0: ATA, HITACHI HDS7225S, A6DA SCSI3 0/direct fixed
sd0: 238471MB, 238472 cyl, 16 head, 127 sec, 512 bytes/sec, 488390625 sec total
sd1 at scsibus0 targ 1 lun 0: ATA, HITACHI HDS7225S, A6DA SCSI3 0/direct fixed
sd1: 238471MB, 238472 cyl, 16 head, 127 sec, 512 bytes/sec, 488390625 sec total
ppb5 at pci0 dev 3 function 0 Intel 5000 PCIE rev 0x92
pci6 at ppb5 bus 6
ppb6 at pci0 dev 4 function 0 Intel 5000 PCIE rev 0x92
ppb7 at pci7 dev 0 function 0 ServerWorks PCIE-PCIX rev 0xb5
pci8 at ppb7 bus 8
bge0 at pci8 dev 4 function 0 Broadcom BCM5715 rev 0xa3, BCM5715 A3 (0x9003): irq 11, address 00:0a:e4:82:11:60
brgphy0 at bge0 phy 1: BCM5714 10/100/1000baseT PHY, rev. 0
bge1 at pci8 dev 4 function 1 Broadcom BCM5715 rev 0xa3, BCM5715 A3 (0x9003): irq 9, address 00:0a:e4:82:11:61
brgphy1 at bge1 phy 1: BCM5714
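Adriaan's checklist can also be gathered in one pass. A small sketch (Python; the command set follows the list above and only produces useful output on the OpenBSD box itself) that collects each diagnostic:

```python
import subprocess

# Diagnostics requested in the thread; pfctl -sr dumps the loaded ruleset
# as a stand-in for posting pf.conf.
DIAG_COMMANDS = {
    "dmesg": ["dmesg"],
    "interrupts": ["vmstat", "-i"],
    "mbufs": ["netstat", "-m"],
    "pf_rules": ["pfctl", "-sr"],
}

def collect(commands):
    """Run each command, capturing stdout (or an error marker on failure)."""
    report = {}
    for name, argv in commands.items():
        try:
            proc = subprocess.run(argv, capture_output=True, text=True, timeout=30)
            report[name] = proc.stdout
        except (OSError, subprocess.TimeoutExpired) as exc:
            report[name] = "<failed: %s>" % exc
    return report
```

The resulting dict can be written to a file and attached to a follow-up post in one go, which avoids the back-and-forth of being asked for each output separately.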
Re: PF problems with many connections.
2007/7/13, Stuart Henderson [EMAIL PROTECTED]:
> > pass log quick on $int_if proto tcp from $me to 10.10.10.10 port 80
> >      ^^^
> Is it any better without logging?
>
> > And `pfctl -si` have normal values.
>
> It's better to include the output. Also sysctl net.inet.ip.ifq.

# pfctl -si
Status: Enabled for 14 days 23:22:42           Debug: Urgent

State Table                          Total             Rate
  current entries                       34
  searches                        29928507           23.1/s
  inserts                            96032            0.1/s
  removals                           95998            0.1/s
Counters
  match                             700491            0.5/s
  bad-offset                             0            0.0/s
  fragment                               0            0.0/s
  short                                  0            0.0/s
  normalize                          10254            0.0/s
  memory                                 0            0.0/s
  bad-timestamp                          0            0.0/s
  congestion                             0            0.0/s
  ip-option                              0            0.0/s
  proto-cksum                           69            0.0/s
  state-mismatch                       200            0.0/s
  state-insert                           0            0.0/s
  state-limit                            0            0.0/s
  src-limit                              0            0.0/s
  synproxy                               0            0.0/s

# sysctl net.inet.ip.ifq
net.inet.ip.ifq.len=0
net.inet.ip.ifq.maxlen=50
net.inet.ip.ifq.drops=0

It may also be important: just before a connection fails we can see the following state:

# pfctl -ss
all tcp 10.10.10.101:8427 -> 10.10.10.10:80       SYN_SENT:CLOSED

I don't see any difference with or without the 'log' option.
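When comparing a run with PF enabled against one without, it helps to diff the counters programmatically rather than by eye. A rough sketch that pulls the numeric counters out of `pfctl -si`-style text like the output above (assuming the whitespace-separated name/value/rate layout shown there):

```python
import re

def parse_pf_counters(text):
    """Extract 'name  value  rate/s' counter lines from pfctl -si output.

    Lines without a trailing rate column (e.g. 'current entries  34')
    are intentionally skipped.
    """
    counters = {}
    pat = re.compile(r"^\s*([a-z-]+(?:\s[a-z-]+)*)\s+(\d+)\s+[\d.]+/s\s*$")
    for line in text.splitlines():
        m = pat.match(line)
        if m:
            counters[m.group(1)] = int(m.group(2))
    return counters
```

Snapshotting the dict before and after a test run makes it obvious which counter (here, most interestingly `state-mismatch`) grows while connections are failing.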
Re: PF problems with many connections.
> pass log quick on $int_if proto tcp from $me to 10.10.10.10 port 80
>      ^^^

Is it any better without logging?

> And `pfctl -si` have normal values.

It's better to include the output. Also sysctl net.inet.ip.ifq.
Re: PF problems with many connections.
On 7/13/07, TuxR [EMAIL PROTECTED] wrote:
> Hello. I trying to use OpenBSD under high load and have problems with
> PF. When there is very many connections to server in some point other
> connections just failes. [...]

Study the excellent 3-part series by the OpenBSD developer at
http://undeadly.org/cgi?action=article&sid=20060927091645&mode=expanded

If, after following his advice, your firewall still does not perform adequately, come back here with a posting of:
1) dmesg, to see what kind of hardware you are using
2) vmstat -i output, to show the interrupt rate of the NICs
   (using 'systat vmstat' will give you a 'live' view of the interrupt rate and other resources)
3) netstat -m output, to see the mbuf stats
4) your pf.conf
Others may have additional suggestions of course ;)
=Adriaan=