Re: Packet rate limiter
Nope. You're stuck with good old bps limits via ipfw, or bps and connection-rate limits (established connections per second) with pf. The other approach would be to use Cisco NetFlow export data from a router being polled, then limit the offending traffic with one of the methods mentioned above... or just place pps limits on the router itself. (A rough sketch of the ipfw and pf methods follows at the end of this message.)

Jan Sebosik wrote:

Hi, is there any way to limit the packet-per-second (PPS) rate to a specified IP (or group of IPs)? Linux can achieve this via iptables. I've searched a lot of the web, but found nothing interesting (for PF, IPFilter, or IPFW).

Best regards
---
Jan Sebosik, Slovakia
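For concreteness, here is a minimal sketch of the two bps-style methods I mean; the addresses and numbers are placeholders, not anything from Jan's setup, and max-src-conn-rate assumes the pf that ships with 6.x:

  # ipfw + dummynet: cap traffic to one host at 1 Mbit/s (bandwidth, not pps)
  ipfw pipe 1 config bw 1Mbit/s
  ipfw add 100 pipe 1 ip from any to 203.0.113.5

  # pf: limit the rate of new TCP connections per source
  # (100 new states per 10 seconds, per source address)
  pass in proto tcp from any to 203.0.113.5 flags S/SA keep state \
      (max-src-conn-rate 100/10)

Neither of these is a true per-packet pps limit, which is exactly the gap you're running into.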
6.x, 4.x ipfw/dummynet pf/altq - network performance issues
So, this may be the wrong list to post to, but it seemed the most appropriate. If someone could suggest a better location to move or cross-post to, let me know.

I've been running some tests using FreeBSD to filter and rate-limit traffic. My first thought was to go to the latest stable release, which was 6.1 at the time. I've since done the same tests under 6.2 and haven't seen any difference. I later migrated to running 4.11 to get away from these issues, but have discovered others. I've tested on an AMD 3200+ system with dual Intel 1000-series NICs, an AMD Opteron 165 with the same, and a Xeon 2.8 with the same. I've used both the stock and Intel drivers.

6.x: Normal traffic isn't a problem. The second you get into the realm of abusive traffic, such as DoS/DDoS UDP floods (over 100 Mbit/s), the machine falls over. Little packets with IP lengths of 28-29 bytes (28 bytes being just the IP and UDP headers with no payload) seem to do the most damage. I've tried playing with various sysctl values and have seen no difference at all. By "falls over" I mean it stops sending all traffic in any direction. TCP SYN packets have the same effect, though not quite as rapidly (200-230 Mbit/s). I then tried moving filtering off to a transparent bridge. This improved the situation somewhat, but an extra 30-40 Mbit/s of UDP data and it would ultimately crumble. Overall the machine could move between 300k and 600k pps before becoming crippled, depending on packet length, protocol, and any flags. Without a specific pf or ipfw rule to deal with a packet the box would fall over; with specific block rules it would manage an extra 30-40 Mbit/s and then fall over.

4.11: Again, normal traffic isn't a problem. When routing and filtering on the same system, some of the problems found in 6.x are still apparent, but to a lesser degree. Splitting the task into a transparent filtering bridge with a separate routing box appears to clear it up entirely. UDP floods are handled much better - with an ipfw block rule for the packet type, the machine responds as if there were no flood at all (until total bandwidth saturation or the pps limits of the hardware, which in this case was around 950 Mbit/s). TCP SYN attacks are also handled better; again, a block rule makes it seem as if there were no attack at all. The system also appears to be able to move 800-900k pps of any one protocol at a time. However, the second you try to queue the abusive traffic, the machine falls over. Inbound floods appear to cause ALL inbound traffic to lag horrifically (while rate limiting/piping), which inherently causes a lot of outbound loss due to broken TCP. Now, I'm not sure if this is dummynet being horribly inefficient, or if there's some sysctl value for inbound traffic that I'm missing.

I suppose my concerns are two-fold: why is 6.x collapsing under traffic that 4.11 could easily block and run merrily along with, and is there a queueing mechanism in place that doesn't tie up the box so much on inbound flows that it ignores all other relevant traffic?

(As a note, all tests were done with device polling enabled. Without it, systems fall over pretty quickly. I also tried tests using 3Com cards and had the same results.)
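For reference, the block-vs-queue distinction above looks like this in ipfw; the target address and numbers are placeholders, not my actual config:

  # block rule: 4.11 shrugs the flood off with something like this
  ipfw add 100 deny udp from any to 203.0.113.5 in

  # pipe rule: rate limiting the same flood through dummynet instead
  # is what drags ALL inbound traffic down with it
  ipfw pipe 1 config bw 10Mbit/s queue 50
  ipfw add 200 pipe 1 udp from any to 203.0.113.5 in

The first rule drops the packets at filter time; the second has to queue and schedule every flood packet, which is where the box seems to drown.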
Re: 6.x, 4.x ipfw/dummynet pf/altq - network performance issues
I've actually already done everything you've suggested, with little or no impact at all. One point where we got different results is ADAPTIVE_GIANT: I actually noticed a drop of about 50 kpps of throughput when disabling it.

Mike Tancsa wrote:

On Mon, 05 Feb 2007 14:03:41 -0800, in sentex.lists.freebsd.questions you wrote:

  I suppose my concerns are two-fold. Why is 6.x collapsing under traffic
  that 4.11 could easily block and run merrily along with, and is there a
  queueing mechanism in place that doesn't tie up the box so much on
  inbound flows that it ignores all other relevant traffic? (As a note,
  all tests were done with device polling enabled. Without it, systems
  fall over pretty quickly. I also tried tests using 3Com cards and had
  the same results.)

On the 6.x box, try adding to /etc/sysctl.conf:

  kern.polling.enable=1
  net.inet.ip.fastforwarding=1
  kern.polling.idle_poll=1
  kern.random.sys.harvest.ethernet=0

and in /boot/loader.conf, add:

  kern.hz=2000

Also, removing

  options ADAPTIVE_GIANT # Giant mutex is adaptive.

from the kernel helps a bit as well. With kern.polling.idle_poll=1 your load average will be messed up, but it should help performance a bit. As for firewall rules, things really seem to fall down performance-wise compared to RELENG_4; I haven't found a way to improve that. However, on the plus side, an extra core does seem to help a bit with keeping the box responsive. For NICs, stay with em or bge for now in RELENG_6. I have some misc test results at http://www.tancsa.com/blast.html

---Mike

Mike Tancsa, Sentex Communications http://www.sentex.net
Providing Internet Access since 1994
[EMAIL PROTECTED] (http://www.tancsa.com)
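A side note for anyone trying this: the four sysctl.conf entries can also be flipped at runtime with sysctl(8), assuming DEVICE_POLLING is compiled into the kernel; kern.hz is a boot-time tunable, so that one only takes effect from loader.conf after a reboot.

  # apply on a live 6.x box, no reboot needed for these four
  sysctl kern.polling.enable=1
  sysctl net.inet.ip.fastforwarding=1
  sysctl kern.polling.idle_poll=1
  sysctl kern.random.sys.harvest.ethernet=0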
little lost with my netstat -m output
I have a couple of issues that I'm somewhat concerned about based on some netstat output. I have a few boxes running 4.11-STABLE that give this:

FreeBSD 4.11-STABLE #0: Wed May 4 09:49:52 PDT 2005 (i386)

  # netstat -m
  netstat: sysctl: retrieving mbstat: Cannot allocate memory

And then two boxes running 5.4-STABLE that give this:

FreeBSD 5.4-STABLE #0: Wed May 18 11:51:30 PDT 2005 (amd64)

  # netstat -m
  1358787 mbufs in use
  18446744073709476645/32768 mbuf clusters in use (current/max)
  0/0/0 sfbufs in use (current/peak/max)
  189754 KBytes allocated to network
  0 requests for sfbufs denied
  0 requests for sfbufs delayed
  4735 requests for I/O initiated by sendfile
  300 calls to protocol drain routines

FreeBSD 5.4-STABLE #0: Wed May 18 11:51:18 PDT 2005 (amd64)

  # netstat -m
  740238 mbufs in use
  131702/32768 mbuf clusters in use (current/max)
  0/0/0 sfbufs in use (current/peak/max)
  448463 KBytes allocated to network
  0 requests for sfbufs denied
  0 requests for sfbufs delayed
  10981 requests for I/O initiated by sendfile
  492 calls to protocol drain routines

Now, the 4.11 boxes failing to report and instead giving me a memory error are somewhat troubling, but what's more troubling is the 5.4 boxes showing current usage beyond the max limit, not to mention consuming between 190 and 450 MB for network traffic. The output just seems out of touch. While all the machines see frequent attacks, none is in progress at the moment, and the machines rarely suffer anything more than lag due to network saturation. If this spike was the result of an attack, why were the buffers never released? I'm generally confused by what I'm seeing here - what gives?
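Out of curiosity I checked the first box's cluster count against 2^64, since it looked suspiciously close:

  # the reported "in use" value is 2^64 minus a small number
  $ echo '2^64 - 18446744073709476645' | bc
  74971

So it reads like an unsigned 64-bit counter that was decremented 74,971 times past zero - i.e. an accounting problem (clusters freed more times than they were counted as allocated) rather than an actual 18 quintillion clusters in use. That's my reading of the number, not something confirmed on the list.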