Re: [c-nsp] 6500 netflow export and the switch cpu
Hello, we are using optical splitters and copper media converters to get the
desired traffic into probes built on Linux with Luca Deri's PF_RING
optimization and fprobe. With today's server performance it's definitely not
a bad solution. For higher packet rates (let's say over 500 kpps with common
server hardware) it's possible to use hardware-accelerated probes (1GE/10GE
interfaces available) - try: http://www.invea.cz/main/home/ They also offer
tuned multi-port software-based probes for a very reasonable price.

Regards
Lubos

Ivan Gasparik wrote on Fri 12. 09. 2008 at 21:32 +0200:
> It depends on the amount of traffic you are planning to analyze. In my
> experience from an ISP environment, a 3BXL with 256000 netflow entries can
> handle about 3Gb/s of average internet traffic without overrunning the
> netflow cache. But you have to use really aggressive timers to force flows
> to time out very quickly and make space for newly created flow entries.
> Big guys would say, move to a CRS with 1024000 netflow entries per slot
> and more powerful CPUs ;-)
>
> I plan to try the way mentioned by you - mirroring traffic to some fprobe
> server. Is anybody here running an external server for netflow analysis?
> I would be interested in your experiences, especially what hardware is
> needed for processing 10Gb/s of traffic?
>
> Ivan

--
Lubomir Pinkava, CTO
CASABLANCA INT
Vinohradska 184 / Praha 3 / PSC 130 52
Telefon: +420 270 000 218
Email: [EMAIL PROTECTED] / www.casablanca.cz

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
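[Editor's note] For readers new to software probes: at its core a probe like
fprobe turns mirrored packets into flow records by aggregating on the 5-tuple
and expiring entries on idle/active timeouts. A minimal sketch of that idea
(class and field names and timer values are illustrative, not fprobe's actual
internals):

```python
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src dst proto sport dport")

class FlowCache:
    """Toy flow cache: aggregate packets into flows, expire on timeouts."""

    def __init__(self, idle_timeout=15, active_timeout=60):
        self.idle_timeout = idle_timeout      # expire if no packet for this long
        self.active_timeout = active_timeout  # force-expire long-lived flows
        self.flows = {}     # FlowKey -> [first_seen, last_seen, packets, bytes]
        self.exported = []  # records a real probe would emit as NetFlow

    def packet(self, key, length, now):
        f = self.flows.get(key)
        if f is None:
            self.flows[key] = [now, now, 1, length]
        else:
            f[1] = now      # update last_seen
            f[2] += 1       # packet count
            f[3] += length  # byte count

    def expire(self, now):
        for key, (first, last, pkts, byts) in list(self.flows.items()):
            if now - last >= self.idle_timeout or now - first >= self.active_timeout:
                self.exported.append((key, pkts, byts))
                del self.flows[key]

cache = FlowCache()
k = FlowKey("10.0.0.1", "10.0.0.2", 6, 40001, 80)
cache.packet(k, 1500, now=0)
cache.packet(k, 40, now=5)
cache.expire(now=30)  # idle for 25 s > 15 s -> exported as (k, 2, 1540)
print(cache.exported)
```

The "really aggressive timers" mentioned in the thread correspond to
shrinking idle_timeout/active_timeout: flows leave the cache sooner, so the
table holds fewer concurrent entries, at the cost of more export records.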
Re: [c-nsp] 6500 netflow export and the switch cpu
On Thursday 11 September 2008, [EMAIL PROTECTED] wrote:
>> You can enable sampling if it is not enabled. It should help some.
>
> Highly unlikely. Sampling on the 6500 is performed entirely in software,
> *after* the full set of flows has been received.

You have to distinguish between the CPU load seen as interrupt load (caused
mostly by walking through the TCAM, collecting statistics and storing them
in the netflow cache) and the CPU load caused by the NDE process (packet
generation). By enabling netflow sampling you can decrease the second part
of the load - the CPU will generate significantly fewer export packets.

Ivan
Re: [c-nsp] 6500 netflow export and the switch cpu
>> Highly unlikely. Sampling on the 6500 is performed entirely in software,
>> *after* the full set of flows has been received.
>
> You have to distinguish between the CPU load seen as interrupt load
> (caused mostly by walking through the TCAM, collecting statistics and
> storing them in the netflow cache) and the CPU load caused by the NDE
> process (packet generation). By enabling netflow sampling you can decrease
> the second part of the load - the CPU will generate significantly fewer
> export packets.

Good point. And it doesn't help, of course, that the CPU in question is
severely underpowered...

Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]
Re: [c-nsp] 6500 netflow export and the switch cpu
It depends on the amount of traffic you are planning to analyze. In my
experience from an ISP environment, a 3BXL with 256000 netflow entries can
handle about 3Gb/s of average internet traffic without overrunning the
netflow cache. But you have to use really aggressive timers to force flows
to time out very quickly and make space for newly created flow entries. Big
guys would say, move to a CRS with 1024000 netflow entries per slot and more
powerful CPUs ;-)

I plan to try the way mentioned by you - mirroring traffic to some fprobe
server. Is anybody here running an external server for netflow analysis? I
would be interested in your experiences, especially what hardware is needed
for processing 10Gb/s of traffic?

Ivan

On Fri, Sep 12, 2008 at 01:09:05AM +0800, cc loo wrote:
> I was wondering if mirroring the traffic into a server with netflow probes
> (such as fprobe) to help relieve the stress on the router's CPU would be a
> wise move? Is this move common in ISP environments, or do most of the big
> guys just leave the exporting from routers to collectors?
>
> On Thu, Sep 11, 2008 at 11:50 PM, Jon Lewis [EMAIL PROTECTED] wrote:
>> I've got a 6509 with sup720-3bxl running 12.2(18)SXD7b. It's forwarding
>> several hundred mbit/s across a number of gig ports on WS-X6416-GBIC
>> cards. I've noticed it's gotten very slow at certain things (like write
>> mem), and when looking at the switch (remote command switch show proc
>> cpu), I was kind of shocked to see 85% CPU utilization or higher across
>> all time avgs. The biggest CPU-eating process seems to be netflow export:
>>
>>   223 2563111984 126342970  20287 38.27% 42.39% 42.03%  0 NDE - IPV4
>>
>> Other than disabling export or moving traffic off this device, are there
>> things I can do to tone this down? The couple hundred mbit/s this switch
>> is forwarding is supposed to be no big deal for this platform.
--
Jon Lewis                  | I route
Senior Network Engineer    | therefore you are
Atlantic Net               |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
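[Editor's note] Ivan's sizing claim above is easy to sanity-check with
Little's law: steady-state cache occupancy is roughly new flows per second
times the mean flow lifetime in the cache. A back-of-envelope sketch (the
10k flows/s figure and the lifetimes are illustrative assumptions, not
measurements from the thread):

```python
def steady_state_entries(flows_per_second, mean_lifetime_s):
    # Little's law: occupancy = arrival rate * time in system
    return flows_per_second * mean_lifetime_s

TCAM_ENTRIES = 256_000  # netflow table size quoted for the 3BXL

# Suppose ~3 Gb/s of mixed internet traffic creates ~10k new flows/s
# (an assumed figure).  Relaxed aging overflows the table, while
# aggressive aging keeps it under the limit - which is why the thread
# stresses aggressive timers.
relaxed = steady_state_entries(10_000, 60)     # 600,000 entries: overflow
aggressive = steady_state_entries(10_000, 20)  # 200,000 entries: fits
print(relaxed > TCAM_ENTRIES, aggressive < TCAM_ENTRIES)
```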
Re: [c-nsp] 6500 netflow export and the switch cpu
On Fri, Sep 12, 2008 at 09:32:02PM +0200, Ivan Gasparik wrote:
> I plan to try the way mentioned by you - mirroring traffic to some fprobe
> server. Is anybody here running an external server for netflow analysis?
> I would be interested in your experiences, especially what hardware is
> needed for processing 10Gb/s of traffic?

I haven't done anything up to 10G, but I've mirrored transit interfaces to
servers for netflow collection as a demo. I'd say it was around 500M of live
traffic. I was using pmacctd to generate netflow v9 records with src/dest
IP, proto, ports, and src/dest AS. A quad-core 2GHz Xeon could just about
keep up with a 500M mirror session per CPU, running one instance per mirror
session.

--
Ross Vandegrift
[EMAIL PROTECTED]

"The good Christian should beware of mathematicians, and all those who make
empty prophecies. The danger already exists that the mathematicians have
made a covenant with the devil to darken the spirit and to confine man in
the bonds of Hell."
	--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37
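[Editor's note] Scaling Ross's data point naively to Ivan's 10 Gb/s question
(assuming linear scaling across cores and mirror sessions, which software
packet capture rarely achieves in practice), ~500 Mb/s per instance per core
suggests on the order of twenty instances for a full 10G mirror:

```python
import math

def instances_needed(link_bps, per_instance_bps=500e6):
    """Naive linear estimate; real capture scaling is usually worse."""
    return math.ceil(link_bps / per_instance_bps)

print(instances_needed(10e9))  # 20 instances at the quoted ~500 Mb/s each
```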
Re: [c-nsp] 6500 netflow export and the switch cpu
On Fri, 12 Sep 2008, Ben Steele wrote:
>> It looks like the fix was to enable flow-sampling.
>
> Out of curiosity, what are you using your netflow for? I'm asking because
> sampling obviously isn't ideal when you are trying to get completely
> accurate data for accounting.

Mostly for abuse tracking/corroboration. For this purpose, sampled should be
good enough in most cases. If I could have full netflow, I'd like it, but it
looks as if we've hit another unanticipated hardware limitation with our
cisco gear.

> It feels a shame using DFC's for a margin of their capacity purely because
> you need the TCAM space to produce netflow.

Kind of like using Sup720-3bxls to handle a few hundred mbit/s of traffic
just to be able to take full routes.

--
Jon Lewis                  | I route
Senior Network Engineer    | therefore you are
Atlantic Net               |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
[c-nsp] 6500 netflow export and the switch cpu
I've got a 6509 with sup720-3bxl running 12.2(18)SXD7b. It's forwarding
several hundred mbit/s across a number of gig ports on WS-X6416-GBIC cards.
I've noticed it's gotten very slow at certain things (like write mem), and
when looking at the switch (remote command switch show proc cpu), I was kind
of shocked to see 85% CPU utilization or higher across all time avgs. The
biggest CPU-eating process seems to be netflow export:

  223 2563111984 126342970  20287 38.27% 42.39% 42.03%  0 NDE - IPV4

Other than disabling export or moving traffic off this device, are there
things I can do to tone this down? The couple hundred mbit/s this switch is
forwarding is supposed to be no big deal for this platform.

--
Jon Lewis                  | I route
Senior Network Engineer    | therefore you are
Atlantic Net               |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
Re: [c-nsp] 6500 netflow export and the switch cpu
> You can enable sampling if it is not enabled. It should help some.

Highly unlikely. Sampling on the 6500 is performed entirely in software,
*after* the full set of flows has been received.

Steinar Haug, Nethelp consulting, [EMAIL PROTECTED]
Re: [c-nsp] 6500 netflow export and the switch cpu
On Thu, 11 Sep 2008, Phil Mayers wrote:
>> current ip flowmask for unicast: if-full
>> current ipv6 flowmask for unicast: null
>
> Do you need the full mask? It includes tcp/udp ports. Dropping to
> destination-source may save you a lot of flows (but obviously lose you a
> lot of info).

I'd really like to keep ip-full. It's quite handy when tracking down what an
IP has been up to (like when trying to verify infection/scanning
complaints).

--
Jon Lewis                  | I route
Senior Network Engineer    | therefore you are
Atlantic Net               |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
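[Editor's note] The trade-off Phil describes can be seen by keying the same
traffic two ways: a full mask creates one entry per 5-tuple, while a
destination-source mask collapses all connections between the same IP pair.
A toy illustration with made-up packets:

```python
# Toy comparison of netflow flow-mask granularity:
# "full" keys on the 5-tuple, "destination-source" keys only on the IP pair.
packets = [
    ("10.0.0.1", "192.0.2.1", 6, 40001, 80),
    ("10.0.0.1", "192.0.2.1", 6, 40002, 80),
    ("10.0.0.1", "192.0.2.1", 6, 40003, 443),
    ("10.0.0.2", "192.0.2.1", 6, 40001, 80),
]

full_mask = {(s, d, proto, sp, dp) for (s, d, proto, sp, dp) in packets}
dest_source = {(s, d) for (s, d, _, _, _) in packets}

print(len(full_mask), len(dest_source))  # 4 flows vs 2 flows
```

Fewer entries means less TCAM pressure, but the port information Jon wants
for abuse tracking is exactly what gets discarded.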
Re: [c-nsp] 6500 netflow export and the switch cpu
I wonder if it is not something in the config, rather than the traffic. I
collect netflow from an old 6509 with upwards of 800M out one interface and
I haven't seen any problems. Using if-full too. Granted, a lot of our flows
are data-set transfers though. (I can't get the IOS version right now as it
is managed by a different group - but it is probably fairly vanilla.)

The number of flows was mentioned - is there a lot of VoIP going through
your switch, or something like that? What happens if you reduce the aging
values? The 'long' one looks high. It just seems that with the load you are
quoting, you should be able to get everything...

Joe

On 09/11/2008 01:52 PM, Jon Lewis wrote:
> On Thu, 11 Sep 2008, Phil Mayers wrote:
>>> current ip flowmask for unicast: if-full
>>> current ipv6 flowmask for unicast: null
>>
>> Do you need the full mask? It includes tcp/udp ports. Dropping to
>> destination-source may save you a lot of flows (but obviously lose you a
>> lot of info).
>
> I'd really like to keep ip-full. It's quite handy when tracking down what
> an IP has been up to (like when trying to verify infection/scanning
> complaints).
Re: [c-nsp] 6500 netflow export and the switch cpu
On Thu, 11 Sep 2008, Jon Lewis wrote:
> On Thu, 11 Sep 2008, Phil Mayers wrote:
>> What do the following say:
>>   sh mls netflow table-contention detailed
>
> Earl in Module 5
> Detailed Netflow CAM (TCAM and ICAM) Utilization
>   TCAM Utilization          : 100%
>   ICAM Utilization          : 7%
>   Netflow TCAM count        : 262026
>   Netflow ICAM count        : 10
>   Netflow Creation Failures : 456680
>   Netflow CAM aliases       : 0
>
> I guess I need to get more aggressive on the flow aging. I've been using
>   mls aging fast time 8 threshold 3
>   mls aging long 480
>   mls aging normal 32

It looks like the fix was to enable flow-sampling.

  mls sampling time-based 64

has our cpu usage back down to about nothing and tcam usage down around 50%.

--
Jon Lewis                  | I route
Senior Network Engineer    | therefore you are
Atlantic Net               |
_ http://www.lewis.org/~jlewis/pgp for PGP public key_
Re: [c-nsp] 6500 netflow export and the switch cpu
> It looks like the fix was to enable flow-sampling.

Out of curiosity, what are you using your netflow for? I'm asking because
sampling obviously isn't ideal when you are trying to get completely
accurate data for accounting.

I am interested in hearing people's opinions on their methods of accounting
when traffic is well beyond the TCAM limit (and you're already on DFC's) and
you are in an all-Ethernet switched world (i.e. not broadband PPP RADIUS
accounting). Do you try to distribute the netflow onto multiple boxes closer
to the edge, or do you opt for another method? There is the easy option of
byte-counting switchports via SNMP, but if people want statistics on who's
been where (possible legal reasons) or where the majority of traffic is
coming from, then that is not enough. Maybe a mix of sampled netflow and
switchport byte counting?

It feels a shame using DFC's for a margin of their capacity purely because
you need the TCAM space to produce netflow.

Ben
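[Editor's note] For accounting with sampled netflow, counters are scaled
back up by the sampling rate; that is reasonable in aggregate for heavy
hitters but very noisy for small flows, which is the inaccuracy Ben points
at. A toy illustration using the 1:64 rate from the "mls sampling time-based
64" fix mentioned in this thread (the byte figures are made up):

```python
SAMPLING_RATE = 64  # 1-in-64, matching "mls sampling time-based 64"

def estimate_total_bytes(sampled_bytes, rate=SAMPLING_RATE):
    # Standard inflation: assume the sampled share is representative.
    return sampled_bytes * rate

# A heavy hitter: ~1.5 MB observed in samples -> ~100 MB estimated.
print(estimate_total_bytes(1_562_500))  # 100000000

# A tiny flow that happened to land one 1500-byte packet in a sample
# inflates to 96 kB even if it really sent only a few packets - small
# talkers can be badly over- or under-counted.
print(estimate_total_bytes(1500))  # 96000
```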