Re: [c-nsp] ASR A9K-8T-L certain ports limited to 8 Gbps.
Hi,

On Wed, Nov 20, 2013 at 10:31 AM, Adam Vitkovsky adam.vitkov...@swan.sk wrote:
> On Wednesday, November 20, 2013 01:15:36 AM McDonald Richards wrote:
>> What framing mode are you running and what is the underlying
>> transmission? I have seen this before on 10G circuits running in
>> wanphy mode and the only fix was to get better transmission
>> (ie. not an STM-64c) and run lanphy :)

Hi,

Thanks, but it isn't WAN-PHY:

    RP/0/RSP1/CPU0:ams-5345#show controllers wanphy 0/0/0/2 all
    Wed Nov 20 10:58:43.952 CET
    Interface: wanphy0/0/0/2
    Configuration Mode: LAN Mode
    Operational Mode: LAN Mode

I'm starting to think one of the A9K-8T-L cards is messing about with the fabric signalling, or that we're hitting some funky QoS bug somewhere.

> Interesting that only a couple of the 3 upstream ports have this issue.
> Are these ports in a bundle?

No, none are in a bundle. There are cases where an ECMP egress path is fine on one interface and limited to 8 Gbps on another.

> Or maybe some pushback is going on that's not related to the box.

I don't think so; the paths are eBGP upstreams, but also iBGP peering boxes. One interface is affected and another is not, towards the same egress path.

Thanks if anyone has other clues where to look.

Bas

___
cisco-nsp mailing list
cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] ASR A9K-8T-L certain ports limited to 8 Gbps.
Dear All,

We have a stubborn problem with one of our ASR9Ks. A couple of upstream ports are limited to 8 Gbps egress traffic, while others are not affected.

It is a 9010 chassis with two RSP-4Gs and eight A9K-8T-L cards. On every card the first three ports are used for upstream (egress traffic), and the following five ports are configured for downstream (ingress). There is no specific QoS configured on either side. The RSPs are running 4.2.3.

We've swapped both RSPs and several linecards, but nothing changes the behaviour.

rrd graph here: http://imgur.com/Td4KSGB

Has anyone had experience with a situation like this?

Thanks,
Bas
[c-nsp] smartnet fineprint (or hardware stock in case of not so common hardware (100GE))
Hi All,

We are in the process of ordering our first 100GE router interface. It will probably be the 1X100GBE= with FP140 for the CRS-3.

Normally we keep our own cold spare stock, but with 100GE we will not be able to do so straight away. We're thinking of getting 24x7x4 SmartNet for the linecard.

As 100GE hardware will not be widely deployed anytime soon, Cisco's cold spare stock will probably not be abundant. After 15 years in this industry I know that sometimes everything that can go wrong will go wrong. So what happens if a couple of other 100GE customers have hardware issues before we do, and the stock is depleted? Will Cisco fly in linecards from other countries (we're in Europe, b.t.w.) or pull them out of labs? Or will they simply say mea culpa and better luck next time?

Can anyone point me to Cisco's actual fine print in relation to this? (I couldn't find the right Google question.) And more importantly, does anyone have real-life experience with a situation like this?

Thanks in advance,
Bas
Re: [c-nsp] ip helper-address redundancy on Catalyst 6500?
On Mon, Nov 8, 2010 at 9:50 PM, Alan Buxey a.l.m.bu...@lboro.ac.uk wrote:
> in this case 12.2(15)T doesn't map at all to any 65k SX(F|H|I) releases.
> it does sound intriguing though - count me in as +1 for wanting to see
> this feature in SXI at least :-)

I don't really want to turn this thread into a "me too" thread. But I know Cisco is reading c-nsp, so one more vote for this option on 6500/SXI.

Bas
Re: [c-nsp] Nice EEM applet to protect against certain DDoS situations (sup720)
On Mon, Aug 9, 2010 at 3:25 AM, Dobbins, Roland rdobb...@arbor.net wrote:
> On Aug 9, 2010, at 2:47 AM, bas wrote:
>> And now imagine if I were a bad guy that has control over 50 compromised
>> servers in networks that do not filter outbound spoofed traffic.
>
> We don't have to imagine it; this is a quite common scenario, except that
> the attacker has 5K or 50K or 500K bots in his particular botnet, heh.
Re: [c-nsp] Nice EEM applet to protect against certain DDoS situations (sup720)
Hi,

On Sun, Aug 8, 2010 at 10:58 AM, Dobbins, Roland rdobb...@arbor.net wrote:
> On Aug 8, 2010, at 3:25 PM, bas wrote:
>> ACLs are manual and not dynamic when we need them. Also ACLs do not
>> scale with many (spoofed) source addresses.
>
> Understood, but they don't complete the DDoS for the attacker, unless one
> is forced to resort to ACLing off the target. Even then, doing so has no
> control-plane consequences on hardware-based platforms, so long as ACLs
> are kept within platform-specific scaling limits.

S/RTBH does prevent high CPU, but there are two deal-breakers.

1. When source addresses are dropped at the network edge and the attacker uses spoofed source addresses, there will be a lot of collateral damage when traffic of real users, whose IP addresses are spoofed, is dropped.

> This is where the concept of partial service recovery comes into play;
> the idea is that being up for any percentage of legitimate users is a
> 100% improvement over being completely down for all users, as is the
> case when the defender takes the target offline and completes the DDoS
> for the attacker.

I ran the tool synk4 for 100 seconds against a test target. It generated 8,000,000 SYN packets with randomly generated spoofed source addresses. Out of those 8,000,000, only 50 source addresses occurred twice and only 9 occurred three times. So both ACLs and S/RTBH would have had zero effect. If I had kept the tool running for an hour, it would probably have caused a network with S/RTBH to drop packets from 200,000,000 Internet addresses.

And now imagine if I were a bad guy with control over 50 compromised servers in networks that do not filter outbound spoofed traffic. Nearly 25% (estimated) of all ASNs allow spoofed traffic, so it is a real threat. (ref. http://spoofer.csail.mit.edu/summary.php)

> Anyways, if folks haven't done so already, they may wish to consider
> implementing S/RTBH so that they have it in their toolkits when needed.
> It's a useful way to leverage existing hardware and gain some more
> functionality during times of stress.

I agree completely, however imho it
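[Editorially, the repeat-counting argument above can be sketched in a few lines. This is an illustrative simulation, not the synk4 tool itself: it draws source addresses uniformly from the 32-bit space and tallies how many repeat, which is the statistic that decides whether per-source blocking (ACLs or S/RTBH) can bite.]

```python
# Sketch: count how often randomly spoofed /32 source addresses repeat.
# If almost every address appears only once, blocking individual sources
# removes a negligible share of the flood.
import random
from collections import Counter

def spoof_sample(n, seed=1):
    """Generate n uniform random 32-bit source addresses (illustrative)."""
    rng = random.Random(seed)
    return [rng.getrandbits(32) for _ in range(n)]

def repeat_stats(addrs):
    counts = Counter(addrs)
    unique = len(counts)
    repeated = sum(1 for c in counts.values() if c >= 2)
    return unique, repeated

unique, repeated = repeat_stats(spoof_sample(1_000_000))
print(unique, "unique sources,", repeated, "seen more than once")
```

With a uniform 32-bit source space, repeats are rare even over a million packets, which matches the observation above that a source-based blackhole list grows roughly as fast as the packet count.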
[c-nsp] Nice EEM applet to protect against certain DDoS situations (sup720)
Hi All,

We've had problems with certain DDoS situations for years now. For more than half a year we've had the applets below configured on our 6500s, and they have proven to work great. So I thought I'd share them on the list.

One of our bigger problems with our sup720s was when a customer would receive a TCP SYN flood from lots of sources, and then decided to remove the IP address from the server (or shut it down). After the ARP entry expires from the arp cache, every SYN packet causes a glean action on the CPU, thereby overloading it and causing BGP and OSPF flaps.

The standard solution is to use "mls rate-limit unicast cef glean y xx", but we were seeing issues with static values. Either the values would be too strict and drop real ARP gleans (on boxes with lots of hosts), or too loose, thereby still overloading the CPU in case of a DDoS.

Then I found a nice example on wiki.nil.com and modified it for our purposes. (Thanks Ivan!)

The following two applets configure strict glean values during high-CPU situations, and restore the normal glean values when CPU load drops. They also send an email to a certain address so you know the applet was executed.
event manager environment _mail_smtp mail.yourdomain.com
event manager environment _mail_domain yourdomain.net
event manager environment _mail_rcpt n...@yourdomain.com
event manager session cli username localuser
!
event manager applet restore_glean
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.4.1 get-type exact entry-op le entry-val 35 exit-op ge exit-val 60 poll-interval 60
 action 101 syslog priority notifications msg Restoring high glean value
 action 102 cli command enable
 action 103 cli command config terminal
 action 104 cli command mls rate-limit unicast cef glean 9 200
 action 105 cli command no mls qos protocol arp police 32000
 action 110 cli command end
!
event manager applet temp_low_glean
 event snmp oid 1.3.6.1.4.1.9.9.109.1.1.1.1.4.1 get-type exact entry-op ge entry-val 80 exit-op le exit-val 65 poll-interval 60
 action 101 syslog priority notifications msg Setting low glean due to possible DDoS.
 action 102 cli command enable
 action 103 cli command config t
 action 104 cli command mls rate-limit unicast cef glean 3 60
 action 105 cli command mls qos protocol arp police 32000
 action 110 cli command end
 action 200 cli command sh processes cpu sorted 1min
 action 201 info type routername
 action 202 mail server $_mail_smtp to $_mail_rcpt from $_info_routern...@$_mail_domain subject Prolonged CPU Spikes body $_cli_result

Our boxes do aaa through TACACS and RADIUS. However, we've configured a local user "localuser" so the script can run even if the aaa hosts are unreachable.

We've also configured the global ARP policer, not because of the DDoS described before, but because we've run into high-CPU problems with ARP storms too.

I hope some of you find the above helpful.

Cya,
Bas

p.s. if you've disabled DNS lookups, do not forget to add a static host entry for mail.yourdomain.com
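[The two applets together implement a hysteresis: the strict glean limit is applied when the 1-minute CPU average crosses a high-water mark and only relaxed again once CPU falls well below it, so the config doesn't flap on the boundary. A minimal sketch of that state machine, with the thresholds taken from the applets above:]

```python
# Sketch of the applets' hysteresis: enter "strict" glean at CPU >= 80
# (temp_low_glean entry-val), return to "normal" only at CPU <= 35
# (restore_glean entry-val). In between, keep the current state.
def next_state(state, cpu):
    """state is 'normal' or 'strict'; cpu is the polled 1-minute average."""
    if state == "normal" and cpu >= 80:
        return "strict"
    if state == "strict" and cpu <= 35:
        return "normal"
    return state

# Walk a sample CPU trace through the state machine.
states, s = [], "normal"
for cpu in [20, 85, 70, 40, 30, 50]:
    s = next_state(s, cpu)
    states.append(s)
print(states)  # ['normal', 'strict', 'strict', 'strict', 'normal', 'normal']
```

Note how CPU readings of 70 and 40 keep the box in strict mode: only dropping to 35 or below restores the permissive glean values, which is what prevents oscillation.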
[c-nsp] SXI3 strange issue, Loose mode uRPF jumps to strict by itself
Hi All,

Yesterday we had a strange issue. Our monitoring tool alerted that one of our boxes (SUP720-3BXL - 6506 running SXI3) had become unreachable. When we logged in, everything looked OK: BGP was up, OSPF was up, and there was nothing special in the logging. Still, traffic had dropped to near zero.

With "debug ip cef drop" we immediately saw that traffic was being dropped by the uRPF feature. All upstream interfaces had strict-mode uRPF configured; before the problems started it was loose-mode uRPF. After manually changing them back to loose mode, traffic was restored.

A couple of minutes before the problems started, an engineer had configured a customer-facing interface with strict-mode uRPF. Apparently this configuration change triggered a bug that caused loose mode on the upstream interfaces to be automagically turned into strict mode.

So, hereby a heads-up: if your SXI3 boxes show strange behavior, quickly check uRPF.

Cya,
Bas
Re: [c-nsp] NX-OS - Fabric Path
On Mon, Jul 19, 2010 at 3:08 PM, Manu Chao linux.ya...@gmail.com wrote:
> DRILL... *Will Fabric Path* be based on OTV?

Have you read Cisco's pages on FP?
Re: [c-nsp] Centos upload speed slower on 1000m than 100m over WAN links
Hi,

On Sun, Jun 27, 2010 at 11:20 AM, Paul p...@gtcomm.net wrote:
> Yeah I tried that.. I really think it's a problem with the linux kernel
> and e1000e driver and possibly either limited to that or an
> incompatibility with cisco switch but I doubt that since i get such
> good speeds locally.

We've had a lot of problems with this issue. Transatlantic speeds were faster on FE than on GE, while local speeds were great. It is indeed a bug in the kernel driver. After an upgrade to the latest vanilla kernel the problems are gone. I'm not sure if anyone has created an rpm with the fix.

Bas
Re: [c-nsp] CRS-1 MSC utilization
Hello Oliver,

On Tue, Jun 15, 2010 at 5:00 AM, Oliver Boehmer (oboehmer) oboeh...@cisco.com wrote:
>> Does anyone know if there is a command to view the utilization of a MSC
>> in a CRS? We are using the 8 port 10GE PLIMs which are 2:1 oversubscribed.
> ...
> If you want to look at the forwarding asic utilization, "show controllers
> pse utilization location <loc>" is the command you're looking for..

How should I interpret the output of that command? When I issue it on a single PLIM:

#show controllers pse utilization location 0/0/CPU0

PPE Utilization
Node        Ingress  Egress
0/0/CPU0:   9.2      0.6

From this output I would think there is 10% of capacity in use ingress and 0.6% egress. However with "monitor interface" I see:

Interface      In(bps)       Out(bps)
Te0/0/0/0      3.6G/ 36%    881.3M/  8%
Te0/0/0/1      5.0G/ 50%    862.5M/  8%
Te0/0/0/2    245.0M/  2%      3.1G/ 31%
Te0/0/0/3    278.5M/  2%      3.6G/ 36%
Te0/0/0/4      2.2G/ 22%    643.6M/  6%
Te0/0/0/5      3.6G/ 36%      1.6G/ 16%
Te0/0/0/6      1.1G/ 11%      3.9G/ 39%
Te0/0/0/7      3.5G/ 35%      4.8G/ 48%

Ingress traffic for all interfaces combined is 19.5 Gbit/s and egress is 19.3 Gbit/s. Nearly 20 Gbit/s bidirectional traffic would be 50% of MSC (or 40% of PSE) capacity, right? Or am I looking at it in the wrong way?

Or what else should/could we monitor to prevent loss due to too much traffic on a PLIM?

Thanks,
Bas
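[The arithmetic behind the question can be sketched directly from the quoted "monitor interface" output. A 40 Gbps per-direction MSC capacity is assumed here, as in the post; the per-port figures are copied from the table above.]

```python
# Sketch: sum the per-port rates from "monitor interface" and express
# them against an assumed 40 Gbps per-direction MSC capacity (the 8x10GE
# PLIM being 2:1 oversubscribed). Rates in Gbit/s, (in, out) per port.
ports = {
    "Te0/0/0/0": (3.6,    0.8813),
    "Te0/0/0/1": (5.0,    0.8625),
    "Te0/0/0/2": (0.245,  3.1),
    "Te0/0/0/3": (0.2785, 3.6),
    "Te0/0/0/4": (2.2,    0.6436),
    "Te0/0/0/5": (3.6,    1.6),
    "Te0/0/0/6": (1.1,    3.9),
    "Te0/0/0/7": (3.5,    4.8),
}

MSC_CAPACITY_GBPS = 40.0  # assumed per-direction MSC capacity

ingress = sum(i for i, _ in ports.values())
egress = sum(o for _, o in ports.values())
print(f"ingress {ingress:.1f}G ({ingress / MSC_CAPACITY_GBPS:.0%} of MSC)")
print(f"egress  {egress:.1f}G ({egress / MSC_CAPACITY_GBPS:.0%} of MSC)")
```

This reproduces the ~19.5G in / ~19.4G out totals in the post, i.e. roughly half the assumed MSC capacity per direction, which is why the 9.2% PPE ingress figure looks surprising: the PSE counter apparently measures forwarding-engine work, not link or fabric bandwidth.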
[c-nsp] CRS-1 MSC utilization
Hi,

Does anyone know if there is a command to view the utilization of an MSC in a CRS? We are using the 8 port 10GE PLIMs, which are 2:1 oversubscribed.

As we have primarily outbound traffic (hosting), we use half the ports for inbound (from servers/DC) and half for outbound (to internet) traffic. However, some ports also carry bi-directional traffic.

We currently copy/paste "monitor interface" output into Excel and add up the values (per location) every now and then. But it would be a lot better if we could check with a CLI command, or automatically monitor MSC utilization with an SNMP query.

Thanks in advance,
Bas
Re: [c-nsp] Data Center cooling
Hi,

On Thu, Jan 7, 2010 at 7:59 PM, o...@ovh.net wrote:
> In 2004 2007 we developed the EcoDatacenter. 12 months per year, we use
> only the water outside air for the cooling on our 70 000 dedicated
> servers that we host.

But aren't those airco compressors I see in this movie?
http://www.youtube.com/user/OvhComOnVousHeberge#p/u/6/xtmkS1-4WTY (at approx 2:03)

Bas
Re: [c-nsp] Upgrade to XR-IOS 3.8.1
On Mon, Nov 9, 2009 at 4:29 AM, Aaron dudep...@gmail.com wrote:
> Yeah. ISSU isn't where it should be. Some SMU's require a reload
> depending on what components are touched.

Out of the last 20 SMUs for 3.6.2, only 11 were non-traffic-impacting (for us).

http://marc.info/?l=cisco-nsp&m=125508819921150&w=2
Re: [c-nsp] CRS-1 etherchannel
On Fri, Oct 23, 2009 at 5:55 PM, Dmitry Kiselev dmi...@dmitry.net wrote:
> Could anybody answer me, is the etherchannel feature supported on CRS-1
> with 4-10GE modules (either ports on a single card or cross-card
> aggregation)? I plan to use 4 10G ports as a layer2 trunk and
> subinterfaces on it. Is it possible on CRS-1?

Like Grzegorz said, it is possible across the same PLIM, but also across multiple PLIMs. I suspect, however, that there is a limitation between interfaces with different service cards: interfaces with an FP40 will probably not bundle with MSC interfaces.

> P.S. Any fresh plans of an FCS date for 100G cards?

Cisco has gone the 120G way. There will be 120 Gbps FP and MSC service cards. They said they would release the following PLIMs in Q3/Q4 2010: 1x100GE, 12x10GE, and 20x10GE (oversubscribed).

Bas
Re: [c-nsp] In Service Software Upgrade
On Thu, Oct 8, 2009 at 3:31 PM, Manu Chao linux.ya...@gmail.com wrote:
> ISSU not possible on 7600 platform (possible on CRS-1, ASR 9000, Nexus 7000)

I would not say ISSU (where U stands for Upgrade) is possible on the CRS-1. Updates are sometimes possible without traffic interruption, but upgrades are not. (Where upgrades have anything to do with newer versions, and updates are patches.)

Of the last 20 SMUs provided by Cisco for 3.6.2, only 11 were without traffic interruption. Real fun doing reloads on CRS-1s; most take about 45 minutes from start to end. So imagine the maintenance window you have to plan for 8 SMUs that require a reload.

I regularly get flak from management about why our million-dollar routers require so many reboots, after I told them these shiny routers could do ISSU.

Bastiaan
[c-nsp] multipath BGP not balancing equally.
Hi,

I have an issue with unequal multipath BGP load balancing. It is a 6500 / SUP720-3BXL running 12.2.18SXF16. There are four eBGP sessions to a transit carrier's ASN, all with full tables. However, one of the four interfaces sends about 2 Gbps less than the other three.

RTR-HV7#sh int ten 2/2 | i output rate
  1 minute output rate 6357052000 bits/sec, 546295 packets/sec
RTR-HV7#sh int ten 3/1 | i output rate
  1 minute output rate 8509719000 bits/sec, 729490 packets/sec
RTR-HV7#sh int ten 3/3 | i output rate
  1 minute output rate 8721235000 bits/sec, 746980 packets/sec
RTR-HV7#sh int ten 4/4 | i output rate
  1 minute output rate 859240 bits/sec, 734864 packets/sec

All four sessions have the same settings (in the same peer-group). Through netflow I've tried to deduce whether there are specific ASNs not being sent via the nexthop that has less traffic, but that does not seem to be the case.

I've looked at "ip cef load-sharing algorithm universal", however that already seems to be the default algorithm in current IOS versions. With any prefix I test through "sh ip cef x.x.x.x detail", it seems all four paths are used.

Thanks in advance,
Bas
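[One possible explanation worth checking: flow-based ECMP spreads *flows* evenly, not *bytes*. The sketch below is illustrative only (a generic hash, not the CEF universal algorithm): it hashes random flows across 4 next hops with a heavy-tailed flow size distribution, and the per-path byte totals can still diverge even though every prefix shows all four paths in `sh ip cef`.]

```python
# Sketch (not the CEF algorithm): flow-based ECMP pins each flow to one
# of 4 next hops by hashing the address pair. With heavy-tailed flow
# sizes, per-path byte totals can differ even with an even flow spread.
import hashlib
import random

def path_for(flow, n_paths=4):
    """Deterministically map a (src, dst) flow to one of n_paths."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return digest[0] % n_paths

rng = random.Random(42)
totals = [0] * 4
for _ in range(10_000):
    flow = (rng.getrandbits(32), rng.getrandbits(32))
    size = int(rng.paretovariate(1.2))  # heavy-tailed flow sizes
    totals[path_for(flow)] += size

print(totals)  # per-path byte totals; flow counts are even, bytes may not be
```

If a few elephant flows dominate, the imbalance follows whichever path their hash lands on, which would look exactly like one interface persistently carrying ~2 Gbps less.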
[c-nsp] clear platform hardware capacity fabric counters?
Hello,

I haven't been able to find the command for clearing the "show platform hardware capacity fabric" / "forwarding" counters. Or isn't it possible, and should I reboot?

Kind regards,
Bas
Re: [c-nsp] clear platform hardware capacity fabric counters?
Hello Abidin,

On Fri, Jul 24, 2009 at 10:06 PM, Abidin Kahraman abidin.kahra...@gmail.com wrote:
> Hello Bas,
> Have you tried "clear fab peak"?

Thank you, that did the trick. I don't know how I missed that.

Do you also know how to clear the peak-pps counters in:
show platform hardware capacity forwarding

Thanks,
Bas
[c-nsp] performance problems / overruns on a 6500/sup720/dfc's
Hello All,

I hope you guys can help me with the following issue.

It started a couple of weeks ago when one customer reported degraded performance. The customer has ~30 servers on a WS-C3750E-48TD, which in turn has a single 10GE link to the 6500 in question. The 10GE link on the 6500 has a service policy configured to limit IP traffic to 8 Gbps (via an aggregate policer). Before the problems started, the customer was able to push 8 Gbps on the link for 16 hours a day; the remaining time the customer has fewer visitors to their service.

The issue arises every day at the time the router starts to forward 7.5 - 8 Mpps (approx 50 Gbps). When that moment comes, the interface facing the customer drops down to 5 - 6 Gbps. In the interface counters we can see the number of overruns increasing very fast. This continues until about 23:00, when the total traffic forwarded drops below 8 Mpps.

mod1: WS-X6708-10GE
mod2: WS-X6748-SFP
mod3: WS-X6704-10GE
mod4: WS-X6748-GE-TX
mod5: WS-X6748-GE-TX
mod6: WS-SUP720-3BXL

Initially running 12.2(18)SXF15a, currently running 12.2(33)SXI1. The customer was connected to Te1/7 and is currently on Te3/2.

Things we have investigated or changed (none have resolved the issue):
- We saw through "sh plat hard cap fab" that some of the fabric channels were (nearly) congested. We swapped around a couple of TenG interfaces between channels and slots 1 and 3.
- We suspected a possible relation to Cisco bugs CSCeh08451 or CSCsl70634. Even though both are resolved in SXF12, we upgraded to SXI1.
- Possibly hitting some bottleneck in the PFC/fabric, so we upgraded modules 2 and 3 (the most heavily utilized modules) with DFC-3BXLs.
- Tried different hold-queues in and out.
- Several fabric buffer-reserve settings.
- Disabling all netflow.
- Removing the policy-map(s).
- Enabling/disabling send/receive flowcontrol on several ports and also on the customer's 3750.

More customers are noticing degraded performance: lower speeds and 5 - 20% packet loss. The router has enough memory available, and the SP and RP CPUs are always below 30%.

Below is "sh int" output for the first customer that reported issues:

TenGigabitEthernet3/2 is up, line protocol is up (connected)
  Hardware is C6k 1Mb 802.3, address is 000f.35bb.0b40 (bia 000f.35bb.0b40)
  Description: XXX001 - MO08
  Internet address is xx.xx.240.126/26
  MTU 1500 bytes, BW 1000 Kbit, DLY 10 usec,
     reliability 255/255, txload 6/255, rxload 202/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 10Gb/s, media type is 10Gbase-LR
  input flow-control is off, output flow-control is off
  ARP type: ARPA, ARP Timeout 00:30:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:56:37
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 7935497000 bits/sec, 665152 packets/sec
  30 second output rate 239985000 bits/sec, 438880 packets/sec
  L2 Switched: ucast: 32 pkt, 2048 bytes - mcast: 1052 pkt, 318283 bytes
  L3 in Switched: ucast: 2016175646 pkt, 2998867098833 bytes - mcast: 0 pkt, 0 bytes mcast
  L3 out Switched: ucast: 1483531972 pkt, 115723597149 bytes mcast: 0 pkt, 0 bytes
     2228491744 packets input, 3314752535506 bytes, 0 no buffer
     Received 3005 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 206532318 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     1482844739 packets output, 115625721402 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

As you can see, no problems reported other than overruns (approx 10%).

"sh plat hard cap for" output:

Forwarding engine load:
Module  pps      peak-pps  peak-time
1       2852591  4416215   18:21:12 CEST Thu Jul 23 2009
2       1422180  1645505   22:42:03 CEST Thu Jul 23 2009
3        903195  1018577   11:28:05 CEST Wed Jul 22 2009
6       1756281  8244268   01:36:29 CEST Sat Jul 18 2009

We're pretty much stuck. Thanks for reading if you've gotten this far. Any help would be very much appreciated.

Kind regards,
Bas

p.s. the box peaks at approx 35 Mbps of IPv6 traffic; that shouldn't affect IPv4 forwarding performance, right?
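[The "approx 10%" overrun figure follows directly from the counters in the `sh int` output; a one-liner for anyone wanting to track it over time:]

```python
# Sketch: overrun share computed from the TenGigabitEthernet3/2 counters
# quoted above (packets input vs. overrun since the last counter clear).
packets_input = 2_228_491_744
overruns = 206_532_318

share = overruns / packets_input
print(f"overrun share: {share:.1%}")
```

Polling these two counters periodically (e.g. via SNMP ifInDiscards-style deltas) would show whether the overrun share tracks the 7.5 - 8 Mpps aggregate forwarding threshold described above.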
[c-nsp] Cisco ACE and ASN
Hello list,

Does anyone have experience with high-volume web content that is load balanced through ASN? From the Cisco website:

  Asymmetric server normalization (ASN): Cisco ACE can load balance an
  initial request from the client to a real server; however, the server
  directly responds to the client, bypassing Cisco ACE.

I am curious how ACE performance scales. There are no fancy requirements, just weighted round robin and keepalive tests.

We are looking to host a website that generates ~80 Gbps of outbound traffic. Incoming traffic is approx 5 Gbps and 8,000,000 pps; regrettably I do not have any numbers concerning hits/sec etc.

Do you think an 8 Gbps license on an ACE module will work for this setup?

Thanks in advance,
Bas
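[The sizing logic implied by the question can be written out. This is a sketch of the reasoning, not a statement about ACE licensing terms: under ASN only the client-to-server direction transits the ACE, so the 80 Gbps of responses shouldn't count against the throughput license. Figures are taken from the post.]

```python
# Sketch: under asymmetric server normalization (ASN), server responses
# bypass the load balancer, so only inbound traffic loads the ACE.
inbound_gbps = 5.0    # client -> server, passes through the ACE
outbound_gbps = 80.0  # server -> client, bypasses the ACE under ASN
license_gbps = 8.0    # candidate ACE throughput license

ace_load_gbps = inbound_gbps  # only this direction hits the license limit
headroom_gbps = license_gbps - ace_load_gbps
print(f"ACE load {ace_load_gbps}G of {license_gbps}G "
      f"(headroom {headroom_gbps}G); {outbound_gbps}G bypasses the ACE")
```

Whether the module can also sustain the 8 Mpps inbound packet rate is a separate question that the Gbps license number alone does not answer.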
Re: [c-nsp] SRC2?
Chris Griffin wrote:
> Anyone know when 12.2(33)SRC2 is supposed to be released, specifically
> for the 7600. I had heard by the end of July, but so far no release.
> Thanks

We have a very annoying bug in the previous version and are waiting for this release for our 7206VXR. According to someone at Cisco, who wasn't supposed to say this, it would be released in about 4 to 5 weeks. Sadly, this was promised to us about 8 weeks ago :( The latest statement we got from them was end of September.

Cheers,
Bas Roos
[c-nsp] Problem with IPv6 ND over ATM
Dear All,

I am trying to implement IPv6 in our network, which was going pretty well until I ran into a little problem when trying to get IPv6 to work natively for our DSL customers. We have a 7206VXR NPE-G2 on our network side, and for testing purposes an 877-W on the customer side. They have been configured as follows (note that these routers do not exchange IPv6 routing information with the rest of the network or the internet yet, so I just used two IPv6 addresses I found in an example configuration):

7206:

interface ATM2/0.756 point-to-point
 description Test line
 ! Removed IP address because I don't believe it is needed for solving this problem
 ip address x.x.x.x x.x.x.x
 ip access-group ACL-VPI2-VCI756 in
 atm route-bridged ip
 atm route-bridged ipv6
 no atm enable-ilmi-trap
 ipv6 address 3FFE:::1003::72/64
 snmp trap link-status
 pvc 2/756
  vbr-nrt 8000 8000
  encapsulation aal5snap

877:

interface ATM0.35 point-to-point
 description WAN
 ! Removed IP address because I don't believe it is needed for solving this problem
 ip address x.x.x.x x.x.x.x
 ip nat outside
 ip virtual-reassembly
 atm route-bridged ip
 atm route-bridged ipv6
 pvc 0/35
  encapsulation aal5snap
 !
 ipv6 address 3FFE:::1003::45/64

The problem is: we can't ping from each router to the other using its IPv6 address (IPv4 works fine). When doing a little debugging, I noticed the following logs on the 877 using 'debug ipv6 nd':

*Apr 6 16:09:30.071: ICMPv6-ND: DELETE - INCMP: 3FFE:::1003::72
*Apr 6 16:09:30.071: ICMPv6-ND: Sending NS for 3FFE:::1003::72 on ATM0.35
*Apr 6 16:09:30.083: ICMPv6-ND: Received NA for 3FFE:::1003::72 on ATM0.35 from 3FFE:::1003::72
*Apr 6 16:09:30.083: ICMPv6-ND: NA has no link-layer option
*Apr 6 16:09:31.071: ICMPv6-ND: Sending NS for 3FFE:::1003::72 on ATM0.35
*Apr 6 16:09:31.083: ICMPv6-ND: Received NA for 3FFE:::1003::72 on ATM0.35 from 3FFE:::1003::72
*Apr 6 16:09:31.083: ICMPv6-ND: NA has no link-layer option
*Apr 6 16:09:32.071: ICMPv6-ND: Sending NS for 3FFE:::1003::72 on ATM0.35
*Apr 6 16:09:32.083: ICMPv6-ND: Received NA for 3FFE:::1003::72 on ATM0.35 from 3FFE:::1003::72
*Apr 6 16:09:32.083: ICMPv6-ND: NA has no link-layer option
*Apr 6 16:09:33.071: ICMPv6-ND: INCMP deleted: 3FFE:::1003::72
*Apr 6 16:09:33.071: ICMPv6-ND: INCMP - DELETE: 3FFE:::1003::72

(This sequence repeats a couple of times, depending on the number of pings I send.)

The problem seems to get fixed (I can ping) when I put a static ipv6 neighbor entry on the 877:

ipv6 neighbor 3FFE:::1003::72 atm0.35 0018.b987.4b38

I have done some searching on Google for the problem I noted in the log, 'NA has no link-layer option', which I think is the cause of this problem (I don't get this log line on the 7206), but I haven't found any answers that solve my problem. I have also been trying to find some example configurations for IPv6 using ATM interfaces, but besides most examples being about PPPoA, the examples that do fit my situation don't show any configuration settings I am missing or got wrong.

I hope there is someone here who can give me a push in the right direction (either with my problem, or with the best location to find a good solution). Please excuse me if I made a huge configuration error in my IPv6 configuration, or with my mail to this list, because both are quite new to me, still ;)

Thanks in advance!
Bas Roos
Re: [c-nsp] Top 10 Network Engineering Tools
Hi,

Hereby my list. Most have been mentioned before in this thread:
- wireshark / tcpdump
- teraterm / zterm / minicom
- ping / traceroute / mtr / lft
- dig
- ssh / telnet
- nmap
- netcat
- google / radb
- flow-tools
- nmis / cacti / nagios / rrdtool
- (public) looking glasses / route-servers
- custom Perl / bash scripts to do whatever
- fiber / copper cable tester
- label machine

Bastiaan
[c-nsp] Cisco 877W BVI nat problem
Hi guys,

I'm lost here; I can't seem to find where I'm going wrong.

atm0.1 is unnumbered to vlan1. vlan1 has the public IP address and is nat outside. bvi2 has the internal address, is the bridge for vlan2 and ssid wifi4, and is nat inside. Wifi clients on this router are getting their IP addresses, but can't reach the outside world.

Here is my config:

dot11 ssid wifi4
 vlan 2
 authentication open
 authentication key-management wpa
 guest-mode
 wpa-psk ascii 0
!
ip dhcp excluded-address 192.168.30.1
!
ip dhcp pool vlan2
 network 192.168.30.0 255.255.255.0
 dns-server 83.149.80.123 62.212.65.123
 default-router 192.168.30.1
!
bridge irb
!
interface ATM0
 no ip address
 no atm ilmi-keepalive
 dsl operating-mode auto
!
interface ATM0.1 point-to-point
 ip unnumbered Vlan1
 no snmp trap link-status
 pvc 0/35
  encapsulation aal5snap
!
interface FastEthernet0
 switchport access vlan 2
!
interface Dot11Radio0
 no ip address
 !
 encryption vlan 2 mode ciphers aes-ccm
 !
 ssid wifi4
 !
 speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0
 station-role root
!
interface Dot11Radio0.2
 encapsulation dot1Q 2
 bridge-group 2
 bridge-group 2 subscriber-loop-control
 bridge-group 2 spanning-disabled
 bridge-group 2 block-unknown-source
 no bridge-group 2 source-learning
 no bridge-group 2 unicast-flooding
!
interface Vlan1
 ip address 85.17.85.97 255.255.255.248
 ip nat outside
 ip virtual-reassembly
!
interface Vlan2
 no ip address
 bridge-group 2
 bridge-group 2 spanning-disabled
!
interface BVI2
 ip address 192.168.30.1 255.255.255.0
 ip nat inside
 ip virtual-reassembly
!
ip route 0.0.0.0 0.0.0.0 ATM0.1
!
ip nat inside source list 10 interface Vlan1 overload
!
access-list 10 permit 192.168.30.0 0.0.0.255
!
bridge 2 route ip

Any pointers in the right direction would help me greatly.

Thanks in advance,
Bas