Re: [c-nsp] BGP route filtering question about upstreams
On Tuesday, October 07, 2014 07:41:08 PM Andrew (Andy) Ashley wrote:

> Could be an option, but I'm guessing that AS100 will then only have a
> partial table from AS200?

Which works out fine, since you say AS100 prefers the full table from
AS300, and does not prefer AS200 for the same. I feel we're going round
in circles :-).

> That would be nice, but neither offers communities...

That sucks... not because the upstreams don't offer communities, but
because it limits AS100's options.

Mark.
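If neither upstream offers communities, about the only lever left on the
AS100 side is a plain as-path filter. A rough sketch of taking only a
partial (customer-cone) table from AS200 - the filter number and the
192.0.2.1 neighbor address are made up for illustration:

    ip as-path access-list 20 permit ^200$
    ip as-path access-list 20 permit ^200_[0-9]+$
    !
    router bgp 100
     neighbor 192.0.2.1 remote-as 200
     neighbor 192.0.2.1 filter-list 20 in

That keeps AS200's own routes plus those of its direct customers, and
drops everything AS200 is merely transiting.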
[c-nsp] Cat3750: MAC addresses of L3 interfaces change after reboot ?!
Hi all,

Catalyst 3560/3750 /G/E switches seem to implement L3 interfaces differently than other switches - they use a unique MAC address for every L3 interface. MAC addresses are assigned from the switch's MAC address pool in the order in which the L3 interfaces are created; after a reboot, however, they are reassigned in the order in which the interfaces appear in the config. As an example, if you add "no switchport" to interface Gig0/1 as the last change, it might get the ..07c7 MAC address from the pool, but after a reboot it gets ..07c1 and all other L3 interfaces get higher MAC addresses than before. This creates serious problems in environments where strict MAC address security is needed, since manual intervention is then required to restore network connectivity.

Is there any way to change this odd behavior on the Cat 3560/3750 by e.g.:
- making the switch use the same MAC address for all L3 interfaces
- making the switch reuse the same MAC addresses after reboot (MAC persistence)
- configuring the MAC address on an L3 interface manually?

Thanks, kind regards,
M.
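For the third option, the generic IOS approach would be the "mac-address"
interface command; whether the 3560/3750 actually accepts it on a routed
port is exactly the open question, so treat this as a sketch (the
locally-administered MAC and the IP address are made up):

    interface GigabitEthernet0/1
     no switchport
     mac-address 0200.0000.0001
     ip address 192.0.2.1 255.255.255.0

If the platform takes it, the address would then survive reboots, since
it no longer depends on the allocation order from the pool.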
[c-nsp] Cisco Security Advisory: Multiple Vulnerabilities in Cisco ASA Software
Multiple Vulnerabilities in Cisco ASA Software

Advisory ID: cisco-sa-20141008-asa
Revision 1.0
For Public Release 2014 October 8 16:00 UTC (GMT)

Summary
=======

Cisco Adaptive Security Appliance (ASA) Software is affected by the following vulnerabilities:

  Cisco ASA SQL*NET Inspection Engine Denial of Service Vulnerability
  Cisco ASA VPN Denial of Service Vulnerability
  Cisco ASA IKEv2 Denial of Service Vulnerability
  Cisco ASA High Performance Monitor Denial of Service Vulnerability
  Cisco ASA GPRS Tunneling Protocol Inspection Engine Denial of Service Vulnerability
  Cisco ASA SunRPC Inspection Engine Denial of Service Vulnerability
  Cisco ASA DNS Inspection Engine Denial of Service Vulnerability
  Cisco ASA VPN Failover Command Injection Vulnerability
  Cisco ASA VNMC Command Input Validation Vulnerability
  Cisco ASA Local Path Inclusion Vulnerability
  Cisco ASA Clientless SSL VPN Information Disclosure and Denial of Service Vulnerability
  Cisco ASA Clientless SSL VPN Portal Customization Integrity Vulnerability
  Cisco ASA Smart Call Home Digital Certificate Validation Vulnerability

These vulnerabilities are independent of one another; a release that is affected by one of the vulnerabilities may not be affected by the others.

Successful exploitation of the Cisco ASA SQL*NET Inspection Engine Denial of Service Vulnerability, Cisco ASA VPN Denial of Service Vulnerability, Cisco ASA IKEv2 Denial of Service Vulnerability, Cisco ASA High Performance Monitor Denial of Service Vulnerability, Cisco ASA GPRS Tunneling Protocol Inspection Engine Denial of Service Vulnerability, Cisco ASA SunRPC Inspection Engine Denial of Service Vulnerability, and Cisco ASA DNS Inspection Engine Denial of Service Vulnerability may result in a reload of an affected device, leading to a denial of service (DoS) condition.

Successful exploitation of the Cisco ASA VPN Failover Command Injection Vulnerability, Cisco ASA VNMC Command Input Validation Vulnerability, and Cisco ASA Local Path Inclusion Vulnerability may result in full compromise of the affected system.

Successful exploitation of the Cisco ASA Clientless SSL VPN Information Disclosure and Denial of Service Vulnerability may result in the disclosure of internal information or, in some cases, a reload of the affected system.

Successful exploitation of the Cisco ASA Clientless SSL VPN Portal Customization Integrity Vulnerability may result in a compromise of the Clientless SSL VPN portal, which may lead to several types of attacks, including but not limited to cross-site scripting (XSS), theft of credentials, or redirection of users to malicious web pages.

Successful exploitation of the Cisco ASA Smart Call Home Digital Certificate Validation Vulnerability may result in a digital certificate validation bypass, which could allow the attacker to bypass digital certificate authentication and gain access inside the network via remote access VPN, or management access to the affected system via the Cisco Adaptive Security Device Manager (ASDM).

Cisco has released free software updates that address these vulnerabilities. Workarounds that mitigate some of these vulnerabilities are available.
This advisory is available at the following link: http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20141008-asa
Re: [c-nsp] (no subject)
The way I've dealt with this in the past (a trick I learned from Barry) is to ask for full routes (and no default), filter out ALL prefixes except those associated with a few far-away root name servers, and then generate a default conditional on reachability of at least one name server. It gets a little ugly, because you need to ensure you only generate the default if the route to the name servers is through your upstream, which means that all BGP routers in your AS must not pass on those prefixes, plus you have to keep track, on an ongoing basis, of changes to the prefixes containing the root name servers or to their IP addresses.

Disclaimer: I have not tested this approach with IPv6, because I retired before I had any clients who cared :-], but I've used it with IPv4 since the days of 2501 routers.

Vince

On Thu, 2014-10-02 at 11:46 -0700, Paul Wozney wrote:

> Okay, so I've got two BGP routers here, accepting partial routes - one
> carrier to each router. Each carrier advertises a default route. I use
> an as-path filter to limit learned routes to those of the carrier +1 ASN:
>
>     ip as-path access-list 11 permit ^_[0-9]*$
>
> One carrier has now had two outages in the last year where they've lost
> their upstream. They continue to advertise a default route to us, so our
> network experiences failures until we kill the link.
>
> It strikes me that if we had FULL routes (and no default route accepted)
> we could react automatically to failures like this - we could share
> tables between the routers, and if one carrier lost half their routes
> we'd pick them up from the other router.
>
> Is this just how life with partial routes is? Or is there something else
> I can do?
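For the archives, a minimal sketch of what that trick can look like in
IOS. The prefixes, AS number, neighbor address, and names are
illustrative only - the real root-server prefixes change over time and
must be tracked, as noted above:

    ! Accept only a few far-away root name server prefixes from the
    ! upstream (example prefixes - verify the current ones before using).
    ip prefix-list ROOT-NS seq 5 permit 198.41.0.0/24
    ip prefix-list ROOT-NS seq 10 permit 199.7.83.0/24
    !
    router bgp 65000
     neighbor 192.0.2.1 prefix-list ROOT-NS in
    !
    ! Originate a default into the IGP only while one of those routes
    ! is actually present in the RIB.
    route-map DEFAULT-IF-ROOTS permit 10
     match ip address prefix-list ROOT-NS
    !
    router ospf 1
     default-information originate route-map DEFAULT-IF-ROOTS

The conditional "default-information originate route-map" is what ties
the generated default to actual reachability: if the upstream loses its
transit and the root-server routes disappear, the default is withdrawn
with them.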
[c-nsp] Exactly how bad is the 6704-10GE?
All,

(This is vaguely related to my question earlier in the week about ASR capacity.)

We use quite a few 6704-10GE blades on our network, and I'm seeing some random congestion-type issues. In some cases, I've made the problem go away by shuffling ports between blades to spread the load, but I'm left wondering exactly where the problems lie. From talking to people on IRC, etc., I'm told that the 6704 runs out of steam around 24-26Gbps of throughput when handling imix traffic. I'm also told that this is largely driven by pps, rather than bps.

Take, for example, a 6504 on our network. It has a Sup2T in slot 1, a 6704-10GE (CFC) in slot 2, a 6724-SFP (CFC) in slot 3, and a 6904-40G (DFC4) in slot 4.

- Te2/1, Te4/5, Te4/6 and Te4/8 form a 4*10G port-channel towards our core.
- Te2/3 and Te4/9 form a 2*10G port-channel towards an IXP.
- Te2/2 is a 10G link towards a transit provider.

The traffic profile on the 4*10G port-channel seems to max out at about 24Gbps. I don't see any obvious packet drops or latency increase, just that the traffic doesn't go any higher than that. I suspect I'm hitting a limit on the 6704 which is causing this, but I can't figure out what that limit is.

If I take a snapshot of the 3 active ports on the 6704 at peak time, I see:

    Te2/1: In = 2.7Gbps/580kpps, Out = 5.7Gbps/613kpps
    Te2/2: In = 7.0Gbps/865kpps, Out = 1.8Gbps/520kpps
    Te2/3: In = 7.3Gbps/789kpps, Out = 2.5Gbps/666kpps

Summing that all up, I've got ~27Gbps of traffic flowing through the card, and just over 4Mpps. I also see this:

    rtr#show fabric drop
    Polling interval for drop counters and timestamp is 1 in seconds
    Packets dropped by fabric for different queues:
    Counters last cleared time: 22:54 08 Oct 14

     slot  channel  Low-Q-drops            High-Q-drops
      1       0     0                      0
      1       1     0                      0
      2       0     35759 @00:57 09Oct14   0
      2       1     76766 @00:57 09Oct14   0
      3       0     0                      0
      4       0     169 @00:56 09Oct14     0
      4       1     0                      0

So I seem to be seeing fabric drops on the 6704 slot, on both channels (but more on channel 1, which has ports Te2/1 and Te2/2 on it). If I look at fabric utilisation, it doesn't say it's maxing out:

    rtr#show fabric utilization detail
    Fabric utilization:       Ingress                     Egress
     Module Chanl Speed  rate  peak                  rate  peak
      1      0     20G    0%    0%                    0%    0%
      1      1     20G    0%    3% @19:53 08Oct14     0%    3% @19:53 08Oct14
      2      0     20G   27%   50% @22:14 08Oct14     5%   13% @22:13 08Oct14
      2      1     20G   33%   47% @00:33 09Oct14    23%   33% @23:09 08Oct14
      3      0     20G    0%    0%                    0%    0%
      4      0     40G   11%   17% @22:30 08Oct14    26%   40% @00:02 09Oct14
      4      1     40G    0%    0%                    0%    0%

So, my questions:

1) For other people using the 6704-10GE blade, what sort of maximum throughput are you seeing? Have you managed to pinpoint what the limiting factor is?

2) What do the fabric drops really mean? My google-fu isn't helping a lot, and the command doesn't seem to be documented. Is there anything I can do to reduce the fabric drops? Why am I also seeing some on the 6904-40G slot, which should be a much more capable card?

Many thanks in advance,
Simon
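Not an answer, but for anyone else digging into the same thing, a few
other 6500 fabric counters that may be worth sampling alongside "show
fabric drop" - availability of these commands can vary by supervisor
and IOS version, so treat this as a suggestion rather than a recipe:

    rtr#show fabric errors
    rtr#show fabric status
    rtr#show platform hardware capacity fabric

Comparing these over time against the per-port pps snapshots above may
help separate a per-channel fabric limit from a per-port or per-blade
forwarding limit.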