[c-nsp] Right IOS for 7600
Hi All,

Asking you real IOS gurus out there about the production options for IOS upgrades. We're running a 7600 with Sup720/PFC3BXL, OSPF and BGP, and wish to add microflow policing and NetFlow. We're currently on 12.2(18)SXF7; I understand that there are several trains to choose from.

Best Regards
Mattias Gyllenvarg
Omnitron

___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] Router failure - config lost?
Hi *,

I've got something of a question that's not necessarily a clear technical or config problem ... rather just scoping as to whether other people have come across this too.

We have a customer with some 400+ locations. All of these are connected to the central office via an MPLS-based network, using ADSL lines. Every location has an identical 876-W-G-E-K9 router with (apart from DSL username and IP address) identical config. This network has now been in operation for something like 18 months, and is working nicely.

Now, on average 1-2 locations per month go down, losing DSL connectivity, and even a power-cycle and a DSL port reset by the DSL provider won't help, at which point we configure a replacement router and send it out. We usually get the defective router back for analysis, and apart from a handful of cases in which the routers were physically damaged (lightning, spikes on the power supply, etc.), most of the defective routers have simply lost their configuration file. On one occasion, the whole router flash was cleared, removing the IOS. On yet another occasion, I think we found the stock config file (the one with the large header, cisco login etc.) on the router, which I thought was really weird.

In all those cases, we have opted to re-use the router, if for nothing else than to see whether it was an actual hardware defect ... to date, no router has shown that behavior twice (we track the serial numbers).

As for the configs/routers themselves, the locations do not have any username/password to log in to the routers. External access shouldn't be possible, as the network itself has no direct Internet connectivity.

Has anybody else here ever experienced effects like this?

Tnx, -garry
Re: [c-nsp] Right IOS for 7600
On Mar 16, 2009, at 6:34 AM, Wyatt Mattias Gyllenvarg wrote:

> Asking you real IOS gurus out there for the production options on IOS upgrades. We're running a 7600 with Sup720/PFC3BXL, OSPF and BGP, and wish to add microflow policing and NetFlow. Currently on 12.2(18)SXF7. I understand that there are several trains to choose from.

I would consider going to SXF16 for now. It will provide you a large number of bugfixes. If possible you should look at SXI, but wait until SXI1.

- Jared
[c-nsp] BGP - peer down w/MD5 - suppress logging
Is there any way, without messing with the level of logging we have turned on today, to suppress these messages in particular:

Mar 16 09:43:24: %TCP-6-BADAUTH: No MD5 digest from 206.223.143.84(179) to 206.223.143.81(13412) (RST) tableid - 0
Mar 16 09:43:32: %TCP-6-BADAUTH: No MD5 digest from 206.223.143.84(179) to 206.223.143.81(13412) (RST) tableid - 0

This is a BGP peer that is not coming online at the moment - we don't want to shut down our actual session. Of course this only happens with MD5-enabled peers ;)

Thanks, Paul
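[Editorial note: one hedged option, assuming the image supports logging discriminators (they appeared around 12.4(11)T / 12.2SR-era code, so check the release notes for your train): a discriminator can drop just the BADAUTH mnemonic without touching the global logging level. A sketch only:

```
! sketch only - verify discriminator support on your release first
logging discriminator NOMD5 mnemonics drops BADAUTH
logging buffered discriminator NOMD5
! 192.0.2.10 is a placeholder syslog host, not from the original post
logging host 192.0.2.10 discriminator NOMD5
```

If the feature is absent, there is no supported way to filter a single mnemonic short of lowering the logging level.]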
Re: [c-nsp] Right IOS for 7600
Hi,

On Mon, Mar 16, 2009 at 11:34:44AM +0100, Wyatt Mattias Gyllenvarg wrote:

> Asking you real IOS gurus out there for the production options on IOS upgrades. [..] I understand that there are several trains to choose from.

Just about all the options seem to have been discussed in great detail on this list. So what are your questions that are *not* answered in the list archives?

gert

-- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025 g...@net.informatik.tu-muenchen.de
Re: [c-nsp] Sup720-3BXL Stable IOS
Hi Paul,

in case you don't need any fancy 7600-only or SR-only features or hardware support, you might want to go back to 12.2(18)SXF. We've been running SXF10 for quite some time now, including new 7600s, with BGP and IS-IS for v4 and v6 multi-topology, without the slightest problems up to now. But sooner or later we'll be hit by the AS32-capable update... we'll see if SXJ will run on the 7600, else we'll be forced onto SRE.

regards, Marcus

-----Original Message-----
From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Paul Stewart
Sent: Saturday, 14 March 2009 14:52
To: 'Cisco-nsp'
Subject: [c-nsp] Sup720-3BXL Stable IOS

> Hi there. We're seeing issues occasionally with BGP sessions on Sup720-3BXL getting stuck. When adding a new session in particular, it won't come up, and sometimes after removing it and adding it back into the config it works. Other strange unexplained stuff once in a while. Anyway, I talked to a few peers and they tell me "wow, you're on the bleeding edge, because we're running 12.2(33)SRD and 12.2(33)SRC releases". I am thinking of moving to the 12.2(33)SRA7 release as it's listed as LD vs. ED and is still very current - am I playing with fire? Any feedback? Box is running IPv4/IPv6, OSPF and BGP.
>
> Many thanks, Paul
[c-nsp] SXI leaks (was: Netflow on SUP720-3BXL)
> (SXI has slow memory leaks in BGP, at least for us. Cisco case has been opened, but hasn't proceeded anywhere yet.)

Interesting. Do you have any details you can share?
[c-nsp] DLSW on Catalyst 4506-E is it possible???
Hi Guys,

Is DLSw supported on the Catalyst 4500 platform? I've been searching the Cisco docs, but it seems that DLSw doesn't run on the Cat4500. Is there some way to run DLSw on a Cat4500? (On the 6500, an MSFC card would solve the problem.)

Here is my "sh version":

4500# sh version
Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-ENTSERVICESK9-M), Version 12.2(44)SG1, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2008 by Cisco Systems, Inc.
Compiled Wed 09-Jul-08 13:17 by prod_rel_team
Image text-base: 0x1000, data-base: 0x11D1CA4C

ROM: 12.2(31r)SGA1
Pod Revision 14, Force Revision 31, Tie Revision 32

4500 uptime is 2 days, 19 hours, 53 minutes
System returned to ROM by reload
System restarted at 14:17:22 Fri Mar 13 2009
System image file is "bootflash:cat4500-entservicesk9-mz.122-44.SG1.bin"

This product contains cryptographic features and is subject to United States and local country laws governing import, export, transfer and use. Delivery of Cisco cryptographic products does not imply third-party authority to import, export, distribute or use encryption. Importers, exporters, distributors and users are responsible for compliance with U.S. and local country laws. By using this product you agree to comply with applicable laws and regulations. If you are unable to comply with U.S. and local laws, return this product immediately.

A summary of U.S. laws governing Cisco cryptographic products may be found at: http://www.cisco.com/wwl/export/crypto/tool/stqrg.html
If you require further assistance please contact us by sending email to exp...@cisco.com.

cisco WS-C4506-E (MPC8540) processor (revision 13) with 524288K bytes of memory.
Processor board ID FOX1242H2PR
MPC8540 CPU at 800Mhz, Supervisor V-10GE
Last reset from Reload
20 Virtual Ethernet interfaces
70 Gigabit Ethernet interfaces
2 Ten Gigabit Ethernet interfaces
511K bytes of non-volatile configuration memory.
Configuration register is 0x2101

Rgds.
-- Omar E.P.T - Certified Networking Professionals make better Connections!
Re: [c-nsp] SXI leaks (was: Netflow on SUP720-3BXL)
Hi,

On Mon, Mar 16, 2009 at 03:15:01PM +0000, Phil Mayers wrote:

> > (SXI has slow memory leaks in BGP, at least for us. Cisco case has been opened, but hasn't proceeded anywhere yet.)
>
> Interesting. Do you have any details you can share?

This is on a peering point + upstream provider router, with full IPv4 and IPv6 BGP (unicast only). SXI non-modular, advanced IP services.

We lose about 2-4 MByte of free memory per day, which goes into "holding" for the "BGP Router" process. It seems to be related to churn - we have another router running SXI, and that one is used at the network edge with only about 500 BGP prefixes, and nearly no churn. That one has no (noticeable) memory leak.

TAC Case# is SR 610821739.

(... maybe we should really go for SXI modular... - just restart the BGP Router process every 2 months, and reclaim all this memory without a 10-minute reboot... and maybe even get a bugfixed BGP process without requiring a reboot.)

gert

-- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025 g...@net.informatik.tu-muenchen.de
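[Editorial note: a quick way to confirm the trend Gert describes is to diff the "Holding" column for the BGP Router process across daily "show processes memory" snapshots. A minimal sketch, assuming the usual column layout (PID TTY Allocated Freed Holding Getbufs Retbufs Process); the figures below are invented for illustration:

```python
def bgp_holding(show_proc_mem_output: str) -> int:
    """Return bytes in the 'Holding' column for the BGP Router process."""
    for line in show_proc_mem_output.splitlines():
        if "BGP Router" in line:
            # Columns: PID TTY Allocated Freed Holding Getbufs Retbufs Process
            return int(line.split()[4])
    raise ValueError("BGP Router process not found")

# Two snapshots taken ~24h apart (invented figures, shaped like IOS output)
day1 = " 243   0  904612344  12345678  151234567    0    0 BGP Router"
day2 = " 243   0  908612344  12345678  154234567    0    0 BGP Router"

growth = bgp_holding(day2) - bgp_holding(day1)
print(f"BGP Router holding grew by {growth} bytes in ~24h")
```

A steadily increasing delta, as opposed to a sawtooth, is what distinguishes a genuine leak from normal table churn.]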
Re: [c-nsp] Right IOS for 7600
Hi,

On Mon, Mar 16, 2009 at 08:18:42AM -0700, Derick Winkworth wrote:

> IOS changes. The list archives are not relevant.

In that case, the answer is "contact your Cisco account representative". All knowledge available on this list is experience from the past - and especially for the 6500/7600, the IOS discussion is repeated *more* often than Cisco is releasing new images. So the archives have all the answers.

gert

-- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025 g...@net.informatik.tu-muenchen.de
Re: [c-nsp] Sup720-3BXL Stable IOS
On Monday 16 March 2009 04:42:00 pm Marcus.Gerdon wrote:

> But sooner or later we'll be hit by the AS32-capable update... we'll see if SXJ will run on 7600, else it will be (forced into SRE).

I foresee quite a number of upgrades at the (peering) edge come the end of this year, along with the ensuing moans and groans, as we make our networks 32-bit-ASN capable. Trying to figure out how to make this pretty, particularly given that this very feature may be what stands between you and your favorite, stable code base...

Mark.
Re: [c-nsp] SXI leaks (was: Netflow on SUP720-3BXL)
Hi,

On Mon, Mar 16, 2009 at 11:09:48AM -0500, Murphy, William wrote:

> Thanks for the heads-up... Are any of the maintenance releases of SXH also affected by this memory leak?

We've not seen this in SXH3a (and you don't want to use SXH3 due to BGP "ghost" bugs). I have not tested SXH4.

gert

-- USENET is *not* the non-clickable part of WWW! //www.muc.de/~gert/ Gert Doering - Munich, Germany g...@greenie.muc.de fax: +49-89-35655025 g...@net.informatik.tu-muenchen.de
[c-nsp] Fast IGP on 6500 gigE
All,

Given a mix of 6748-SFP, 6704 and 6716 linecards, with SXI software, and OSPF over SVIs, what are people successfully using to speed up link-loss detection and subsequent IGP convergence? Our config broadly looks like:

int Vlan38xx
 description p2p to another router
 ip address 192.168.0.2 255.255.255.254
 ip ospf network point-to-point

int Te1/1
 switchport
 switchport mode trunk
 switchport trunk native vlan 38xx

router ospf 1
 ispf
 nsf
 network 192.168.0.0 0.0.0.255

...and then the various LDP and BGP configs on top. I'm assuming I want some combination of:

1. debounce / carrier-delay (what's the difference?) on the gigE
2. IP event dampening on the SVI
3. faster timers on the SPF process; possibly, as a conservative start:

timers throttle spf 10 100 5000
timers throttle lsa all 10 100 5000
timers lsa arrival 80

The idea is that most routers are dual-attached, so I just want the underlying IGP to converge quickly. I'll tackle LDP and BGP later... I'm not able to use BFD (since it doesn't work on SVIs under SXI) and I'm only worried about physical link-down - we don't have any weird layer 2 between routers except in a few out-of-the-way places, and they can just suffer.

I realise some of these answers are "it depends on the size of your network"; there are ~25 routers participating in the OSPF, all reasonably recent and modern, it's a single area 0 design, and it has ~58 p2p loopbacks (via router LSAs) and another 18 E2 routes. It seems to take ~6 msec for an OSPF adjacency to form between two routers, almost all of which is in INIT-2WAY, so I'm guessing SPF is going to be pretty quick.

Suggestions welcome, although "ask Cisco to tell you" is less helpful; I'd like to have some independent understanding of how we arrived at the numbers, and be able to repeat the process in future ;o)
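[Editorial note: a concrete sketch of items 1 and 2 from the list above; the interface names match the posted config, but the values are illustrative assumptions, not recommendations:

```
! illustrative only - tune to your own topology and linecards
interface Te1/1
 carrier-delay msec 0      ! signal link-down immediately; "debounce" is the
                           ! firmware-level filter, carrier-delay the IOS one
interface Vlan3800
 dampening                 ! IP event dampening with default penalty/half-life
```

Dampening protects the IGP from a flapping member port once the SVI is tied to a single physical port, as discussed in the replies.]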
Re: [c-nsp] Fast IGP on 6500 gigE
Hi,

you've written most routers are dual-attached, so the concern is mostly failure detection and not re-establishment of a neighbor, I think. If you go with debounce or carrier-delay you'll raise the convergence time, as a link failure will be ignored for a short time before processes are notified. OSPF should react immediately to a link-down event, so I'd try to speed it up this way: if you use 2 separate SVIs for the 2 connections, and each VLAN has only 1 port it is allowed in (either a single access port or exactly 1 trunk port), the SVI should go down along with that single port.

Playing around with the timers I keep as a last resort - there's always the risk of seriously de-stabilizing the network (I've seen people trying to get the last second out of a protocol, resulting in occasional burn-downs, far too often).

regards, Marcus

-----Original Message-----
From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Phil Mayers
Sent: Monday, 16 March 2009 17:45
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Fast IGP on 6500 gigE

> Given a mix of 6748-SFP, 6704 and 6716 linecards, with SXI software, and OSPF over SVIs, what are people successfully using to speed up link loss and subsequent IGP convergence? [..]
Re: [c-nsp] Fast IGP on 6500 gigE
Marcus.Gerdon wrote:

> you've written most routers are dual-attached, so the concern mostly is failure detection and not re-establishment of a neighbor I think.

Correct.

> If you go into debounce or carrier-delay you'll raise the convergence time as a link failure will be ignored for a short time before processes are notified.

Further reading indicates that carrier-delay/debounce are by default as low as you can safely get them on a 6500, so I think I can disregard these.

> OSPF should immediately react on a link-down event, so I'd try to speed it up this way. If you use 2 separate SVIs for the 2 connections

We're already doing this.

> and each VLAN has only 1 port it is allowed in (either a single access port or exactly 1 trunk port) the SVI should go down along with that single port.

It does. Interestingly, info I've read indicates that routed interfaces signal upper-layer protocols much faster than SVI interfaces, which is something I'll have to investigate.

> Playing around the timers I keep for last resort - as there's always the risk to de-stabilize the network seriously (I've seen people trying to get the last second out of a protocol resulting in occasional burn-downs far too often).

Hmm. Further digging turned up some pretty concrete recommendations from Cisco in presentations and such, suggesting:

timers throttle spf 10 100 5000
timers throttle lsa all 10 100 5000
timers lsa arrival 80

e.g. http://www.ciscoexpo.sk/slides/41-vsettey_fast_convergence.pdf

The default SPF initial delay is 5 *whopping* seconds, which means that no matter how fast your link detection and LSA propagation is, it'll be at least 5 seconds before you even *start* trying to converge. Having read the docs, I have a hard time seeing how changing these can burn down the network - the SPF and LSA timers have exponential backoff. Would you care to elaborate on the failure modes you have seen?
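[Editorial note: the backoff behaviour being discussed can be sanity-checked numerically. With "timers throttle spf 10 100 5000", the first SPF after a trigger waits 10 ms, and under continuous churn each subsequent hold interval doubles from 100 ms up to the 5000 ms cap. A simplified model (real IOS also resets the backoff once the network has been quiet long enough):

```python
def spf_delays(initial_ms: int, hold_ms: int, max_ms: int, runs: int):
    """Successive SPF scheduling delays under continuous churn,
    per the exponential-backoff model of 'timers throttle spf'."""
    delays = [initial_ms]          # first run uses the initial delay
    wait = hold_ms
    for _ in range(runs - 1):
        delays.append(min(wait, max_ms))
        wait *= 2                  # each hold interval doubles until capped
    return delays

# 'timers throttle spf 10 100 5000' under sustained churn:
print(spf_delays(10, 100, 5000, 8))  # [10, 100, 200, 400, 800, 1600, 3200, 5000]
```

This illustrates why the timers are considered self-protecting: a single failure converges in tens of milliseconds, while sustained churn quickly backs off to the 5-second cap.]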
[c-nsp] Sup720-3BXL and diag issues
I have a 6500 with a Sup720-3BXL. Every time I put a WS-X6408A-GBIC card in the chassis, diag fails the card. I haven't rebooted the box yet, as it carries a significant amount of traffic. Has anyone else seen this type of problem? Diag on the most recent card shows:

7) TestUnusedPortLoopback:
   Port  1 2 3 4 5 6 7 8
         U U U F U U U F

The cards work in other chassis.
[c-nsp] SXI high cpu usage and rp inband SPAN feature not available
We installed SXI Advanced Enterprise on one of our Catalyst 6500s with a Sup720-3BXL, and the box is having persistent high CPU, over 90%/80%, where we normally have CPU utilization around 30%.

I played with RP SPAN on our SXF11 before; it's good to see what is punted to the CPU, according to the following doc: "SPAN RP-Inband and SP-Inband - A SPAN for the RP or SP port in Cisco IOS Software is available in Cisco IOS Software Release 12.1(19)E and later." http://www.cisco.com/en/US/products/hw/switches/ps708/products_tech_note09186a00804916e0.shtml

But the feature is not in the current SXI, even though the configuration guide documents it: http://cisco.com/en/US/docs/switches/lan/catalyst6500/ios/12.2SX/configuration/guide/span.html#wp1089465

test(config)#monitor session 1 ?
  destination  SPAN destination interface or VLAN
  filter       SPAN filter VLAN
  source       SPAN source interface, VLAN

test(config)#monitor session 1 source ?
  interface                   SPAN source interface
  intrusion-detection-module  SPAN source intrusion-detection-module
  remote                      SPAN source Remote vlan
  vlan                        SPAN source VLAN

test#remote login switch
Trying Switch ...
Entering CONSOLE for Switch
Type "^C^C^C" to end this session

test-sp#monitor ?
  elog         Event-logging control commands
  event-trace  Control event tracing

test-sp#test monitor ?
  crash  test crash

Schilling
Re: [c-nsp] SXI high cpu usage and rp inband SPAN feature not available
On Mon, Mar 16, 2009 at 04:42:38PM -0400, schilling wrote:

> We installed SXI Advanced Enterprise in one of our catalyst 6500 with sup720-3BXL, and the box is having persistent high CPU over 90%/80% where we have normal cpu utilization around 30%. I played with RP [..]

sh proc cpu sort 5s

Start by seeing if the CPU use is in IP Input or not. I've been seeing SRC3 boxes get into weird states where they run 80-90% in "BGP Router" until they're rebooted.

-- Richard A Steenbergen r...@e-gerbil.net http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
Re: [c-nsp] Sup720-3BXL and diag issues
Hi,

> I have a 6500 with a sup720-3BXL. Every time I put a WS-X6408A-GBIC card in the chassis, diag fails the card. The cards work in other chassis.

Same slot, or a different slot?

alan
Re: [c-nsp] SXI high cpu usage and rp inband SPAN feature not available
On Mon, Mar 16, 2009 at 08:42:38PM +0000, schilling wrote:

> We installed SXI Advanced Enterprise in one of our catalyst 6500 with sup720-3BXL, and the box is having persistent high CPU over 90%/80% where we have normal cpu utilization around 30%. I played with RP SPAN on our SXF11 before, it's good to see what is punted to CPU. [..] But the feature is not in the current SXI even the configuration guide has that feature documented. [..]

On SXI you configure the SP/RP CPU SPAN via the normal SPAN CLI; no more crazy "remote" commands:

monitor session 2 type erspan-source
 source cpu rp
Re: [c-nsp] Sup720-3BXL and diag issues
On Mon, Mar 16, 2009 at 07:40:30PM +0000, Jimmy Changa wrote:

> I have a 6500 with a sup720-3BXL. Every time I put a WS-X6408A-GBIC card in the chassis, diag fails the card. I haven't rebooted the box yet, as this carries a significant amount of traffic. Has anyone else seen this type of problem? Diag on the most recent card shows:
>
> 7) TestUnusedPortLoopback:
>    Port  1 2 3 4 5 6 7 8
>          U U U F U U U F

Some of the diags are heavily dependent on IOS config, though usually only the disruptive ones. I don't have any 6408s, but what does:

sh diagnostic content module X

say when the card is inserted? It should list some details about test #7, i.e. whether it's a bootup/health test, which I'd expect to pass in all circumstances.

A few weeks ago I'd have said you may have a bad slot, but recently we had a slot go funny and no amount of fiddling would clear it *except* a reload of the box, so it's definitely worth trying.

> The cards work in other chassis.

Have you tried other cards in that slot?
[c-nsp] Sub-int based EoMPLS on a 6700 series LC in a 7600
I have a TAC case open right now regarding a new EoMPLS VC I'm trying to turn up. This is the first one I've tried to terminate on a sub-interface on one of my 7600s. The LC is a 6748 w/ DFC; the sup is a 720-3BXL running SRB1. The other end is an ME3750 running 12.2(50)SE.

The ME's end is a .1Q trunk facing the CE. The native VLAN is for Internet, back to an SVI. The other VLAN allowed on the trunk (tagged) has a corresponding SVI with the xconnect to the loopback of that particular 7613. The 7613 has a .1Q trunk facing the CE as well; the port is configured with a sub-interface, though, and the xconnect sits on the sub-interface with .1Q encapsulation (tagged). The VLAN IDs are the same on both ends. The MTU is the same on both ends and is the default 1500; I'm thinking that I need to raise this, for one thing, to support the .1Q trunk over EoMPLS. Or does MPLS auto-adjust for the higher MTU?

The VC is up end to end, and I can do an MPLS ping from end to end. When I try to ping from the CE on the ME3750's end to the 2811 (with Fa0/0 configured with sub-interfaces and a .1Q config matching the ME), I get nothing back and no ARP entry on the 3560E. I do, however, see counters on the VC increment end to end. When I try to ping from the CE on the 7613's end (a 3560E with a trunk configured for the one EoMPLS VLAN and its L3 interface on an SVI) to the 2811, I get nothing back, but the 2811 gets a matching ARP entry for the 3560E. The counters on the VC do not increment either, like they should (ping with a timeout of 0, repeating 10k times). That's the exact opposite of what I would expect.

One question that was raised is whether the core-facing interfaces have to be fancy WAN interfaces. My understanding is that they do not, because I don't have to double-tag anything. The core-facing interface is on the same 6748. For grins I moved the CE-facing interface to a 6724 w/ DFC in the same chassis. No change: the VC is up, but I only get an ARP on the 2811 and nothing back.

From the perspective of the CEs, traffic only flows from the 3560E to the 2811. From the perspective of the VC's counters, it's just the opposite. The 7613 connects to an ME6524 (MTU 9000 on an L3 sub-interface on both ends) and that ME6524 connects to the ME3750 (also an L3 interface with an MTU of 9000). MPLS is enabled end to end, and I get the targeted LDP session on both ends, of course. IS-IS L2 is my IGP.

Do I have to have WAN interfaces for the core-facing links? I'm thinking no, but I can't say for certain. Neither could my TAC engineer, but they are double-checking. Anyone know?

Thanks, Justin
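[Editorial note: on the MTU question in the post - MPLS does not auto-adjust; the core links need headroom for the transported customer Ethernet header, the customer's .1Q tag, the MPLS labels, and (optionally) the control word. A back-of-the-envelope sketch using commonly cited overhead figures (verify against your platform documentation):

```python
# Rough core-facing MTU needed to carry a tagged customer frame over EoMPLS.
# Overhead figures are the commonly cited ones, not platform-verified.
CUSTOMER_IP_MTU = 1500   # payload the CE expects to send
ETH_HEADER      = 14     # transported customer Ethernet header
DOT1Q_TAG       = 4      # the tagged VLAN carried inside the pseudowire
MPLS_LABELS     = 2 * 4  # transport (tunnel) label + VC label
CONTROL_WORD    = 4      # optional EoMPLS control word

required_core_mtu = (CUSTOMER_IP_MTU + ETH_HEADER + DOT1Q_TAG
                     + MPLS_LABELS + CONTROL_WORD)
print(required_core_mtu)
```

This comes to well under the 9000-byte MTU already configured on the core links described in the post, so the 1500-byte attachment-circuit MTU, not the core, is the value to scrutinize.]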