Re: [c-nsp] IPv6 ND cache via SNMP
On 10/19/2010 01:03 AM, Michael Sinatra wrote:
> Is anyone out there polling the IPv6 neighbor discovery cache via SNMP?

Previously, yes. I get them via expect/CLI now, because the OID sorting required for an snmpwalk of that table on 6500s is prohibitively expensive when the table gets very large (at least it is for the IPv4 ipNetToMedia table; I assume the same holds for IPv6, and since the expect script already runs for v4...).

> I am mainly interested in getting the cache from 6500s running SXI4a on the VS-720-10GE-3C. In earlier IOS versions (on different platforms, I believe), this was done using the interim CISCO-IETF-IP-MIB (specifically cInetNetToMediaTable), but it seems as though this should all have been merged into the new RFC 4293-compliant IP-MIB. However, with ip.ipNetToPhysicalTable, I get 'no such object'.

Not yet, I think.

> ipv6NetToMediaTable (part of the IPV6-MIB) works great on JunOS, but not on Cisco (also 'no such object'). It's not clear from the MIB locator whether this is even supported in SXI4a - looks like not. Are we really still that far from IPv4/IPv6 feature parity?

Shrug. I wouldn't read too much into it. Our code tries:

CISCO-IETF-IP-MIB::cInetNetToMediaEntry  1.3.6.1.4.1.9.10.86.1.1.3.1

...this still seems to work on our SXI4a test box (I just tested it); remember you won't see anything if the neighbour cache is empty, as it often is on quiet test boxes (I find).

> Currently what I am doing is scraping "show ipv6 neighbors" via RANCID and shoving it into a flat file for processing and insertion into a SQL DB. But... yuck! This would be a lot cleaner with SNMP - and far fewer moving parts. One Perl script could easily poll and push into SQL all at once.

Well, as I say above, that approach has its advantages.

___
cisco-nsp mailing list
cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
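For what it's worth, the "one script polls and pushes into SQL" step is not much code. Below is a sketch (Python rather than Perl) of parsing `snmpwalk -On` output of that table into tuples ready for SQL insertion. The column sub-OID, the index layout (ifIndex . addrType . addrLen . address-octets, inferred from the MIB's InetAddress index structure), and the sample line are assumptions, not verified against a live 6500.

```python
# Sketch: turn "snmpwalk -On" output of the cInetNetToMedia physical-address
# column into (ifindex, address, mac) tuples for SQL insertion.
# ASSUMPTIONS: the column sub-OID (".3." under the entry) and the index
# layout ifIndex . addrType . addrLen . addr-octets are inferred, not
# verified against a real box.
import ipaddress
import re

BASE = ".1.3.6.1.4.1.9.10.86.1.1.3.1.3."   # assumed phys-address column

def parse_row(line):
    """Parse one snmpwalk line; return (ifindex, addr, mac) or None."""
    m = re.match(re.escape(BASE) + r"([\d.]+) = Hex-STRING: ([0-9A-F ]+)",
                 line)
    if not m:
        return None
    sub_ids = [int(x) for x in m.group(1).split(".")]
    ifindex, addr_type, addr_len = sub_ids[0], sub_ids[1], sub_ids[2]
    addr_bytes = bytes(sub_ids[3:3 + addr_len])
    if addr_type == 2:                      # InetAddressType ipv6
        addr = str(ipaddress.IPv6Address(addr_bytes))
    else:                                   # ipv4
        addr = str(ipaddress.IPv4Address(addr_bytes))
    mac = ":".join(m.group(2).split()).lower()
    return ifindex, addr, mac

# Invented sample row: ifIndex 5, ipv6, 16 address octets (2001:db8::1).
sample = (BASE + "5.2.16.32.1.13.184." + ".".join(["0"] * 11) + ".1"
          + " = Hex-STRING: 00 1A 2B 3C 4D 5E")
print(parse_row(sample))   # (5, '2001:db8::1', '00:1a:2b:3c:4d:5e')
```

Each tuple then maps directly onto one INSERT statement; the only moving parts left are snmpwalk and the DB driver.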
Re: [c-nsp] Cisco Catalyst 3750 IPv6 BGP Support
On 10/19/2010 01:57 AM, Terry Rupeni (USP) wrote:
> Hi, we had a 3745 running our IPv6 BGP, but it has finally given up on us. We have a spare Catalyst 3750G. I had a look at this site: http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-roadmap.html and it states the 3750 doesn't support it. Just want to reconfirm with the list.

We have 3750Gs running IPv6 BGP. This was covered in the archives a while back (I'm a bit short on time now or I'd expand on it). You need later software (we're on 12.2(52)SE).
Re: [c-nsp] L2 Rings
Look into Cisco REP.

--
From: Mohammad Khalil eng_m...@hotmail.com
Sent: Monday, October 18, 2010 5:06 PM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] L2 Rings

> Hi all, which is better: building an L2 ring using STP or MST, or building the network using VPLS?
Re: [c-nsp] Cisco Catalyst 3750 IPv6 BGP Support
Thanks for that, I will look into that IOS version.

Terry

On 10/19/2010 6:53 PM, Phil Mayers wrote:
> We have 3750Gs running IPv6 BGP. This was covered in the archives a while back (I'm a bit short on time now or I'd expand on it). You need later software (we're on 12.2(52)SE).

--
Terry Rupeni, Network Analyst
Systems Networks Group, Information Technology Services
University of the South Pacific, Suva, Fiji
Ph# (+679) 323 2113   Fax# (+679) 323 1533
Re: [c-nsp] Cisco Catalyst 3750 IPv6 BGP Support
Re Phil, Terry,

p.may...@imperial.ac.uk (Phil Mayers) wrote:
> We have 3750Gs running IPv6 BGP. This was covered in the archives a while back (I'm a bit short on time now or I'd expand on it). You need later software (we're on 12.2(52)SE)

You will need Advanced IP Services, and BGP/v6 started showing up - against Cisco's explicit denial that it would ever happen! - in 12.2(50)SE2 or SE3. Don't forget to set your sdm profile correctly, or the system will not start IPv6 routing processes:

sdm prefer dual-ipv4-and-ipv6 routing

Beware of the limited number of routes/prefixes.

Elmar.
[c-nsp] 7600 port-channel trouble
Hi group,

Please help: I'm using EtherChannel between two 7600s - two pairs of ports in module 3 and two pairs in module 4 (6708 cards). I've tried to migrate the channel ports on one of the routers from the card in slot 4 to the same type of card in slot 7, without any success.

# sh etherchannel 1 port-channel
Port-channels in the group:
----------------------------
Port-channel: Po1

Age of the Port-channel   = 103d:04h:07m:12s
Logical slot/port   = 14/1          Number of ports = 4
GC                  = 0x00010001    HotStandBy port = null
Port state          = Port-channel Ag-Inuse
Protocol            = PAgP
Fast-switchover     = disabled
Direct Load Swap    = disabled

Ports in the Port-channel:

Index   Load   Port    EC state        No of bits
------+------+-------+---------------+-----------
  2     09     Te3/7   Desirable-Sl        2
  0     22     Te3/8   Desirable-Sl        2
  3     44     Te4/7   Desirable-Sl        2
  1     90     Te7/8   Desirable-Sl        2

Time since last port bundled:    0d:00h:13m:24s    Te7/8
Time since last port Un-bundled: 0d:00h:13m:44s    Te4/8

Last applied Hash Distribution Algorithm: Adaptive
Channel-group Iedge Counts:
--------------------------:
Access ref count   : 0
Iedge session count: 0

...but there are no bits/packets at all on the Te7/8 interface counters.
Re: [c-nsp] Books for Nexus Arch
As well as the books, if you have access to Cisco Networkers/Live material, the "NX-OS Software Architecture" and "Nexus Hardware Architecture" session(s) put together by your friendly, clueful Cisco folks are likely useful too. There are a few of us on this list who have spent countless hours putting together that material; it's good stuff if you have access to it. If you don't, then ping your friendly Cisco contacts - no doubt they can get it to you in PDF format.

cheers,
lincoln.

On 18/10/2010, at 2:18 AM, Alessandro Braga wrote:
> Folks, thanks a lot! Good stuff!!
> Rgs, AB

2010/10/13 quinn snyder snyd...@gmail.com:
> Having used this book - it's of some value. It's a great tool for configuration of the device, but quite lacking on architecture and the little one-offs of the device. If you need to get the device configured, it's a good reference.
> q.

On Wed, 13 Oct 2010, christopher.mar...@usc-bt.com wrote:
> Nikhil said:
>> Take a look: NX-OS Book: http://www.ciscopress.com/bookstore/product.asp?isbn=1587058928
> Do you mention this book because it has Nexus in the title, or because you read it and found it valuable?
> /chris
[c-nsp] vs mac table in 3750 switches
Hi all.

L3sw ---trunk--- L2sw1 ---trunk--- L2sw2

Is it possible that the L2sw2 switch won't send MAC address table updates to the other switches if the source and destination MACs are located on itself?

/Arne
Re: [c-nsp] vs mac table in 3750 switches
> L3sw ---trunk--- L2sw1 ---trunk--- L2sw2
> Is it possible that the L2sw2 switch won't send MAC address table updates to the other switches if the source and destination MACs are located on itself?

Switches don't send MAC address table updates to one another. Switches send Ethernet frames, and *learn* MAC addresses based on which port/VLAN a frame is received on.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
[c-nsp] need help firewall in urgent
Hi,

I have a PIX 501, but it doesn't have ASDM support. How can I configure it from the CLI to map a private address to a public one and open port 53 for a name server, allowing access from outside and inside?

Thank you so much.
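For the archives: on a PIX 501 (6.3-style CLI) the usual shape is a static translation plus an inbound access-list. This is a sketch only, with made-up addresses (203.0.113.10 public, 192.168.1.10 inside DNS server) - verify the syntax against your software version:

```
! Hedged sketch, PIX 6.3-style CLI; addresses are invented for illustration.
! One-to-one static NAT for the DNS server:
static (inside,outside) 203.0.113.10 192.168.1.10 netmask 255.255.255.255
! Permit DNS from the outside to the public address:
access-list outside_in permit udp any host 203.0.113.10 eq domain
access-list outside_in permit tcp any host 203.0.113.10 eq domain
! Apply the ACL inbound on the outside interface:
access-group outside_in in interface outside
```

Inside-to-server access is normally already permitted by the interface security levels, so no extra rule is needed for that direction.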
Re: [c-nsp] Pica8 - Open Source Cloud Switch
Hello,

For a better overview of a Cloud (or OpenFlow) switch, I would like to invite you to read the presentation entitled "FI technologies on cloud computing and trusty networking" from our partner, Chunghwa Telecom (a leading ISP in Taiwan): http://www.asiafi.net/meeting/2010/summerschool/p/chu.pdf

Mail: pica8@gmail.com

2010/10/18 Lin Pica8 pica8@gmail.com:
> Hello, we are starting to distribute Pica8 Open Source Cloud Switches: http://www.pica8.com/
> In particular, a Pica8 switch with the following specifications (including open source firmware):
> - HW: 48x1Gbps + 4x10Gbps
> - Firmware: L2/L3 management for VLAN, LACP, STP/RSTP, LLDP, OSPF, RIP, static routes, PIM-SM, VRRP, IGMP, IGMP Snooping, IPv6, Radius/Tacacs+, as well as OpenFlow 1.0
> would compete with a Cisco Catalyst 2960-S, model WS-C2960S-48TD-L, for half the price (~2k USD).
> Mail: pica8@gmail.com
Re: [c-nsp] vs mac table in 3750 switches
Hi,

On Tue, Oct 19, 2010 at 11:54:14AM +0200, Arne Larsen / Region Nordjylland wrote:
> L3sw ---trunk--- L2sw1 ---trunk--- L2sw2
> Is it possible that the L2sw2 switch won't send MAC address table updates to the other switches if the source and destination MACs are located on itself?

Classic Ethernet switches never send MAC address table updates (there is no protocol for that - this will change with TRILL and friends, but you won't have that yet). MAC address tables get updated when switches see packets with as-yet-unknown source MAC addresses - and to answer your question: if machines on L2sw2 are talking to other machines on L2sw2, those frames are not sent to L2sw1, and thus L2sw1 will not learn those MAC addresses.

gert

--
USENET is *not* the non-clickable part of WWW!   //www.muc.de/~gert/
Gert Doering - Munich, Germany   g...@greenie.muc.de
fax: +49-89-35655025   g...@net.informatik.tu-muenchen.de
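The learning behaviour described above is easy to model. A minimal sketch (switch, port, and MAC names are invented) - note the one nuance that the first unknown-unicast frame *is* flooded across the trunk, so the upstream switch learns the sender's MAC from that flood, but never the MAC of a host it only ever sees replied-to locally:

```python
# Toy model of source-MAC learning: traffic that stays local to L2sw2
# never crosses the trunk, so L2sw1 never learns the local destination MAC.
class Switch:
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports
        self.table = {}                    # MAC -> port it was learned on

    def receive(self, port, src, dst):
        """Learn src on this port; return the ports to forward out of."""
        self.table[src] = port
        if dst in self.table:              # known unicast: one port
            return [self.table[dst]]
        return [p for p in self.ports if p != port]   # unknown: flood

l2sw1 = Switch("L2sw1", ["trunk", "uplink"])
l2sw2 = Switch("L2sw2", ["trunk", "portA", "portB"])

# Host "aa" on L2sw2 sends to unknown host "bb": the frame is flooded,
# including across the trunk, so L2sw1 does learn "aa" from this flood.
for out in l2sw2.receive("portA", "aa", "bb"):
    if out == "trunk":
        l2sw1.receive("trunk", "aa", "bb")

# "bb" replies: L2sw2 already knows "aa" on portA, so the frame stays
# local and L2sw1 never sees (or learns) "bb".
reply_ports = l2sw2.receive("portB", "bb", "aa")
print(reply_ports)                 # ['portA'] - the trunk is not used
print("bb" in l2sw1.table)         # False
```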
[c-nsp] Cisco 3750 Reboot Issue
Has anyone experienced crashes with the 3750-12S switches? We have tried 12.2(44), 12.2(50), 12.2(53), and 12.2(55) to alleviate the issue, but no difference. I had read about a Cisco bug for a memory leak where enabling ip routing on the switch was a workaround. I have tried this, but no change. The switches will stay up for 2-3 days and then crash again. One switch just has two layer 2 EtherChannel interfaces configured; the other is a distribution layer with single dot1q trunks. Here is a copy of the log for the crash event:

Oct 18 17:58:26: %PLATFORM-1-CRASHED: System previously crashed with the following message:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Cisco IOS Software, C3750 Software (C3750-IPSERVICESK9-M), Version 12.2(50)SE1, RELEASE SOFTWARE (fc2)
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Copyright (c) 1986-2009 by Cisco Systems, Inc.
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Compiled Mon 06-Apr-09 08:19 by amvarma
Oct 18 17:58:26: %PLATFORM-1-CRASHED:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Debug Exception (Could be NULL pointer dereference) Exception (0x2000)!
Oct 18 17:58:26: %PLATFORM-1-CRASHED:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: SRR0 = 0x01963184 SRR1 = 0x00029230 SRR2 = 0x006B79A4 SRR3 = 0x00021000
Oct 18 17:58:26: %PLATFORM-1-CRASHED: ESR = 0x DEAR = 0x TSR = 0x8C00 DBSR = 0x1000
Oct 18 17:58:26: %PLATFORM-1-CRASHED:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: CPU Register Context:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Vector = 0x2000 PC = 0x00A70FBC MSR = 0x00029230 CR = 0x3003
Oct 18 17:58:26: %PLATFORM-1-CRASHED: LR = 0x00A70F80 CTR = 0x019584F0 XER = 0xE05F
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R0 = 0x00A70F80 R1 = 0x02FA5F38 R2 = 0x R3 = 0x0376087C
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R4 = 0x R5 = 0x R6 = 0x R7 = 0x
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R8 = 0x7530 R9 = 0x R10 = 0x R11 = 0x0005
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R12 = 0xC197AFF2 R13 = 0x0011 R14 = 0x019568F0 R15 = 0x
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R16 = 0x R17 = 0x R18 = 0x R19 = 0x
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R20 = 0x R21 = 0x R22 = 0x R23 = 0x025E
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R24 = 0xAB1234AB R25 = 0x027297F0 R26 = 0x035A42D0 R27 = 0x
Oct 18 17:58:26: %PLATFORM-1-CRASHED: R28 = 0x0272B8A4 R29 = 0x01EC5BD4 R30 = 0x027297F0 R31 = 0x
Oct 18 17:58:26: %PLATFORM-1-CRASHED:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Stack trace:
Oct 18 17:58:26: %PLATFORM-1-CRASHED: PC = 0x00A70FBC, SP = 0x02FA5F38
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 00: SP = 0x02FA5F48 PC = 0x00A70F80
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 01: SP = 0x02FA5F78 PC = 0x019535D8
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 02: SP = 0x02FA5F90 PC = 0x01953028
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 03: SP = 0x02FA5FC8 PC = 0x01953C7C
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 04: SP = 0x02FA5FE0 PC = 0x01956780
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 05: SP = 0x02FA5FF8 PC = 0x01956990
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 06: SP = 0x02FA6000 PC = 0x00A72F48
Oct 18 17:58:26: %PLATFORM-1-CRASHED: Frame 07: SP = 0x PC = 0x00A69A18

Any help would be greatly appreciated.

Erik Fritzler
Director of NOC Services
Dark Fiber Solutions, Inc.
600 ½ Grant Ave.
York, NE 68467
[c-nsp] PIX ipv6 neighbour problem
Hello,

my PIX515E is running PIX 8.0.4 with multiple contexts. In one of my contexts I would like to have IPv6 connectivity. The interface is configured as follows (IPv6 addresses anonymized):

-- interface:
interface GigabitEthernet1
 nameif inside
 security-level 100
 ip address 192.168.1.232 255.255.255.0
 ipv6 address :::1::e8/64
 ipv6 nd prefix :::1::/64 no-advertise no-autoconfig

-- ipv6 routing:
Codes: C - Connected, L - Local, S - Static
L   :::1::e8/128 [0/0]
     via ::, inside
C   :::1::/64 [0/0]
     via ::, inside
L   fe80::/10 [0/0]
     via ::, int_ipv6
     via ::, outside
     via ::, inside
L   ff00::/8 [0/0]
     via ::, int_ipv6
     via ::, outside
     via ::, inside
S   ::/0 [0/0]
     via :::1::d, inside

When I try to ping the IP (:::1::e8) of the PIX on the inside interface from a Linux box, I get no responses. When I look at the output of "show ipv6 neighbors", run multiple times during the pings, I get the following:

pix515e/s6ipv6# show ipv6 neigh
IPv6 Address                 Age  Link-layer Addr  State  Interface
fe80::20a:b8ff:fefb:6d43     518  000a.b8fb.6d43   STALE  inside
fe80::221:85ff:feca:6146     -    0021.85ca.6146   REACH  inside
pix515e/s6ipv6# show ipv6 neigh
IPv6 Address                 Age  Link-layer Addr  State  Interface
fe80::20a:b8ff:fefb:6d43     518  000a.b8fb.6d43   STALE  inside
:::1::d                      0    0021.85ca.6146   DELAY  inside
fe80::221:85ff:feca:6146     -    0021.85ca.6146   REACH  inside
pix515e/s6ipv6# show ipv6 neigh
IPv6 Address                 Age  Link-layer Addr  State  Interface
fe80::20a:b8ff:fefb:6d43     519  000a.b8fb.6d43   STALE  inside
:::1::d                      0    0021.85ca.6146   PROBE  inside
fe80::221:85ff:feca:6146     -    0021.85ca.6146   REACH  inside
pix515e/s6ipv6# show ipv6 neigh
IPv6 Address                 Age  Link-layer Addr  State  Interface
fe80::20a:b8ff:fefb:6d43     519  000a.b8fb.6d43   STALE  inside
fe80::221:85ff:feca:6146     -    0021.85ca.6146   REACH  inside

Here is the output of the PIX debugging:

Oct 19 15:55:52 pix515e %PIX-7-609001: Built local-host identity:fe80::20e:cff:fe80:c80c
Oct 19 15:55:52 pix515e %PIX-7-609001: Built local-host inside:ff02::1
Oct 19 15:55:52 pix515e %PIX-6-302020: Built outbound ICMP connection for faddr ff02::1/0 gaddr fe80::20e:cff:fe80:c80c/0 laddr fe80::20e:cff:fe80:c80c/0
Oct 19 15:55:52 pix515e %PIX-7-711001: ICMPv6-ND: Sending RA to ff02::1 on inside
Oct 19 15:55:52 pix515e %PIX-7-711001: ICMPv6-ND: MTU = 1500
Oct 19 15:55:52 pix515e %PIX-7-711001: IPV6: source fe80::20e:cff:fe80:c80c (local)
Oct 19 15:55:52 pix515e %PIX-7-711001: dest ff02::1 (inside)
Oct 19 15:55:52 pix515e %PIX-7-711001: traffic class 224, flow 0x0, len 72+0, prot 58, hops 255, originating
Oct 19 15:55:52 pix515e %PIX-7-711001: IPv6: Sending on inside
Oct 19 15:55:56 pix515e %PIX-6-302021: Teardown ICMP connection for faddr ff02::1/0 gaddr fe80::20e:cff:fe80:c80c/0 laddr fe80::20e:cff:fe80:c80c/0
Oct 19 15:55:56 pix515e %PIX-7-609002: Teardown local-host identity:fe80::20e:cff:fe80:c80c duration 0:00:04
Oct 19 15:55:56 pix515e %PIX-7-609002: Teardown local-host inside:ff02::1 duration 0:00:04

Neighbour discovery works fine if I ping one Linux host from another.

Greetings, and thanks for any help,
Andreas

--
Zentrum für Datenverarbeitung
Abteilung Netze
Tel: 07071-2970342   Fax: 07071-295912
[c-nsp] Using address-family context
Is it safe, for an existing BGP4 session/config without any 'address-family' context, to use the 'address-family ipv6 unicast' context to add a BGP6 peer for the first time?

Thanks!

--
Randy
Re: [c-nsp] Low end cisco switch that supports dot1q tunneling and design question
Depends on what you mean by low end. You could try looking at the 2960s: http://www.cisco.com/en/US/products/ps6406/index.html

Also, I'm not sure what you mean by "can the tunnel terminate", but if both switches in the same VLAN connect using an access port, sure - no dot1q (or ISL) trunk will form. This could also be solved with VTP pruning. VTP pruning prevents unneeded traffic from traveling through trunks that don't need it, but still lets you form trunks on your links to the distribution layer. (It would help you standardize and save configuration headaches in the future if you need another VLAN downstream.) http://www.cisco.com/univercd/cc/td/doc/product/lan/cat5000/rel_4_2/config/vlans.htm#xtocid79807

-----Original Message-----
From: cisco-nsp-boun...@puck.nether.net [mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Jeff
Sent: Tuesday, October 19, 2010 8:39 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Low end cisco switch that supports dot1q tunneling and design question

> Hi there, can anyone provide recommendations for a low-end Cisco switch that provides dot1q tunneling features? Also, can the tunnel terminate on multiple switches if they are all configured with the same access VLAN tag?
> Thanks, Jeff.
Re: [c-nsp] PIX ipv6 neighbour problem
On Tue, 2010-10-19 at 16:02 +0200, Andreas Mueller wrote:
> interface GigabitEthernet1
>  nameif inside
>  security-level 100
>  ip address 192.168.1.232 255.255.255.0
>  ipv6 address :::1::e8/64
>  ipv6 nd prefix :::1::/64 no-advertise no-autoconfig
[...]
> when I tried to ping the IP (:::1::e8) of the PIX on the inside interface from a linux box I get no responses.
[...]

Can you ping fe80::221:85ff:feca:6146 from your client? What does "ip -6 neighbor list" on the client say? What addresses does the client have, both link-local and in your configured prefix?

--
Peter
Re: [c-nsp] PIX ipv6 neighbour problem
Hi Andreas,

On Tue, 19 Oct 2010, Andreas Mueller wrote:
> Hello, my PIX515E is running PIX 8.0.4 with multiple contexts. In one of my contexts I would like to have IPv6 connectivity. The interface is configured as

I silently assume, but just to verify - no shared interface between the contexts?

[snip]

> S ::/0 [0/0] via :::1::d, inside
> [show ipv6 neigh snapshots snipped - :::1::d appears in DELAY, then PROBE, then disappears]

Looks like we've already got the neighbor entry for pref:1::d, then tried to send the NS to it and failed?

> [debug output snipped - RA sent to ff02::1, ICMP connection built and torn down]

Based on the timestamps, it seems the ICMP connection was built to send the RA - so I do not see any traces of ND working here at all...

Give it a shot this way: "debug ipv6 nd" and "debug ipv6 icmp", then "clear ipv6 neigh". You should see something like this when pinging from the Linux box:

ASA(config)# clear ipv6 neigh
ASA(config)# deb ipv6 nd
ASA(config)# deb ipv6 icmp
ASA(config)# sh ipv6 neigh
ASA(config)#
ICMPv6: Received ICMPv6 packet from 2002:c01d:cafe:1002:218:51ff:fef9:bceb, type 128
ICMPv6: Received echo request from 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Sending echo reply to 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6-ND: DELETE - INCMP: 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6-ND: Sending NS for 2002:c01d:cafe:1002:218:51ff:fef9:bceb on inside
ICMPv6: Received ICMPv6 packet from 2002:c01d:cafe:1002:218:51ff:fef9:bceb, type 136
ICMPv6-ND: Received NA for 2002:c01d:cafe:1002:218:51ff:fef9:bceb on inside from 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6-ND: INCMP - REACH: 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Received ICMPv6 packet from 2002:c01d:cafe:1002:218:51ff:fef9:bceb, type 128
ICMPv6: Received echo request from 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Sending echo reply to 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Received ICMPv6 packet from 2002:c01d:cafe:1002:218:51ff:fef9:bceb, type 128
ICMPv6: Received echo request from 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Sending echo reply to 2002:c01d:cafe:1002:218:51ff:fef9:bceb
ICMPv6: Received ICMPv6 packet from fe80::218:51ff:fef9:bceb, type 135
ICMPv6-ND: Received NS for fe80::21e:7aff:fe36:6d37 on inside from fe80::218:51ff:fef9:bceb
ICMPv6-ND: DELETE - INCMP: fe80::218:51ff:fef9:bceb
ICMPv6-ND: INCMP - STALE: fe80::218:51ff:fef9:bceb
ICMPv6-ND: Sending NA for fe80::21e:7aff:fe36:6d37 on inside
Re: [c-nsp] Low end cisco switch that supports dot1q tunneling and design question
Hi.

> Also, can the tunnel terminate on multiple switches if they are all configured with the same access vlan tag?

Yes, but not without some gotchas. If you have a lot of broadcast traffic, and the inner VLANs (C-VLANs) are only sparsely meshed, you will have a lot more broadcast traffic than in a normal flat dot1q domain. Because the tunneling switches have no knowledge of the inner VLANs (C-VLANs), broadcasts are flooded on all ports *even if a C-VLAN doesn't exist on a certain port*.

In other words: if VLAN X is deployed between ports A and B, and VLAN Y between A and C (and no other VLANs exist), all broadcast traffic entering port A on VLAN X will be flooded to both port B and port C (on VLAN X). If there are more than three ports in the domain, things just get worse...

If all C-VLANs are to be found on all ports (fully meshed), this drawback doesn't matter; the broadcast had to be flooded to all ports anyway.

--
Pelle

RFC1925, truth 11: Every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.
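The A/B/C example above can be written down as a toy model: the tunneling switch forwards on the outer S-VLAN only, so it floods broadcasts to ports that carry no trace of the inner C-VLAN. Port and VLAN names below are the invented ones from the example.

```python
# Toy model: a dot1q-tunnel (Q-in-Q) switch floods on the outer S-VLAN,
# ignoring the inner C-VLAN, versus what a C-VLAN-aware switch would do.
S_VLAN_PORTS = {"A", "B", "C"}          # all tunnel ports share one S-VLAN

def qinq_flood(ingress, c_vlan):
    # The tunneling switch never inspects c_vlan; it floods the S-VLAN.
    return S_VLAN_PORTS - {ingress}

def ideal_flood(ingress, c_vlan, membership):
    # What a switch that knew the C-VLAN topology would do.
    return membership[c_vlan] - {ingress}

# Vlan X lives on ports A and B; Vlan Y on ports A and C.
membership = {"X": {"A", "B"}, "Y": {"A", "C"}}
print(sorted(qinq_flood("A", "X")))               # ['B', 'C']
print(sorted(ideal_flood("A", "X", membership)))  # ['B']
```

Port C receives the Vlan X broadcast even though Vlan X does not exist there, which is exactly the extra load described above.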
Re: [c-nsp] Using address-family context
Hi,

On 20 October 2010 03:39, Randy McAnally r...@fast-serv.com wrote:
> Is it safe for existing BGP4 sessions/config without 'address-family' context to use the 'address-family ipv6 unicast' context to add a BGP6 peer for the first time?

Changing the list of advertised address families will reset the BGP session. If you have the routers peered with more than one neighbour, that should be safe. Just remember that the session gets reset the very moment you activate a peer in the ipv6 address family.

kind regards
Pshem
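In IOS terms the change is just an extra address-family stanza. A hedged sketch (the AS numbers and neighbor address are invented for illustration; existing v4-only neighbors stay outside the new stanza):

```
! Sketch only - IOS-style config with made-up ASNs/addresses.
router bgp 65000
 neighbor 2001:DB8::1 remote-as 65001
 address-family ipv6 unicast
  neighbor 2001:DB8::1 activate
 exit-address-family
```

As noted above, the reset to watch for is on any session whose advertised address families change when the new peer/AF is activated.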
[c-nsp] sh proc cpu hist - 3750 stack
All-

We have a stack of five 3750G-48TS switches, and I am curious whether it's possible to find the CPU utilization of each member of the stack. According to http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/troubleshooting/cpu_util.html, "In a switch stack, CPU utilization is measured only on the master switch."

In our particular stack we have two switches dedicated to servers and the balance dedicated to clients (mostly Citrix terminals with a handful of desktops and laptops), and the master switch is one of the client switches. So that raises the question: how can I see the CPU utilization on the two server switches (which happen to be about 20 degrees F hotter than the client switches, so I know they are working harder)?

Thanks in advance,

Jeff Wojciechowski
LAN, WAN and Telephony Administrator
Midland Paper Company
101 E Palatine Rd, Wheeling, IL 60090
tel: 847.777.2829   fax: 847.403.6829
e-mail: jeff.wojciechow...@midlandpaper.com
http://www.midlandpaper.com
Re: [c-nsp] sh proc cpu hist - 3750 stack
On Tue, 2010-10-19 at 15:48 -0500, Jeff Wojciechowski wrote:
> So - that raises the question - how can I see the CPU utilization on the 2 server switches (which happen to be about 20 degrees F hotter than the client switches so I know they are working harder)?

You could use "remote command N show proc cpu", where N is the member index. But even though the hotter switches might have a higher switching load, you probably won't see any correlation in the CPU load: the devices are strictly[0] hardware forwarding.

--
Peter

[0]: Usual disclaimer for unsupported configurations.
[c-nsp] Redistributing ipv6 static default route into eigrp failure
Okay, I must be missing something. I've set up a default static route that shows up in the IPv6 routing table, but it is not in the local IPv6 EIGRP topology table, nor is it being redistributed out. Anyone have a clue? Or is this yet another IPv6 bug?

interface Vlan4
 ip address 129.77.4.252 255.255.255.0
 ipv6 address 2620:0:2810:104::252/64
 ipv6 enable
 ipv6 eigrp 14607
 standby version 2
 standby 4 ip 129.77.4.254
 standby 4 priority 108
 standby 4 preempt
 standby 104 ipv6 2620:0:2810:104::254/64
 standby 104 priority 108
 standby 104 preempt
end

ipv6 route ::/0 Vlan4 2620:0:2810:104::1

ipv6 router eigrp 14607
 eigrp router-id 129.77.40.8
 redistribute static

switch-user2#show ipv6 route
IPv6 Routing Table - Default - 19 entries
Codes: C - Connected, L - Local, S - Static, U - Per-user Static route
       B - BGP, R - RIP, I1 - ISIS L1, I2 - ISIS L2
       IA - ISIS interarea, IS - ISIS summary, D - EIGRP, EX - EIGRP external
       O - OSPF Intra, OI - OSPF Inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
       ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
S   ::/0 [1/0]
     via 2620:0:2810:104::1, Vlan4
C   2620:0:2810:104::/64 [0/0]
     via Vlan4, directly connected
L   2620:0:2810:104::252/128 [0/0]
     via Vlan4, receive
D   2620:0:2810:11E::/64 [90/768]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:E001::252/127 [90/1024]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:E001::254/127 [90/768]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
C   2620:0:2810:E002::252/127 [0/0]
     via TenGigabitEthernet1/1, directly connected
L   2620:0:2810:E002::253/128 [0/0]
     via TenGigabitEthernet1/1, receive
D   2620:0:2810:E002::254/127 [90/768]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
D   2620:0:2810:E101::252/127 [90/1024]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:E101::254/127 [90/768]
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
C   2620:0:2810:E102::252/127 [0/0]
     via TenGigabitEthernet1/2, directly connected
L   2620:0:2810:E102::253/128 [0/0]
     via TenGigabitEthernet1/2, receive
D   2620:0:2810:E102::254/127 [90/768]
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:FF01::1/128 [90/128512]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:FF01::2/128 [90/128512]
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
D   2620:0:2810:FF01::7/128 [90/128768]
     via FE80::2D0:FF:FEF3:7000, TenGigabitEthernet1/1
     via FE80::2D0:4FF:FE16:0, TenGigabitEthernet1/2
LC  2620:0:2810:FF01::8/128 [0/0]
     via Loopback0, receive
L   FF00::/8 [0/0]
     via Null0, receive

switch-user2#show ipv6 eigrp topology
EIGRP-IPv6 Topology Table for AS(14607)/ID(129.77.40.8)
Codes: P - Passive, A - Active, U - Update, Q - Query,
       R - Reply, r - reply Status, s - sia Status

P 2620:0:2810:E101::254/127, 1 successors, FD is 768
    via FE80::2D0:4FF:FE16:0 (768/512), TenGigabitEthernet1/2
P 2620:0:2810:104::/64, 1 successors, FD is 2816
    via Connected, Vlan4
P 2620:0:2810:FF01::2/128, 1 successors, FD is 128512
    via FE80::2D0:4FF:FE16:0 (128512/128256), TenGigabitEthernet1/2
P 2620:0:2810:E001::252/127, 2 successors, FD is 1024
    via FE80::2D0:FF:FEF3:7000 (1024/768), TenGigabitEthernet1/1
    via FE80::2D0:4FF:FE16:0 (1024/768), TenGigabitEthernet1/2
    via FE80::209:44FF:FE22:EEC0 (3072/512), Vlan4
P 2620:0:2810:FF01::7/128, 2 successors, FD is 128768
    via FE80::2D0:FF:FEF3:7000 (128768/128512), TenGigabitEthernet1/1
    via FE80::2D0:4FF:FE16:0 (128768/128512), TenGigabitEthernet1/2
    via FE80::209:44FF:FE22:EEC0 (130816/128256), Vlan4
P 2620:0:2810:FF01::1/128, 2 successors, FD is 128512
    via FE80::2D0:FF:FEF3:7000 (128512/128256), TenGigabitEthernet1/1
    via FE80::2D0:4FF:FE16:0 (128512/128256), TenGigabitEthernet1/2
P 2620:0:2810:E002::254/127, 1 successors, FD is 768
    via FE80::2D0:FF:FEF3:7000 (768/512), TenGigabitEthernet1/1
P 2620:0:2810:E102::254/127, 1 successors, FD is 768
    via FE80::2D0:4FF:FE16:0 (768/512), TenGigabitEthernet1/2
P 2620:0:2810:E001::254/127, 1 successors, FD is 768
    via FE80::2D0:FF:FEF3:7000 (768/512), TenGigabitEthernet1/1
P 2620:0:2810:E101::252/127, 2 successors, FD is 1024
    via FE80::2D0:FF:FEF3:7000 (1024/768), TenGigabitEthernet1/1
    via FE80::2D0:4FF:FE16:0 (1024/768), TenGigabitEthernet1/2
    via FE80::209:44FF:FE22:EEC0 (3072/512), Vlan4
P 2620:0:2810:FF01::8/128, 1 successors, FD is 128256
    via Connected, Loopback0
P 2620:0:2810:E002::252/127, 1 successors, FD is 512
    via Connected, TenGigabitEthernet1/1
P 2620:0:2810:11E::/64, 2 successors, FD is 768
    via FE80::2D0:FF:FEF3:7000 (768/512), TenGigabitEthernet1/1
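[One thing worth checking (a guess, not a verified fix for this platform/release): unlike OSPF, EIGRP has no default seed metric for redistributed routes, so a redistributed static can end up with an infinite metric and never appear in the topology table. A sketch of supplying a seed metric on the redistribute statement; the five metric values (bandwidth, delay, reliability, load, MTU) below are placeholders, not taken from this config:

    ipv6 router eigrp 14607
     eigrp router-id 129.77.40.8
     redistribute static metric 10000 100 255 1 1500

Alternatively, "default-metric 10000 100 255 1 1500" under the EIGRP process has the same effect for all redistributed protocols.]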
[c-nsp] RFC 4798 IPv6 over IPv4 MPLS Backbone Configuration
Hi All, I was wondering if someone could point me to Cisco documentation on how to configure a Cisco box to exchange IPv6 reachability information based on RFC 4798 in BGP (especially when the BGP neighbor is a non-Cisco device such as Juniper). Thanks. ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
[c-nsp] 6pe Cisco-Juniper Re: RFC 4798 IPv6 over IPv4 MPLS Backbone Configuration
http://www.cisco.com/en/US/products/sw/iosswrel/ps1835/products_data_sheet09186a008052edd3.html http://www.juniper.net/techpubs/software/junos/junos82/feature-guide-82/download/fg-ipv6-over-mpls.pdf On Oct 19, 2010, at 6:25 PM, texas ex wrote: Hi All, I was wondering if someone could point me to Cisco documentation on how to configure a Cisco box to exchange IPv6 reachability information based on RFC 4798 in BGP (especially when the BGP neighbor is a non-Cisco device such as Juniper). Thanks. ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
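[For reference, the IOS side of 6PE comes down to an MP-BGP session between loopbacks with IPv6 routes activated and label exchange enabled. A minimal sketch; the AS number and neighbor address are placeholders, not from this thread:

    router bgp 65000
     neighbor 10.255.0.2 remote-as 65000
     neighbor 10.255.0.2 update-source Loopback0
     !
     address-family ipv6
      neighbor 10.255.0.2 activate
      neighbor 10.255.0.2 send-label

On the Juniper side the rough equivalent (per the feature guide linked above, worth double-checking for your JunOS release) is "family inet6 labeled-unicast" under the BGP group plus "ipv6-tunneling" under "protocols mpls".]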
[c-nsp] SLA tracking, what do you ping?
When you use IP SLA to track whether an upstream is working on an ISP connection (from the customer's point of view, when you are not the ISP and don't know what is safe to ping), what do you usually configure as the ping target? I have found that one hop up from the CPE is not necessarily reliable on DSL/cable. I was wondering if anyone can share their experience on what works well and what to look out for. Thanks, ___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
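[For anyone wiring this up, a common pattern is to probe a stable address a few hops in rather than the first hop, and hang a floating static off the track object. A sketch with documentation-range placeholder addresses; note the syntax varies by release (older IOS uses "ip sla monitor" and "track ... rtr ..."):

    ip sla 10
     icmp-echo 192.0.2.1 source-interface FastEthernet0/0
     frequency 10
    ip sla schedule 10 life forever start-time now
    !
    track 10 ip sla 10 reachability
     delay down 30 up 60
    !
    ip route 0.0.0.0 0.0.0.0 203.0.113.1 track 10
    ip route 0.0.0.0 0.0.0.0 198.51.100.1 250

The "delay" dampening keeps a briefly lossy target from flapping the default route.]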
[c-nsp] CoPP for SSH on nexus 7k. Confused!
** IP addresses used are imaginary **

Here's a really dumbed-down version of my CoPP implementation. It's pretty simple. I have ACLs to allow SSH from anywhere in my network, and to allow telnet from anywhere in my network (note there is an unintentional deny statement in that access-list). Then there is the ACL for matching any other SSH traffic, and my policy map says any SSH from outside my network gets dropped. However, in reality I am able to SSH into my box from anywhere, even from outside my network. So I have two questions:

1. I assume this is happening because all traffic is matching the deny statement in the ACL copp-system-acl-telnet. What does a deny in a CoPP ACL do?
2. Isn't there a 'deny ip any any' by default at the end of all access-lists? In that case, even the ACL copp-system-acl-ssh would have a deny ip any any at the end.

I have tried my best to explain, but if you don't understand the scenario, I can try again ;)

class-map type control-plane match-any copp-system-class-management
  match access-group name copp-system-acl-ssh
  match access-group name copp-system-acl-telnet
class-map type control-plane match-any copp-system-class-undesirable
  match access-group name copp-system-acl-ssh-deny

policy-map type control-plane copp-system-policy
  class copp-system-class-management
    police cir 1 kbps bc 375 ms conform transmit violate drop
  class copp-system-class-undesirable
    police cir 32 kbps bc 375 ms conform drop violate drop
  class class-default
    police cir 100 kbps bc 375 ms conform transmit violate drop

control-plane
  service-policy input copp-system-policy

ip access-list copp-system-acl-ssh
  10 permit tcp 129.63.8.0/24 any eq 22
ip access-list copp-system-acl-telnet
  10 permit ip 129.63.8.0/24 any
  20 deny ip any any
ip access-list copp-system-acl-ssh-deny
  10 permit tcp any any eq 22 log

___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
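[My understanding, worth verifying against the NX-OS CoPP documentation for your release: when an ACL is used inside a class-map, a deny entry (explicit or implicit) just means "not matched by this class"; it never drops the packet itself. Traffic that matches no class simply falls through to class-default, which here transmits within its rate. So the deny in copp-system-acl-telnet excludes traffic from the management class rather than policing it. A sketch of the management ACLs with the stray deny removed, and the telnet ACL narrowed to TCP/23 assuming that was the intent:

    ip access-list copp-system-acl-ssh
      10 permit tcp 129.63.8.0/24 any eq 22
    ip access-list copp-system-acl-telnet
      10 permit tcp 129.63.8.0/24 any eq 23

With that in place, SSH from outside 129.63.8.0/24 should fail the management class, match copp-system-acl-ssh-deny in copp-system-class-undesirable, and be dropped by its "conform drop" action.]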