[c-nsp] ASR1000 and QOS
Hello Everyone,

I am trying to realize a QoS configuration on an ASR 1006 for PPPoE services resold from our national incumbent. On a single GE interface I will receive two classes of service, CoS 0 and CoS 1, each with a set bandwidth, e.g. CoS 0 at 100 Mbps and CoS 1 at 20 Mbps. Each DSLAM is terminated using one VLAN per CoS, so in the end I will have n VLANs for the CoS 0 traffic and x VLANs for the CoS 1 traffic. Things get complicated because we also want to assign a policy to the PPPoE sessions, as the customer lines will have varying line rates. Ideally I would like to shape the n VLANs to the CoS 0 rate and the x VLANs to the CoS 1 rate, and then shape the individual sessions, since each will have a different line rate.

I have tried:

1) With the SE following us (on vacation now, just when we need him) we thought that service policy aggregation would be the way to go:
http://www.cisco.com/en/US/docs/ios/qos/configuration/guide/qos_policies_agg.html
But when we assign the end-user policy via RADIUS it does not get applied, and we get the error:
policy TEST with fragment class can only be attached to ethernet subifc and port-channel subifc
We tinkered a while with various configs, but no go. Let's try something else...

2) Setting up a policy on the GE that shapes on matched VLANs, and sending the service policy for the users via RADIUS. Error message:
service-policy with queueing features on sessions is not allowed in conjunction with interface based
and the policy is not applied. Bummer.

I am thinking about trying to declare the interface bandwidth via RADIUS and then use bandwidth percent instead of shape, but that should count as queueing as well, and the scaling documents for the ASR also have big warnings about the use of lcp:interface-config ...
So here I am, looking for a way to do this. The only other option that comes to mind is placing a box in front of the ASR to shape the VLANs and handling just the sessions on the ASR, but that means another box to purchase, maintain, etc.

If you've made it this far (sorry about the length): has anyone done something similar, or have any suggestions? Thanks in advance!

Brian
___ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/
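For concreteness, the hierarchy described above would look roughly like this in IOS-XE terms. All interface names, policy names, and rates below are invented for illustration, and this is not a verified working configuration; per the errors quoted above, combining a queueing session policy with interface/VLAN policies like this is exactly what the ASR rejects:

```
! Illustrative sketch only -- not a verified ASR1000 solution.
! Parent shapers for the per-DSLAM VLANs, one per CoS aggregate:
policy-map COS0-VLAN-SHAPER
 class class-default
  shape average 100000000      ! CoS 0 aggregate, 100 Mbps
policy-map COS1-VLAN-SHAPER
 class class-default
  shape average 20000000       ! CoS 1 aggregate, 20 Mbps
!
interface GigabitEthernet0/0/0.101
 encapsulation dot1Q 101
 service-policy output COS0-VLAN-SHAPER
!
! Per-session policy intended to be pushed from RADIUS, e.g. via
! cisco-avpair = "ip:sub-qos-policy-out=SESSION-8M"
policy-map SESSION-8M
 class class-default
  shape average 8000000        ! per-customer line rate
```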
Re: [c-nsp] Output drops mysteriously appear/disappear on 3750X
Hi Eric,

This seems to be caused by the software bug below:

CSCtq86186 - Switch stack shows incorrect values for output drops on show interfaces
http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCtq86186

I've verified that this affects 15.0(1)SE3, and it will be fixed in 15.0(2)SE. You can use the 'show platform port-asic stats drop' command to check the hardware drop counters.

Best regards,
Andras

On Wed, Aug 22, 2012 at 12:38 AM, Eric Van Tol e...@atlantech.net wrote:
Hi all, I've got a pair of stacked 3750X switches running 15.0(1)SE3 and I'm noticing something funky. On a pair of mirror destination ports, I'm seeing output drops appear and disappear in the CLI output of 'show interface' at random intervals. What I mean is, after clearing counters on all interfaces, I can do a 'show int gi1/0/16' and see 'output drops 0', re-run the command one second later and see 'output drops 39428200', then re-run it another second later and see 'output drops 0' again. I see this on four monitor session destination ports. The problem started somewhat randomly one day last week, after the switches had worked fine for the two days they'd been in production. Has anyone seen this before? I'm pretty sure it's not just a cosmetic issue, as the analyzer attached to these ports is seeing packet loss on the traffic it's analyzing, but I don't believe the source ports are actually seeing any loss. Output is below - take note of the 'Last clearing of counters' times.
-evt

~~

vsw1-1213c.ss.sls.md#sh int gi2/0/17
GigabitEthernet2/0/17 is up, line protocol is down (monitoring)
  Hardware is Gigabit Ethernet, address is d48c.b549.4511 (bia d48c.b549.4511)
  Description: VL Mirror Port
  MTU 9198 bytes, BW 100 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of show interface counters 01:23:22
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 0 bits/sec, 0 packets/sec
  30 second output rate 3868000 bits/sec, 2243 packets/sec
    0 packets input, 0 bytes, 0 no buffer
    Received 0 broadcasts (0 multicasts)
    0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 0 multicast, 0 pause input
    0 input packets with dribble condition detected
    24027518 packets output, 5217538085 bytes, 0 underruns
    0 output errors, 0 collisions, 0 interface resets
    0 unknown protocol drops
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 pause output
    0 output buffer failures, 0 output buffers swapped out

vsw1-1213c.ss.sls.md#sh int gi2/0/17
GigabitEthernet2/0/17 is up, line protocol is down (monitoring)
  Hardware is Gigabit Ethernet, address is d48c.b549.4511 (bia d48c.b549.4511)
  Description: VL Mirror Port
  MTU 9198 bytes, BW 100 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of show interface counters 01:23:23
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 39428034
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 0 bits/sec, 0 packets/sec
  30 second output rate 3868000 bits/sec, 2243 packets/sec
    0 packets input, 0 bytes, 0 no buffer
    Received 0 broadcasts (0 multicasts)
    0 runts, 0 giants, 0 throttles
    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
    0 watchdog, 0 multicast, 0 pause input
    0 input packets with dribble condition detected
    24027518 packets output, 5217538085 bytes, 0 underruns
    0 output errors, 0 collisions, 0 interface resets
    0 unknown protocol drops
    0 babbles, 0 late collision, 0 deferred
    0 lost carrier, 0 no carrier, 0 pause output
    0 output buffer failures, 0 output buffers swapped out

vsw1-1213c.ss.sls.md#sh int gi2/0/17
GigabitEthernet2/0/17 is up, line protocol is down (monitoring)
  Hardware is Gigabit Ethernet, address is d48c.b549.4511 (bia d48c.b549.4511)
  Description: VL Mirror Port
  MTU 9198 bytes, BW 100 Kbit/sec, DLY 10 usec,
    reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type
[c-nsp] traceroute shows mpls labels...how?
Do you all know how this works? How is traceroute able to report back the MPLS label that is in use on the transit hops? Also wondering why I don't see this on the Windows command line tracert.

Aaron

RP/0/RSP0/CPU0:9k#trace vrf one 1.2.3.4 source 2.4.6.8
 1 19.1911.5 [MPLS: Labels 16001/16220 Exp 0] 2 msec 1 msec 0 msec
 2 19.1911.1 [MPLS: Label 16220 Exp 0] 0 msec 0 msec 1 msec
 3 88.88.191.22 0 msec 0 msec
   19.1911.33 1 msec
 4 88.88.191.18 1 msec 1 msec 0 msec
 5 88.88.135.221 10 msec 10 msec 11 msec
 6 122.47.236.130 [MPLS: Label 17039 Exp 1] 47 msec 49 msec 51 msec
 7 122.47.154.53 [MPLS: Labels 0/17017 Exp 1] 48 msec 49 msec 47 msec
 8 122.45.30.134 [MPLS: Labels 23417/17016 Exp 1] 48 msec 49 msec 47 msec
 9 122.45.1.17 [MPLS: Labels 23439/17016 Exp 1] 50 msec 49 msec 51 msec
10 122.45.31.189 [MPLS: Labels 0/17016 Exp 1] 51 msec 54 msec 51 msec
11 122.45.158.34 [MPLS: Labels 0/16009 Exp 1] 49 msec 50 msec 47 msec
12 122.45.104.49 46 msec 46 msec 47 msec
13 122.45.108.14 47 msec 47 msec 47 msec
14 * * *
15 * * *
16 *
Re: [c-nsp] Output drops mysteriously appear/disappear on 3750X
Usually shows up (worse) on port channels. Drops are read as a single binary counter, and the displayed value is calculated as a delta from the previously read values. Occasionally the port-channel values are offset 2x from the previous values (individual ports versus the channel). We've been dealing with the resulting network monitoring panic alerts almost forever. The bug appeared in 12.2.58(?) and we're still waiting on a fixed release to reach general deployment. We've updated to current 15.x but it's still broken.

Jeff
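Jeff's description of the failure mode can be sketched as a toy model. This is an assumption about the mechanism (delta of raw counter reads, with one poll occasionally reading an offset value), not Cisco's actual code:

```python
# Illustrative model only: 'Total output drops' is derived as a delta
# between successive raw counter samples. If one poll reads an offset
# value (e.g. the port-channel aggregate instead of the member port),
# the delta briefly shows a huge spurious figure, then snaps back to
# zero on the next consistent read.

def displayed_drops(prev_raw, curr_raw):
    """Delta of two raw counter samples, floored at zero."""
    return max(curr_raw - prev_raw, 0)

member = 39_428_034      # steady raw counter on the member port
channel = 2 * member     # aggregate counter, offset 2x as described

print(displayed_drops(member, member))   # consistent reads -> 0
print(displayed_drops(member, channel))  # misread -> spurious 39428034
print(displayed_drops(channel, member))  # next poll -> back to 0
```

This matches the symptom in the thread: a counter that flaps between zero and a huge value second to second, without the interface actually dropping that much traffic.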
Re: [c-nsp] traceroute shows mpls labels...how?
MPLS TTL:

By default 'mpls ip propagate-ttl' is enabled in global configuration mode. This lets a user trace the hops of the MPLS routers, with labels, as shown in the traceroute above. It works because the MPLS TTL field is copied from the IP TTL field at label imposition, and at each MPLS LSR hop the TTL is decremented.

To "hide" the MPLS hops you can disable it with 'no mpls ip propagate-ttl' on every LSR in global configuration mode. Disabling TTL propagation makes the MPLS TTL field start at a fixed value of 255, and at each MPLS LSR hop the IP TTL value stays intact. The IP TTL is only decremented again once the egress LSR sends the packet on to the destination host unlabeled.

m1(config)#no mpls ip propagate-ttl

Not sure why you're not seeing it in Windows, probably a very simple traceroute implementation.

/Roger

On Wed, Aug 22, 2012 at 9:21 PM, Aaron aar...@gvtc.com wrote:
Do you all know how this works? How is traceroute able to report back the mpls label that is in use in the transit hops? Also wondering why I don't see this on windows command line tracert
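The TTL logic above can be sketched as a toy simulation (a hypothetical model, not router code): with propagate-ttl the probe's small IP TTL is copied into the MPLS header at imposition and expires inside the core, so every LSR answers; without it the label TTL starts at 255 and the core never answers:

```python
# Toy model of MPLS TTL propagation across a small core (simulation
# sketch, not router code). `core_lsrs` is the number of label-switching
# hops between ingress and egress.

def traceroute_responders(core_lsrs, propagate_ttl):
    """Which devices answer a traceroute crossing `core_lsrs` MPLS hops."""
    responders, ip_ttl = [], 1
    while True:
        # At label imposition the MPLS TTL is either copied from the IP
        # TTL (propagate-ttl on) or set to a fixed 255 (propagate-ttl off).
        mpls_ttl = ip_ttl if propagate_ttl else 255
        if mpls_ttl <= core_lsrs:
            responders.append(f"LSR{mpls_ttl}")  # TTL expires inside the core
            ip_ttl += 1
        else:
            responders.append("egress/destination")
            return responders

print(traceroute_responders(3, True))   # ['LSR1', 'LSR2', 'LSR3', 'egress/destination']
print(traceroute_responders(3, False))  # ['egress/destination']
```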
Re: [c-nsp] traceroute shows mpls labels...how?
Thanks Roger, I knew about that. But what I was asking is: how does that label information get reported back to the device that I'm doing the traceroute from? See what I'm asking? I might put a sniffer on the device that I'm tracing from, look at the ICMP "TTL expired in transit" packets, and see where exactly that label information is embedded/carried in them.

Aaron

-Original Message-
From: Roger Wiklund [mailto:roger.wikl...@gmail.com]
Sent: Wednesday, August 22, 2012 2:49 PM
To: Aaron
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] traceroute shows mpls labels...how?

MPLS TTL: By default 'mpls ip propagate-ttl' is enabled in global configuration mode. This lets a user trace the hops of the MPLS routers, with labels, as shown in the traceroute above.
Re: [c-nsp] traceroute shows mpls labels...how?
There are ICMP extensions to carry MPLS label stack information, but the traceroute application needs to support them. The Windows client doesn't.

Phil

Sent from my iPad

On Aug 22, 2012, at 3:21 PM, Aaron aar...@gvtc.com wrote:
Do you all know how this works? How is traceroute able to report back the mpls label that is in use in the transit hops? Also wondering why I don't see this on windows command line tracert
Re: [c-nsp] traceroute shows mpls labels...how?
Also, this depends on the vendor too. IIRC JunOS uses UDP for its traceroute and IOS uses ICMP, meaning that if you did a traceroute from a Cisco box going over a Juniper network the labels wouldn't show, and vice versa. You brought up something I was 100% sure about a few years ago, but those brain cells are gone.

On Aug 22, 2012 2:58 PM, Phil Bedard phil...@gmail.com wrote:
There are ICMP extensions to carry MPLS label stack information but the trace route application needs to support it. The windows client doesn't.
Re: [c-nsp] traceroute shows mpls labels...how?
That's what I was looking for. So is it part of the MPLS LSR's ICMP implementation that allows those ICMP extensions to carry and send that info, or the tracing device (Windows), or both? In other words, I wonder if the Windows device actually is receiving the ICMP extended packets carrying that MPLS label info, but the tracert Windows application just isn't smart enough to render it in the CLI output. I would think that wireshark on Windows would tell me whether it is or isn't seeing those extensions with the label info.

Aaron

From: Chris Evans [mailto:chrisccnpsp...@gmail.com]
Sent: Wednesday, August 22, 2012 3:03 PM
To: Phil Bedard
Cc: <cisco-nsp@puck.nether.net>; Aaron
Subject: Re: [c-nsp] traceroute shows mpls labels...how?

Also this depends on vendor too. IIRC junos uses udp for its trace routing and ios uses icmp. Meaning that if you did traceroute from a cisco box going over a juniper network the labels wouldn't show and vice versa.
Re: [c-nsp] traceroute shows mpls labels...how?
Yes, wireshark should see the info. There are a couple of RFCs for the extensions, but I don't know what they are offhand. It does not require anything special in the client's ICMP packet, and it only applies to TTL Exceeded and Destination Unreachable responses.

Phil

Sent from my iPad

On Aug 22, 2012, at 4:22 PM, Aaron aar...@gvtc.com wrote:
That's what I was looking for. So is it a part of the mpls lsr's icmp implementation that would allow those icmp extensions to carry and send that info or the tracing device (windows) or both?
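The extensions Phil mentions are the multi-part ICMP structure (RFC 4884) carrying an MPLS Label Stack object (RFC 4950). A rough Python sketch of decoding that object from the extension area of a Time Exceeded message follows; the wire layout is from the RFCs, but the sample bytes are constructed here for illustration, not captured from a real router:

```python
import struct

# Sketch of decoding the RFC 4950 "MPLS Label Stack" ICMP extension.
# Per RFC 4884, the extension structure starts with a 4-byte header
# (version=2 in the top nibble, reserved byte, checksum), followed by
# objects: a 4-byte object header (length, class-num, c-type) plus a
# payload. Class 1 / C-Type 1 is the MPLS Label Stack; each 4-byte
# entry is label(20 bits) | exp(3) | bottom-of-stack(1) | ttl(8).

def parse_mpls_extension(ext):
    version = ext[0] >> 4
    if version != 2:
        raise ValueError("not an RFC 4884 extension structure")
    offset, labels = 4, []
    while offset + 4 <= len(ext):
        length, obj_class, c_type = struct.unpack_from("!HBB", ext, offset)
        payload = ext[offset + 4 : offset + length]
        if obj_class == 1 and c_type == 1:       # MPLS Label Stack object
            for (lse,) in struct.iter_unpack("!I", payload):
                labels.append({
                    "label": lse >> 12,
                    "exp": (lse >> 9) & 0x7,
                    "bottom": (lse >> 8) & 0x1,
                    "ttl": lse & 0xFF,
                })
        offset += length
    return labels

# Construct a sample extension carrying the two labels from hop 1 of the
# trace in this thread (16001/16220); values are illustrative.
lse1 = (16001 << 12) | (0 << 9) | (0 << 8) | 1   # label 16001, exp 0, not bottom
lse2 = (16220 << 12) | (0 << 9) | (1 << 8) | 1   # label 16220, exp 0, bottom
payload = struct.pack("!II", lse1, lse2)
obj = struct.pack("!HBB", 4 + len(payload), 1, 1) + payload
ext = struct.pack("!BBH", 2 << 4, 0, 0) + obj    # version 2, checksum omitted
print([e["label"] for e in parse_mpls_extension(ext)])  # [16001, 16220]
```

This is the same structure a traceroute client (or wireshark) would find appended after the quoted original datagram in the Time Exceeded response.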
Re: [c-nsp] traceroute shows mpls labels...how?
Yes, the Windows box will get it, it just doesn't know what to do with it. A router that isn't MPLS enabled, but can understand it, will show them.

On Aug 22, 2012 3:22 PM, Aaron aar...@gvtc.com wrote:
That's what I was looking for. So is it a part of the mpls lsr's icmp implementation that would allow those icmp extensions to carry and send that info or the tracing device (windows) or both?
Re: [c-nsp] traceroute shows mpls labels...how?
Just the probe packets differ in protocol; the responses are always ICMP.

Phil

Sent from my iPad

On Aug 22, 2012, at 4:03 PM, Chris Evans chrisccnpsp...@gmail.com wrote:
Also this depends on vendor too. IIRC junos uses udp for its trace routing and ios uses icmp. Meaning that if you did traceroute from a cisco box going over a juniper network the labels wouldn't show and vice versa.
Re: [c-nsp] traceroute shows mpls labels...how?
Also this depends on vendor too. IIRC junos uses udp for its trace routing and ios uses icmp. Meaning that if you did traceroute from a cisco box going over a juniper network the labels wouldn't show and vice versa.

Works just fine, labels are shown, both ways in our mixed Cisco - Juniper network.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
Re: [c-nsp] traceroute shows mpls labels...how?
Ahh good. I can't exactly remember what I ran into. Thx for clearing that up!

On Aug 22, 2012 3:56 PM, sth...@nethelp.no wrote:
Works just fine, labels are shown, both ways in our mixed Cisco - Juniper network.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
Re: [c-nsp] traceroute shows mpls labels...how?
Chris/Phil, thanks a lot.

Aaron

From: Phil Bedard [mailto:phil...@gmail.com]
Sent: Wednesday, August 22, 2012 3:29 PM
To: Chris Evans
Cc: <cisco-nsp@puck.nether.net>; Aaron
Subject: Re: [c-nsp] traceroute shows mpls labels...how?

Just the probe packets differ in protocol; the responses are always ICMP responses.
Also wondering why I don't see this on the Windows command-line tracert.

Aaron

RP/0/RSP0/CPU0:9k#trace vrf one 1.2.3.4 source 2.4.6.8
 1 19.1911.5 [MPLS: Labels 16001/16220 Exp 0] 2 msec 1 msec 0 msec
 2 19.1911.1 [MPLS: Label 16220 Exp 0] 0 msec 0 msec 1 msec
 3 88.88.191.22 0 msec 0 msec
   19.1911.33 1 msec
 4 88.88.191.18 1 msec 1 msec 0 msec
 5 88.88.135.221 10 msec 10 msec 11 msec
 6 122.47.236.130 [MPLS: Label 17039 Exp 1] 47 msec 49 msec 51 msec
 7 122.47.154.53 [MPLS: Labels 0/17017 Exp 1] 48 msec 49 msec 47 msec
 8 122.45.30.134 [MPLS: Labels 23417/17016 Exp 1] 48 msec 49 msec 47 msec
 9 122.45.1.17 [MPLS: Labels 23439/17016 Exp 1] 50 msec 49 msec 51 msec
10 122.45.31.189 [MPLS: Labels 0/17016 Exp 1] 51 msec 54 msec 51 msec
11 122.45.158.34 [MPLS: Labels 0/16009 Exp 1] 49 msec 50 msec 47 msec
12 122.45.104.49 46 msec 46 msec 47 msec
13 122.45.108.14 47 msec 47 msec 47 msec
14 * * *
15 * * *
16 *
Re: [c-nsp] traceroute shows mpls labels...how?
That's what I was looking for. So is it part of the MPLS LSR's ICMP implementation that allows those ICMP extensions to carry and send that info, or the tracing device (Windows), or both? In other words, I wonder if the Windows device actually is receiving the extended ICMP packets carrying that MPLS label info, but the Windows tracert application just isn't smart enough to render it in the CLI output.

Since I get MPLS labels shown using ntraceroute (NANOG traceroute) on my FreeBSD box, it's pretty clear that other hosts also receive the MPLS label info (assuming it is transmitted by routers on the way). Whether they can *use* the label info is another question.

Steinar Haug, Nethelp consulting, sth...@nethelp.no
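To make the on-the-wire part of this discussion concrete, here is a minimal sketch (Python; the function name is hypothetical) decoding one MPLS label stack entry as carried in the RFC 4950 ICMP multipart extension. The 32-bit entry layout (20-bit label, 3 EXP bits, bottom-of-stack flag, 8-bit TTL) is the standard MPLS shim format; whether a traceroute client asks for and parses the extension is purely an application matter, which is consistent with Windows tracert showing nothing even if the bytes arrive.

```python
import struct

def decode_label_stack_entry(raw: bytes):
    """Split a 4-byte RFC 4950 label stack entry into (label, exp, s, ttl)."""
    (word,) = struct.unpack("!I", raw)  # network byte order, 32 bits
    label = word >> 12                  # top 20 bits: the MPLS label
    exp = (word >> 9) & 0x7             # 3 EXP (traffic class) bits
    s = (word >> 8) & 0x1               # bottom-of-stack flag
    ttl = word & 0xFF                   # 8-bit TTL
    return label, exp, s, ttl

# Example matching the trace above: label 16220, EXP 0, bottom of
# stack, TTL 1, packed the way a router would emit it:
entry = struct.pack("!I", (16220 << 12) | (0 << 9) | (1 << 8) | 1)
print(decode_label_stack_entry(entry))  # (16220, 0, 1, 1)
```

A label-aware traceroute (ntraceroute, IOS/IOS XR trace) walks these entries out of the ICMP time-exceeded reply and prints them per hop; a client that ignores the extension simply discards the extra bytes.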
Re: [c-nsp] ME3600X Output Drops
Replying to my own message:

* Adjusting the hold queue didn't help.
* Applying QoS per the referenced email stopped the drops immediately - I used something like the below:

policy-map leaf
 class class-default
  queue-limit 491520 bytes
policy-map logical
 class class-default
  service-policy leaf
policy-map root
 class class-default
  service-policy logical

* I would be interested to hear if others have ended up applying a similar policy to all interfaces. Any gotchas? I expect any 10Gbps interfaces would be okay without the QoS - I haven't seen any issue on these myself.
* Apart from this list I have found very little information around this whole issue. Any pointers to other documentation would be appreciated.

Thanks
Ivan

Ivan wrote:
Hi, I am seeing output drops on a ME3600X interface as shown below:

GigabitEthernet0/2 is up, line protocol is up (connected)
  MTU 9216 bytes, BW 100 Kbit/sec, DLY 10 usec,
     reliability 255/255, txload 29/255, rxload 2/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 1000Mb/s, media type is RJ45
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 6w1d, output never, output hang never
  Last clearing of show interface counters 00:12:56
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 231
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 10299000 bits/sec, 5463 packets/sec
  30 second output rate 114235000 bits/sec, 12461 packets/sec
     3812300 packets input, 705758638 bytes, 0 no buffer
     Received 776 broadcasts (776 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 776 multicast, 0 pause input
     0 input packets with dribble condition detected
     9103882 packets output, 10291542297 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

I have read about similar issues on the list:
http://www.gossamer-threads.com/lists/cisco/nsp/157217
https://puck.nether.net/pipermail/cisco-nsp/2012-July/085889.html

1. I have no QoS policies applied to the physical interface or EVCs. Would increasing the hold queue help? Is there a recommended value? The maximum configurable is 24. What is the impact on the 44MB of packet buffer?
2. If the hold queue isn't an option, is configuring QoS required to increase the queue-limit from the default 100us? Again, are there any recommended values, and what impact is there on the available 44MB of packet buffer?
3. I have found that when applying policies to the EVCs, the show policy-map output does not have information for the queue-limit, as I have seen when applying policies to the physical interface. Does this mean that EVCs will still suffer from output drops?

Thanks
Ivan
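As a rough sanity check on the numbers in this thread (a sketch only: the ~100 us default queue depth and the 1 Gbps line rate are taken from the messages and interface output above, not verified against platform documentation), the relation between a byte-denominated queue-limit and the burst it can absorb is simple arithmetic:

```python
def queue_bytes_for_delay(rate_bps: float, delay_s: float) -> float:
    """Bytes of queue needed to absorb a burst lasting delay_s at rate_bps."""
    return rate_bps * delay_s / 8

def queue_delay_for_bytes(rate_bps: float, nbytes: int) -> float:
    """Worst-case queueing delay added by a full queue of nbytes at rate_bps."""
    return nbytes * 8 / rate_bps

# A ~100 us default at 1 Gbps is only about 12.5 kB of queue --
# easy to overflow with a short burst even on a lightly loaded port:
print(queue_bytes_for_delay(1e9, 100e-6))        # 12500.0

# The 491520-byte queue-limit used in the reply above corresponds
# to roughly 3.9 ms of buffering at 1 Gbps:
print(queue_delay_for_bytes(1e9, 491520) * 1e3)  # ~3.93 (ms)
```

That is the trade-off behind the larger queue-limit: it trades a few milliseconds of worst-case latency for headroom against the microbursts that show up as output drops.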
[c-nsp] Port numbering change on 1841?
Hi Guys,

Is it possible for 1841 port numbering to change? Example - an 1841 with 2 onboard FEs + an FE WIC would be:

Fa0/0 (onboard)
Fa0/1 (onboard)
Fa0/0/0 (FE WIC)

We have been told that on a replacement 1841 that went to site (same config as the previous 1841), Fa0/1 and Fa0/0/0 swapped. I seriously doubt this happened, and suspect the Ethernet cables were connected to the incorrect ports.
Re: [c-nsp] Port numbering change on 1841?
No chance, it's a Cisco router, not a Linux box. Interfaces don't just swap themselves around, and particularly not an onboard port with an HWIC one. -Jonesy

Thanks Andrew - Does anyone know of a command to show what hardware is assigned to which interface? i.e. any way to show that the HWIC-1FE is assigned Fa0/0/0?

"sh inventory" only shows the chassis + HWIC:

#sh inventory
NAME: chassis, DESCR: 1841 chassis
PID: CISCO1841 , VID: V05 , SN: FHK1311200E
NAME: WIC/HWIC 0, DESCR: One-Port Fast Ethernet High Speed WAN Interface Card
PID: HWIC-1FE , VID: V01 , SN: FOC14346SST

"sh hardware" only shows the number of each interface type.

Cheers.
Re: [c-nsp] ME3600X Output Drops
Hi Ivan,

In fact the default queue limit on the 3800X/3600X is quite small. We also had issues with drops on all interfaces, even without congestion. After some research and an SR with Cisco, we started applying QoS on all interfaces:

policy-map INTERFACE-OUTPUT-POLICY
 class dummy
 class class-default
  shape average X
  queue-limit 2457 packets

The dummy class does nothing; it is just there because IOS wouldn't allow changing the queue limit otherwise. There were also issues with the policy counters, which should be resolved after 15.1(2)EY2. Cisco said they would increase the default queue sizes in the second half of 2012, so I suggest you try the latest IOS version and check again. 10G interfaces had no drops in our setup either.

Regards
George

On Thu, Aug 23, 2012 at 1:34 AM, Ivan cisco-...@itpro.co.nz wrote:
Replying to my own message:
* Adjusting the hold queue didn't help.
* Applying QoS per the referenced email stopped the drops immediately - I used something like the below:
policy-map leaf
 class class-default
  queue-limit 491520 bytes
policy-map logical
 class class-default
  service-policy leaf
policy-map root
 class class-default
  service-policy logical
* I would be interested to hear if others have ended up applying a similar policy to all interfaces. Any gotchas? I expect any 10Gbps interfaces would be okay without the QoS - I haven't seen any issue on these myself.
* Apart from this list I have found very little information around this whole issue. Any pointers to other documentation would be appreciated.
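A back-of-envelope check on the queue-limit of 2457 packets used above (a sketch under stated assumptions: the 1500-byte full-MTU packet size and the 44 MB shared-buffer figure come from this thread, and the ME3600X actually accounts buffer in its own internal units, so real consumption will differ):

```python
QUEUE_LIMIT_PKTS = 2457
MTU_BYTES = 1500                        # assumed worst-case packet size
SHARED_BUFFER_BYTES = 44 * 1024 * 1024  # 44 MB figure quoted in the thread

# Worst case: every queued packet on a port is full-MTU.
per_port_bytes = QUEUE_LIMIT_PKTS * MTU_BYTES
print(per_port_bytes)                          # 3685500, ~3.5 MB per port

# How many ports could sit at that limit simultaneously before the
# shared buffer is exhausted:
print(SHARED_BUFFER_BYTES // per_port_bytes)   # 12
```

This is why blindly applying a large queue-limit to every port deserves some thought: a dozen congested gigabit ports at this limit would, in the worst case, claim the entire shared buffer.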