Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Phil Mayers

On 12/20/2011 09:42 PM, Robert Hass wrote:

Hi
In the 12.2SXJ release Cisco implemented a very interesting feature called
multichassis LACP (mLACP). The documentation says it's designed for
server deployment. I'm using a topology where distribution is done at


My understanding was that they meant directly attached servers i.e. 
plugged into the 6500 directly, not via intermediate switches.


Do they even support mLACP in trunk mode?


two 6500/Sup720s, and from each 6500 there is a 1G link to each access switch
(2960). Redundancy and a loop-free topology are provided by MSTP. I'm looking
for comments on how well or badly mLACP works, as we would like to migrate
from MSTP to mLACP for the access switches (currently 60 access switches).


It comes with an alarming list of caveats, like you cannot define 100 
vlans on a switch where you also enable mLACP.


It's also not very interesting (to me) because it's active-standby, i.e. 
not both links forwarding. I can do that *now* with most NIC teaming 
drivers, or the Linux bonding driver, and without any upstream config.


Personally, I don't see the point. Wondering if anyone can suggest a 
reason for the feature.

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Mark Tinka
On Wednesday, December 21, 2011 05:17:42 PM Phil Mayers wrote:

 It's also not very interesting (to me) because it's
 active-standby i.e. not both links forwarding. I can do
 that *now* with most NIC teaming drivers, or the linux
 bond, and without upstream config.
 
 Personally, I don't see the point. Wondering if anyone
 can suggest a reason for the feature.

We have tried it because we don't have the VSS supervisor 
modules, but we always assumed it was active/active, 
otherwise what's the point?

We're certainly looking forward to it on the ME3600X, and 
hope it won't be as broken there as you suggest it is on the 
6500.

Juniper's initial implementation of this on the MX only 
supported Layer 2 instances. However, they now do support 
running IP atop MC-LAG's, although they implement it using 
VRRP.

Mark.


signature.asc
Description: This is a digitally signed message part.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

Re: [c-nsp] Backup route - EIGRP

2011-12-21 Thread Andrew Miehs
On 21.12.2011, at 07:36, Ambedkar p.ambed...@gmail.com wrote:
 I have two leased lines, 2 Mbps and 4 Mbps. I want to configure each as a
 backup route for the other, with failover within 3 seconds if one fails.

 Please give suggestions.

Use EIGRP and set the bandwidth on the interfaces; EIGRP will prefer the
higher-bandwidth link and keep the other as a backup.
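
A minimal sketch of what I mean (AS number, interface names and addressing are
placeholders; the 1-second hello / 3-second hold timers are what get failover
inside your 3-second target):

router eigrp 100
 network 10.0.0.0 0.255.255.255
!
interface Serial0/0
 description 2 Mbps leased line
 bandwidth 2000
 ip hello-interval eigrp 100 1
 ip hold-time eigrp 100 3
!
interface Serial0/1
 description 4 Mbps leased line
 bandwidth 4000
 ip hello-interval eigrp 100 1
 ip hold-time eigrp 100 3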
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Mark Berly
On Wed, Dec 21, 2011 at 4:27 AM, Mark Tinka mti...@globaltransit.net wrote:

 On Wednesday, December 21, 2011 05:17:42 PM Phil Mayers
 wrote:

  It's also not very interesting (to me) because it's
  active-standby i.e. not both links forwarding. I can do
  that *now* with most NIC teaming drivers, or the linux
  bond, and without upstream config.
 
  Personally, I don't see the point. Wondering if anyone
  can suggest a reason for the feature.

 We have tried it because we don't have the VSS supervisor
 modules, but we always assumed it was active/active,
 otherwise what's the point?

 We're certainly looking forward to it on the ME3600X, and
 hope it won't be as broken there as you suggest it is on the
 6500.


Arista Networks has a similar feature called MLAG, which allows
active-active paths with no vlan restrictions and active-active gateway
functionality. It is worth looking into if you are looking to implement
this type of feature.



 Juniper's initial implementation of this on the MX only
 supported Layer 2 instances. However, they now do support
 running IP atop MC-LAG's, although they implement it using
 VRRP.

 Mark.

 ___
 cisco-nsp mailing list  cisco-nsp@puck.nether.net
 https://puck.nether.net/mailman/listinfo/cisco-nsp
 archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Phil Mayers

On 21/12/11 11:02, Mark Berly wrote:


Arista Networks has a similar feature called MLAG, which allows
active-active paths with no vlan restrictions and active-active gateway
functionality. It is worth looking into if you are looking to implement
this type of feature.


Plenty of kit does this these days: Extreme X-series, Cisco Nexus, etc.

It's not new/special any more.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Jeroen van Ingen
On Wed, 2011-12-21 at 11:41 +, Phil Mayers wrote:
 On 21/12/11 11:02, Mark Berly wrote:
 
  Arista Networks has a similar feature called MLAG, which allows
  active-active paths with no vlan restrictions and active-active gateway
  functionality. It is worth looking into if you are looking to implement
  this type of feature.
 
 Plenty of kit does this these days: Extreme X-series, Cisco Nexus, etc.
 It's not new/special any more.

Well, sometimes it *is* new/special, in the sense that switch-to-switch
multichassis LAG can be a challenge to implement correctly alongside
other features. Things like STP, IGMP snooping and multicast forwarding, and
DHCP snooping come to mind.

So even though plenty of kit does it, I believe plenty of kit has its
caveats when using multichassis LAG / LACP.


Regards,

Jeroen van Ingen
ICT Service Centre
University of Twente, P.O.Box 217, 7500 AE Enschede, The Netherlands


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Phil Mayers

On 21/12/11 16:28, Jeroen van Ingen wrote:

On Wed, 2011-12-21 at 11:41 +, Phil Mayers wrote:

On 21/12/11 11:02, Mark Berly wrote:


Arista Networks has a similar feature called MLAG, which allows
active-active paths with no vlan restrictions and active-active gateway
functionality. It is worth looking into if you are looking to implement
this type of feature.


Plenty of kit does this these days: Extreme X-series, Cisco Nexus, etc.
It's not new/special any more.


Well, sometimes it *is* new/special, in the sense that switch-to-switch
multichassis LAG can be a challenge to implement correctly alongside
other features. Things like STP, IGMP snooping and multicast forwarding, and
DHCP snooping come to mind.

So even though plenty of kit does it, I believe plenty of kit has its
caveats when using multichassis LAG / LACP.


An excellent point ;o)
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] mLACP at 6500

2011-12-21 Thread Mark Tinka
On Thursday, December 22, 2011 12:28:17 AM Jeroen van Ingen wrote:

 So even though plenty of kit does it, I believe plenty of
 kit has its caveats when using multichassis LAG / LACP.

Indeed - in many cases, features normally supported on 
physical interfaces have to be rigged to work on logical 
interfaces such as LAG's.

So yes, I suppose it's not automatic that if a certain 
feature works on a LAG, it will work on an MC-LAG.

Cheers,

Mark.


signature.asc
Description: This is a digitally signed message part.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

[c-nsp] asr1000 lacp

2011-12-21 Thread marc williams

Can you have QinQ subinterfaces on an LACP bundle on the ASR1000 series?
(IOS XE 3S)
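
For the sake of discussion, this is the shape of configuration I'm asking
about; whether IOS XE 3S actually accepts it on a port-channel is exactly the
question. VLAN IDs and addressing below are made up:

interface GigabitEthernet0/0/0
 channel-group 1 mode active
!
interface GigabitEthernet0/0/1
 channel-group 1 mode active
!
interface Port-channel1
 no ip address
!
interface Port-channel1.100
 encapsulation dot1Q 100 second-dot1q 200
 ip address 192.0.2.1 255.255.255.252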

--
marc
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] shaping w/sub interfaces - drops

2011-12-21 Thread Dan Letkeman
Hello,

I'm wondering if it's possible to eliminate drops when using shaping. I
have a subinterface set up for guest access and I want to limit all
access to 3 Mbps and HTTP access to 2 Mbps. If I apply a policy to the
subinterface I continuously see drops on the http class when it runs
at around 2 Mbps. It's just web browsing, so I'd rather the packets
were delayed than dropped and retransmitted.

I have the following configured:

class-map match-all http
 match protocol http

policy-map guest-output
 class http
  shape peak 200 50 25
 class class-default
  shape average 300 256000

policy-map guest-input
 class guest-upload
  police 75 10 1000 conform-action transmit exceed-action drop violate-action drop

interface GigabitEthernet0/0.823
 encapsulation dot1Q 823
 ip address 10.7.184.1 255.255.255.0
 ip access-group wifiguest in
 ip helper-address 10.4.0.5
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 ip nbar protocol-discovery
 ip flow ingress
 ip flow egress
 ip virtual-reassembly
 ip policy route-map router-astarogw
 service-policy input guest-input
 service-policy output guest-output


I am also seeing drops on the physical interface G0/0. I tried to
apply a policy there and it says I cannot do any shaping when shaping is
already applied to a subinterface. Do I need to apply a policy to
the G0/0 interface first, and then apply a policy to shape certain
traffic on the subinterface?
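
To be concrete, something like this nested parent/child arrangement is what I
was imagining (the class names and rates below are only placeholders, and it
would replace the existing guest-output policy):

policy-map guest-child
 class http
  bandwidth 2000
 class class-default
  fair-queue
!
policy-map guest-parent
 class class-default
  shape average 3000000
  service-policy guest-child
!
interface GigabitEthernet0/0.823
 service-policy output guest-parent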

Any hints, ideas or configuration examples would be appreciated.

Thanks,
Dan.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] shaping w/sub interfaces - drops

2011-12-21 Thread Jay Hennigan
On 12/21/11 11:11 AM, Dan Letkeman wrote:
 Hello,
 
 I'm wondering if it's possible to eliminate drops when using shaping. I
 have a subinterface set up for guest access and I want to limit all
 access to 3 Mbps and HTTP access to 2 Mbps. If I apply a policy to the
 subinterface I continuously see drops on the http class when it runs
 at around 2 Mbps. It's just web browsing, so I'd rather the packets
 were delayed than dropped and retransmitted.

When you limit traffic by any means you may have the choice to either
delay the excess packets or drop them.  Delaying the packets means
storing them in a buffer until the traffic falls below the limit, then
forwarding them.

The buffers have a limited size.  If there is more traffic than the
buffers can hold, it will eventually be dropped. There is plenty of
discussion of this, and several examples, using leaky-bucket analogies.

So if there is more traffic than the configured shape rate (or more
traffic than the physical medium can handle) it will get dropped either
immediately or when the buffers fill up depending on configuration,
amount of memory, etc.

Upper-layer protocols such as TCP can mitigate this by slowing the input
rate when drops are detected.  But if there is more traffic coming in
than the buffers, shape limit, or outbound medium can handle, it must
get dropped.  There's nowhere else for it to go.
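
If you want to lean further toward delay instead of drop, you can enlarge the
queue on the shaped class. A rough sketch for an HQF-capable IOS (the rate and
queue depth here are arbitrary placeholders, not recommendations):

policy-map guest-output
 class http
  shape average 2000000
  queue-limit 512 packets

A deeper queue just trades drops for latency during bursts; once it fills,
packets get dropped anyway.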

--
Jay Hennigan - CCIE #7880 - Network Engineering - j...@impulse.net
Impulse Internet Service  -  http://www.impulse.net/
Your local telephone and internet company - 805 884-6323 - WB6RDV
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


[c-nsp] Vlans on an ASR9000

2011-12-21 Thread Brian Christopher Raaen
I am currently working on replacing two 7200's and a 3750 switch with
an ASR9000 router for a 10G upgrade. Currently the 3750 is only
performing layer 2 functions, and the two routers connect to it via
802.1q trunks.  The switch then hands off a 802.1q trunk to other
downstream switches.  I am unsure of how to configure the ports on the
ASR to allow me to put more than one port in a vlan.  I have tried
going through the different configuration guides and think that I would
either do something like the l2vpn setup or the IRB setup listed in
the Cisco ASR 9000 Series Aggregation Services Router Interface and
Hardware Component Configuration Guide.  If I were using a 6500 or
7600 I know how I would set up the ports; however, I am unsure how I
would do it on the ASR. I am wondering about the best way to set this
up, or whether I really shouldn't be using the 40-port linecard for
switching/layer 2 functions at all. Thanks for any input, and I'd really
appreciate any configuration snippets.
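
To make it concrete, the kind of thing I have pieced together from the guides
looks roughly like the below (interface names, VLAN ID and addressing are
invented); I'd appreciate confirmation that this is even the right direction:

interface TenGigE0/0/0/0.100 l2transport
 encapsulation dot1q 100
 rewrite ingress tag pop 1 symmetric
!
interface TenGigE0/0/0/1.100 l2transport
 encapsulation dot1q 100
 rewrite ingress tag pop 1 symmetric
!
interface BVI100
 ipv4 address 192.0.2.1 255.255.255.0
!
l2vpn
 bridge group DOWNSTREAM
  bridge-domain VLAN100
   interface TenGigE0/0/0/0.100
   interface TenGigE0/0/0/1.100
   routed interface BVI100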

---
Brian Raaen
Zcorum
Network Architect
bra...@zcorum.com
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco 2811 performance issue - dual(new) isp

2011-12-21 Thread Chuck Church
Hmmm. Well, there are a few variables. If one site does give you good
results, then the router might not be totally at fault. You are getting
'ignore' errors on the interface with CBAC enabled; that's definitely
slowing things down, as you're getting retransmits and the TCP window
starting small again. Just curious, what does 'sh buffer' output look like?

 

 

Thanks,

 

Chuck

 

From: Jmail Clist [mailto:jmlis...@gmail.com] 
Sent: Tuesday, December 20, 2011 11:43 PM
To: Chuck Church
Cc: cisco-nsp@puck.nether.net
Subject: Re: [c-nsp] Cisco 2811 performance issue - dual(new) isp

 

Chuck,

 

Interesting. Not sure why it was so low.  I switched back to this new ISP
conn on fa0/1 tonight and ran some more tests. Below is various output
immediately after testing. The second set of show outputs is after I applied
CBAC inbound and a generic deny extended access list outbound on fa0/1. The
CBAC is definitely raising my cpu, as expected.

Performance is still low in my opinion, at least with testing on most test
sites. The only site that gave me great results was speakeasy.net.

 

rtr2811#sh int switching | begin  FastEthernet0/1
FastEthernet0/1
  Throttle count 11
   Drops RP 11 SP  0
 SPD Flushes   Fast  0SSE  0
 SPD Aggress   Fast  0
SPD Priority Inputs   20030942  Drops  0

Protocol  IP
  Switching pathPkts In   Chars In   Pkts Out  Chars Out
 Process  66120   22268396  370794417563
Cache misses  0  -  -  -
Fast 410053  477119555 351638  183218275
   Auton/SSE  0  0  0  0

Protocol  DEC MOP
  Switching pathPkts In   Chars In   Pkts Out  Chars Out
 Process  0  0   8697 669669
Cache misses  0  -  -  -
Fast  0  0  0  0
   Auton/SSE  0  0  0  0

Protocol  ARP
  Switching pathPkts In   Chars In   Pkts Out  Chars Out

 

rtr2811#sh int fa0/1
FastEthernet0/1 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0015.f956.d549 (bia
0015.f956.d549)
  Internet address is 200.200.200.200/24
  MTU 1500 bytes, BW 10 Kbit/sec, DLY 100 usec,
 reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:02, output hang never
  Last clearing of show interface counters 00:01:49
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 731000 bits/sec, 108 packets/sec
  5 minute output rate 357000 bits/sec, 45 packets/sec
 17949 packets input, 14940931 bytes
 Received 5515 broadcasts, 0 runts, 0 giants, 0 throttles
 0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
 0 watchdog
 0 input packets with dribble condition detected
 11012 packets output, 9300361 bytes, 0 underruns
 0 output errors, 0 collisions, 0 interface resets

 

rtr2811#sh proc cpu sorted 1min
CPU utilization for five seconds: 9%/1%; one minute: 14%; five minutes: 13%
 PID Runtime(ms) Invoked  uSecs   5Sec   1Min   5Min TTY Process
  80    97347040   262361459        371  1.75%  1.77%  1.76%   0 IGMP Snooping Re
 11884936308   283025140300  1.67%  1.54%  1.52%   0 IP Input

  19 939143230838598304  0.31%  1.20%  1.28%   0 ARP Input

 182  392060  1300284614  0  1.03%  1.12%  1.12%   0 HQF Shaper
Backg
  921905298460835327313  0.39%  0.49%  0.50%   0 ILPM

   3 219764420210291108  0.23%  0.31%  0.31%   0 Skinny Msg
Serve
 314  169248   163513044  1  0.31%  0.30%  0.31%   0 PPP manager

 12514641486985  0.47%  0.16%  0.12% 514 SSH Process

   511185972  797585  14024  0.00%  0.15%  0.17%   0 Check heaps

 315   88332   163513044  0  0.15%  0.14%  0.15%   0 PPP Events

  91 6798308 5230434   1299  0.07%  0.12%  0.13%   0 tCOUNTER

 --More--


///
After CBAC applied outbound and extended deny-all access-list inbound

 

FastEthernet0/1 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0015.f956.d549 (bia
0015.f956.d549)
  Internet address is 200.200.200.200/24
  MTU 1500 bytes, BW 10 Kbit/sec, DLY 100 usec,
 reliability 255/255, txload 1/255, rxload 3/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP 

Re: [c-nsp] Switch support for IPv6 policing

2011-12-21 Thread Mack McBride
Use a mac access-list or class-default

mac access-list extended ALL
 permit any any
class-map match-all ANY-MAC
 match access-group name ALL
policy-map 10M
 class ANY-MAC
  police 1000 100 exceed-action drop

or

policy-map 10M
 class class-default
  police 1000 100 exceed-action drop
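
If it helps, you'd then attach it inbound on the customer-facing port (the
interface name here is just an example):

interface GigabitEthernet0/1
 service-policy input 10M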

LR Mack McBride
Network Architect

-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Vincent C Jones
Sent: Tuesday, December 20, 2011 6:28 PM
To: cisco-nsp
Subject: [c-nsp] Switch support for IPv6 policing

Arrgh. I'm currently filtering and policing user traffic on Cisco 2960 switches
and discovered the hard way that the ingress policy ONLY applies itself to IPv4
packets, and only IPv4 access-groups can be applied to an interface. What Cisco
switches do I have to upgrade to in order to filter and police ALL customer
traffic, and not just IPv4 traffic?

Vince

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net 
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/

___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/


Re: [c-nsp] Cisco 2811 performance issue - dual(new) isp

2011-12-21 Thread Vinny Abello
To add to Chuck's questions:

Can you post your FastEthernet0/1 configuration? What exactly is this interface 
plugged into?

What IOS version are you running?

I think you said this works fine with a computer connected directly to the 
provider, but just out of curiosity what other device is doing NAT in front of 
the router closer to the Internet? I'm assuming you're not really connected to 
Embratel in Brasil seeing as there is no prefix announced on the Internet that 
includes 200.200.200.0/24.

-Vinny

On 12/21/2011 4:04 PM, Chuck Church wrote:
 Hmmm.  Well, there are a few variables.  If one site does give you good
 results, then the router might not be totally at fault.  You are getting
 'ignore' errors on the interface with CBAC enabled, that's definitely
 slowing things down, as you're getting re-transmits and TCP window starting
 small again.   Just curious, what does 'sh buffer' output look like?

[c-nsp] 2960S drops/packet loss

2011-12-21 Thread John Elliot

Hi Guys, 

Have a pair of 2960s in a stack; one port (trunk) connects to another DC and we
are seeing ~5% packet loss and large output drops to this DC.


#sh interfaces gigabitEthernet 1/0/17 counters errors

Port        Align-Err    FCS-Err   Xmit-Err    Rcv-Err  UnderSize  OutDiscards
Gi1/0/17            0          0          0          0          0       182867

GigabitEthernet1/0/17 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is a0cf.5b87.ec11 (bia a0cf.5b87.ec11)
  Description: QinQ_to_DC2
  MTU 1998 bytes, BW 10 Kbit, DLY 100 usec,
     reliability 255/255, txload 41/255, rxload 23/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 6d13h, output 00:00:00, output hang never
  Last clearing of show interface counters 04:02:15
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 183592
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  30 second input rate 9047000 bits/sec, 2075 packets/sec
  30 second output rate 16324000 bits/sec, 2309 packets/sec


As you can see, the 30-second rate isn't excessive, but as the drops are
OutDiscards it would appear we are getting hit by the small-buffers/microburst
issue.

Done a bit of research, and as we have mls qos configured (we need it, as we
have to trust DSCP markings), we need to look at tweaking the buffer
allocations on the switch to hopefully mitigate these drops.

There appears to be a range of recommendations when it comes to these tweaks.
Hoping someone has suggestions on what to set with mls qos queue-set output to
alleviate the drops (start conservative, then apply more aggressive settings
if needed). Also, does adjusting the buffers require an outage window?
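
As a data point for discussion, the sort of starting values I had in mind
(moving the port onto its own queue-set so other ports are unaffected, and
biasing buffers and thresholds toward the queue carrying the bulk traffic)
would be roughly the below. The queue number and figures are a guess on my
part and depend on our dscp-to-output-queue mapping:

mls qos queue-set output 2 buffers 15 40 25 20
mls qos queue-set output 2 threshold 2 3200 3200 100 3200
!
interface GigabitEthernet1/0/17
 queue-set 2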

Our traffic is primarily backup (replication, which is very bursty) and Internet.

Thanks in advance.
___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/