Wow, thanks Daniel, that did the trick on the 3750 platform! Here's a sample config in case anyone ever needs it:

interface FastEthernet1/0/23
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 666-670
 switchport mode trunk
 load-interval 30
 srr-queue bandwidth share 1 25 35 40
 srr-queue bandwidth shape  10  0  0  0
 srr-queue bandwidth limit 80
 priority-queue out
 mls qos trust dscp
 spanning-tree portfast

Jose


On 12/16/2009 10:04 AM, Bielawa, Daniel W. (NS) wrote:
Hello,
        We had the same issue on a couple of links. We solved it with the
following command. The number at the end is a percentage of link speed, in
1-percent increments. This was done on a 3750G running 12.2(44)SE6; the command
may or may not work on other platforms.

  srr-queue bandwidth limit (10-90)

Thank You

Daniel Bielawa
Network Engineer
Liberty University Network Services
Email: dwbiel...@liberty.edu
Phone: 434-592-7987


-----Original Message-----
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Lobo
Sent: Wednesday, December 16, 2009 8:45 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

We're doing some Catalyst testing to roll out QoS on our Ethernet
network and have come up against a hurdle.  On most of our backbone
links in a MAN, the actual bandwidth between one C/O and another is
not always 100 Mbps.  There are times when the link is only capable of
hitting, say, 80 Mbps or less (we're a wireless ISP).

Since we have to use an FE port for this type of connection, do the
switches believe that they have 100 Mbps of bandwidth to play with when
putting packets into the appropriate queues?

I'm a bit confused as to how the switches behave here.  If I were
using dedicated Cat5 cable or fiber runs this would be simple to
understand, as the bandwidth would be fixed.  :)

This is an example of a configuration on a 3550-24 that I'm using:


interface FastEthernet0/x
 mls qos trust dscp
 wrr-queue bandwidth 40 35 25 1
 wrr-queue cos-map 1 0 1
 wrr-queue cos-map 2 2
 wrr-queue cos-map 3 3 4 6 7
 wrr-queue cos-map 4 5
 priority-queue out
!

The switches we use are 2950s, 3550s, 3750s and 6524s.

With MQC and "layer 3" QoS, I would know how to fix this: simply set
the "bandwidth" command on the physical interface and base the output
policy-map on "bandwidth percent" for each class.  Layer 2 QoS
doesn't seem to work this way, though.
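
For reference, this is roughly the MQC approach I have in mind (a
sketch only; the class and policy-map names are placeholders, and the
classes would need matching class-maps defined):

policy-map EGRESS-POLICY
 class VOICE
  priority percent 25
 class CRITICAL-DATA
  bandwidth percent 35
 class class-default
  bandwidth percent 40
!
interface FastEthernet0/x
 bandwidth 80000
 service-policy output EGRESS-POLICY
!

Since "bandwidth" is in kbps, 80000 tells the router the link is
really 80 Mbps, and the "bandwidth percent" values are then taken
against that figure rather than the physical 100 Mbps port speed.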

Any help would be appreciated.

Thanks.

Jose
_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
