Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-20 Thread Brad Henshaw
Marian Ďurkovič wrote: 

 Flow control doesn't need to be QoS-aware in this scenario.
 On the switch side, it's enough if it supports plain RX flow control,
 i.e. flowcontrol receive [desired|on]. Then the wireless link can
 send PAUSE frames to automatically slow the switch port down to the
 real bandwidth, and the output buffering / QoS configuration on the
 switch port is applied as expected.

That assumes the switch honours QoS and continues to prioritise packets 
appropriately when it receives a PAUSE from the radio gear - more often than 
not this is not the case, and all traffic in the egress buffers is affected 
equally.

I recall reading (maybe on this list) that the Nexus 5k or 7k supports 
QoS-aware flow control - but I wouldn't bet on it being included in low-end 
switches any time soon. (not that I'd complain if it were)

Regards,
Brad


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-18 Thread Marian Ďurkovič
On Fri, 18 Dec 2009 10:16:08 +1000, Brad Henshaw wrote
 Flow control comes with its own set of challenges, however, such as varied 
 support across vendors and models, and the fact that it's almost never 
 QoS-aware in the kind of edge switches you're using.

Flow control doesn't need to be QoS-aware in this scenario.

On the switch side, it's enough if it supports plain RX flow control,
i.e. flowcontrol receive [desired|on]. Then the wireless link can
send PAUSE frames to automatically slow the switch port down to the
real bandwidth, and the output buffering / QoS configuration on the
switch port is applied as expected.
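
A minimal switch-side sketch of the above, assuming a Catalyst access port
facing the radio (the interface name is illustrative; flowcontrol receive
support and defaults vary by model and IOS release):

interface FastEthernet0/1
 ! accept PAUSE frames from the radio so it can throttle the port to the real rate
 flowcontrol receive on
 ! the existing egress QoS still decides what gets sent within that reduced rate
 mls qos trust dscp
 priority-queue out

Whether receive flow control actually comes up can be checked afterwards with
"show flowcontrol interface FastEthernet0/1".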


 With kind regards,

 M.


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-17 Thread Lobo
Wow, thanks Daniel, that did the trick on the 3750 platform!  Here's a 
sample config in case anyone ever needs it:


interface FastEthernet1/0/23
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 666-670
 switchport mode trunk
 load-interval 30
 srr-queue bandwidth share 1 25 35 40
 srr-queue bandwidth shape  10  0  0  0
 srr-queue bandwidth limit 80
 priority-queue out
 mls qos trust dscp
 spanning-tree portfast
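
The effect of the limit and the per-queue drop counters can be verified with
the 3750's show commands (the interface number just matches the sample above):

show mls qos interface FastEthernet1/0/23 queueing
show mls qos interface FastEthernet1/0/23 statistics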

Jose


On 12/16/2009 10:04 AM, Bielawa, Daniel W. (NS) wrote:

Hello,
We had the same issue on a couple of links. We solved it with the 
following command. The number on the end is a percentage of link speed, in 1 
percent increments. This was done on a 3750G running 12.2(44)SE6; this command 
may or may not work on other platforms.

  srr-queue bandwidth limit (10-90)

Thank You

Daniel Bielawa
Network Engineer
Liberty University Network Services
Email: dwbiel...@liberty.edu
Phone: 434-592-7987


-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Lobo
Sent: Wednesday, December 16, 2009 8:45 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

We're doing some Catalyst testing to roll out QoS on our Ethernet
network and have come up against a hurdle.  On most of our backbone
links in a MAN, the actual bandwidth between one C/O and another is
not always 100Mbps.  There are times when the link is only capable of
hitting, say, 80Mbps (we're a wireless ISP) or less.

Since we have to use a FE port for this type of connection, do the
switches believe that they have 100Mbps of bandwidth to play with when
putting packets into the appropriate queues?

I'm a bit confused as to how the switches work in this fashion.  If I
were using CAT5 cables or fiber this would be simple to understand as
the bandwidth would be fixed.  :)

This is an example of a configuration on a 3550-24 that I'm using:


interface FastEthernet0/x
mls qos trust dscp
wrr-queue bandwidth 40 35 25 1
wrr-queue cos-map 1 0 1
wrr-queue cos-map 2 2
wrr-queue cos-map 3 3 4 6 7
wrr-queue cos-map 4 5
priority-queue out
!

The switches that we use are 2950, 3550, 3750 and 6524s.

With MQC and layer 3 QoS, I would know how to fix this by simply using
the bandwidth command on the physical interface and basing my output
policy-map to use bandwidth percent for each class.  Layer 2 QoS
doesn't seem to work this way though.
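
For comparison, a rough sketch of that router-style MQC approach (the class
names, DSCP values and the 80Mbps figure are purely illustrative, and this
style of output policy isn't available on the 2950/3550/3750 ports in
question):

class-map match-any VOICE
 match ip dscp ef
class-map match-any CRITICAL
 match ip dscp af31
!
policy-map MAN-EDGE-OUT
 class VOICE
  priority percent 20
 class CRITICAL
  bandwidth percent 35
 class class-default
  fair-queue
!
interface FastEthernet0/1
 ! bandwidth is informational; the percentages above are taken from this value
 bandwidth 80000
 service-policy output MAN-EDGE-OUT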

Any help would be appreciated.

Thanks.

Jose


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-17 Thread Lobo
Hi Peter.  The reason why the radio only works at less than 100M is 
because that's all the bandwidth it has.  This is licensed-band wireless 
technology for point-to-point shots between buildings.  Bandwidths can be 
anywhere from 18M to 400M depending on which frequency and radio brand you 
use.  For the 400M radios we use Gig interfaces, so we would need to use the 
bandwidth limit command to make sure the port only operates at 40% rather 
than 100%.
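
On a 3750 Gig port that would look much like the FE sample elsewhere in the
thread, just with a different limit (the interface number is illustrative;
srr-queue bandwidth limit accepts values from 10 to 90):

interface GigabitEthernet1/0/1
 mls qos trust dscp
 priority-queue out
 ! cap the port at roughly 40% of line rate for a ~400M radio
 srr-queue bandwidth limit 40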


So for the 2950s and 3550s it looks like we may not have much wiggle 
room.  My recommendation might be to upgrade those all to 3750s.  :)


Thanks.

Jose


On 12/16/2009 2:55 PM, Peter Rathlev wrote:

On Wed, 2009-12-16 at 08:45 -0500, Lobo wrote:
[...]
   

There are times when the link is only capable of hitting say 80Mbps
(we're a wireless isp) or less.

Since we have to use a FE port for this type of connection, do the
switches believe that they have 100Mbps of bandwidth to play with when
putting packets into the appropriate queues?
 

The interface will take packets from the output queue and send them as
fast as it can, so as long as there are packets to be sent they will be
sent at 100 Mbps.

   

I'm a bit confused as to how the switches work in this fashion.  If I
were using CAT5 cables or fiber this would be simple to understand as
the bandwidth would be fixed.  :)
 

The interesting things happen in the box that converts from 100 Mbps to
something less, i.e. the wireless bridge. Why is it sometimes less than
100 Mbps? Is it simply loss because of varying signal quality? Does the
wireless bridge compensate for this loss by retransmitting at layer 1,
meaning a little RTT variance and some lost bandwidth? Or does it just
drop and let the higher-layer protocols handle it? (In short: how do you
measure it? TCP throughput is not a reliable measurement.)

About the switch: The WRR you configure (on a 3550) is Weighted Round
Robin; it doesn't define anything relating to how much bandwidth there
actually is, it just defines how many packets from each queue to serve
to the interface tx ring in each turn.

The important bit though is IMHO that you use the priority queueing.
This means that queue 4 (CoS 5) will _always_ be sent first. This should
minimise loss when traffic crosses the wireless bridge.

   

The switches that we use are 2950, 3550, 3750 and 6524s.

With MQC and layer 3 QoS, I would know how to fix this by simply using
the bandwidth command on the physical interface and basing my output
policy-map to use bandwidth percent for each class.  Layer 2 QoS
doesn't seem to work this way though.
 

On the 3750 you can use what Daniel mentioned: srr-queue bandwidth
limit. AFAIK this just uses time division on the interface and throws
away unused timeslots. Bear in mind that if the wireless bridge has a
very shallow queue this might not work very well.

This command isn't available on the 2950 or 3550. And even though a few
(10GE) ports on the 6500/7600 platform support SRR, you can't cap the
interface rate like this.

   



Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-17 Thread Marian Ďurkovič
On Thu, Dec 17, 2009 at 08:40:10AM -0500, Lobo wrote:
 Hi Peter.  The reason why the radio only works at less than 100M is 
 because that's all the bandwidth it has.  This is licensed-band wireless 
 technology for point-to-point shots between buildings.  Bandwidths can be 
 anywhere from 18M to 400M depending on which frequency and radio brand you 
 use.  For the 400M radios we use Gig interfaces, so we would need to use the 
 bandwidth limit command to make sure the port only operates at 40% rather 
 than 100%.
 
 So for the 2950s and 3550s it looks like we may not have much wiggle 
 room.  My recommendation might be to upgrade those all to 3750s.  :)

In fact, a properly implemented sub-rate service should use Ethernet
flow control to signal the real available bandwidth to the switch.
With flow control working, no such tweaks are necessary.

   With kind regards,

   M.


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-17 Thread Brad Henshaw
Lobo wrote: 

 Wow, thanks Daniel, that did the trick on the 3750 platform!  Here's a
 sample config in case anyone ever needs it:
 interface FastEthernet1/0/23
  srr-queue bandwidth limit 80

Glad to hear that worked. Be aware that bandwidth limiting on these
platforms is not an exact science - you'll always need to test to ensure
you're getting what you expect, and sometimes you may need to tweak the
SRR queue buffer allocations and thresholds, especially if traffic is
bursty.
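
On the 3750 family that tuning lives in the output queue-sets; a hedged
sketch, with the buffer split and threshold values shown purely as example
numbers to adjust during testing:

mls qos queue-set output 2 buffers 15 30 30 25
! queue-set 2, queue 2: drop thresholds 1 and 2, reserved and maximum (percent)
mls qos queue-set output 2 threshold 2 200 200 50 400
!
interface FastEthernet1/0/23
 queue-set 2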

The same goes for implementing flow control (with regard to the testing
requirement, not the buffering).

Flow control comes with its own set of challenges however such as varied
support across vendors and models and the fact that it's almost never
QoS-aware in the kind of edge switches you're using.

Regards,
Brad


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-16 Thread Bielawa, Daniel W. (NS)
Hello,
We had the same issue on a couple of links. We solved it with the 
following command. The number on the end is a percentage of link speed, in 1 
percent increments. This was done on a 3750G running 12.2(44)SE6; this command 
may or may not work on other platforms.

 srr-queue bandwidth limit (10-90)

Thank You

Daniel Bielawa 
Network Engineer
Liberty University Network Services
Email: dwbiel...@liberty.edu
Phone: 434-592-7987


-Original Message-
From: cisco-nsp-boun...@puck.nether.net 
[mailto:cisco-nsp-boun...@puck.nether.net] On Behalf Of Lobo
Sent: Wednesday, December 16, 2009 8:45 AM
To: cisco-nsp@puck.nether.net
Subject: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

We're doing some Catalyst testing to roll out QoS on our Ethernet 
network and have come up against a hurdle.  On most of our backbone 
links in a MAN, the actual bandwidth between one C/O and another is 
not always 100Mbps.  There are times when the link is only capable of 
hitting, say, 80Mbps (we're a wireless ISP) or less.

Since we have to use a FE port for this type of connection, do the 
switches believe that they have 100Mbps of bandwidth to play with when 
putting packets into the appropriate queues?

I'm a bit confused as to how the switches work in this fashion.  If I 
were using CAT5 cables or fiber this would be simple to understand as 
the bandwidth would be fixed.  :)

This is an example of a configuration on a 3550-24 that I'm using:


interface FastEthernet0/x
mls qos trust dscp
wrr-queue bandwidth 40 35 25 1
wrr-queue cos-map 1 0 1
wrr-queue cos-map 2 2
wrr-queue cos-map 3 3 4 6 7
wrr-queue cos-map 4 5
priority-queue out
!

The switches that we use are 2950, 3550, 3750 and 6524s.

With MQC and layer 3 QoS, I would know how to fix this by simply using 
the bandwidth command on the physical interface and basing my output 
policy-map to use bandwidth percent for each class.  Layer 2 QoS 
doesn't seem to work this way though.

Any help would be appreciated.

Thanks.

Jose


Re: [c-nsp] Egress QoS on FE links with less than 100Mbps speeds

2009-12-16 Thread Peter Rathlev
On Wed, 2009-12-16 at 08:45 -0500, Lobo wrote:
[...]
 There are times when the link is only capable of hitting say 80Mbps
 (we're a wireless isp) or less.
 
 Since we have to use a FE port for this type of connection, do the 
 switches believe that they have 100Mbps of bandwidth to play with when
 putting packets into the appropriate queues?

The interface will take packets from the output queue and send them as
fast as it can, so as long as there are packets to be sent they will be
sent at 100 Mbps.

 I'm a bit confused as to how the switches work in this fashion.  If I 
 were using CAT5 cables or fiber this would be simple to understand as 
 the bandwidth would be fixed.  :)

The interesting things happen in the box that converts from 100 Mbps to
something less, i.e. the wireless bridge. Why is it sometimes less than
100 Mbps? Is it simply loss because of varying signal quality? Does the
wireless bridge compensate for this loss by retransmitting at layer 1,
meaning a little RTT variance and some lost bandwidth? Or does it just
drop and let the higher-layer protocols handle it? (In short: how do you
measure it? TCP throughput is not a reliable measurement.)

About the switch: The WRR you configure (on a 3550) is Weighted Round
Robin; it doesn't define anything relating to how much bandwidth there
actually is, it just defines how many packets from each queue to serve
to the interface tx ring in each turn.

The important bit though is IMHO that you use the priority queueing.
This means that queue 4 (CoS 5) will _always_ be sent first. This should
minimise loss when traffic crosses the wireless bridge.

 The switches that we use are 2950, 3550, 3750 and 6524s.
 
 With MQC and layer 3 QoS, I would know how to fix this by simply using 
 the bandwidth command on the physical interface and basing my output 
 policy-map to use bandwidth percent for each class.  Layer 2 QoS 
 doesn't seem to work this way though.

On the 3750 you can use what Daniel mentioned: srr-queue bandwidth
limit. AFAIK this just uses time division on the interface and throws
away unused timeslots. Bear in mind that if the wireless bridge has a
very shallow queue this might not work very well.

This command isn't available on the 2950 or 3550. And even though a few
(10GE) ports on the 6500/7600 platform support SRR, you can't cap the
interface rate like this.

-- 
Peter


___
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/