Excerpt of the doc update in the config guide <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/configuration/guide/swqos.html#wp1497063> is as follows:
WTD is configured by using the queue-limit policy-map class command. The command adjusts the queue size (buffer size) associated with a particular class of traffic. You specify the threshold either as a number of packets, where each packet is a fixed unit of 256 bytes, or as a percentage value. You can specify different queue sizes, in absolute (number of packets) or percentage terms, for different classes of traffic (CoS, DSCP, precedence, or QoS group) in the same queue. Setting a queue limit establishes a drop threshold for the associated traffic when congestion occurs. You cannot configure the queue size in absolute and percentage terms in the same policy. When you configure a queue limit, the range for the number of packets is 16 to 4272, in multiples of 16, where each packet is a fixed unit of 256 bytes.

Excerpt of the doc update in the command reference <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/command/reference/cli1.html#wp5095786> document is as follows:

queue-limit

Use the queue-limit policy-map class configuration command to set the queue maximum threshold for weighted tail drop (WTD) in an output policy map. Use the no form of this command to return to the default.

queue-limit [cos value | dot1ad dei value | dscp value | precedence value | qos-group value] {number-of-packets [packets] | percent value}

no queue-limit [cos value | dot1ad dei value | dscp value | precedence value | qos-group value] {number-of-packets [packets] | percent value}

Syntax Description

cos value - (Optional) Set the parameters for each class of service (CoS) value. The range is from 0 to 7.

dot1ad dei value - (Optional) Set the parameters for each drop eligibility indicator (DEI) value. The range is from 0 to 1.

dscp value - (Optional) Set the parameters for each Differentiated Services Code Point (DSCP) value. The range is from 0 to 63.

precedence value - (Optional) Set the parameters for each IP precedence value.
The range is from 0 to 7.

qos-group value - (Optional) Set the parameters for each quality-of-service (QoS) group value. The range is from 0 to 99.

number-of-packets [packets] - Set the maximum threshold for WTD as the number of packets in the queue. The range is from 16 to 4272 and refers to 256-byte packets. The default is 160 packets. The packets keyword is optional. Note: For optimal network performance, we strongly recommend that you configure the maximum queue limit to 272 or less.

percent value - (Optional) Set the maximum threshold for WTD as a percentage of the total number of packets (buffers) in the common pool. The range is from 1 to 100.

Defaults

The default queue limit is 160 (256-byte) packets.

Command Modes

Policy-map class configuration

Command History

12.2(44)EY - This command was introduced.
12.2(55)SE - The dot1ad dei keywords were added.
12.2(58)EX - The percent keyword was added.

Usage Guidelines

...(content)

You cannot configure the queue limit in absolute (number of packets) and percentage terms in the same policy.

When you use the queue-limit command to configure thresholds within a class map, the WTD thresholds must be less than or equal to the maximum threshold of the queue. This means that the queue size configured without a qualifier must be larger than any of the queue sizes configured with a qualifier.

When you use the percent keyword to configure the queue limit, note that the threshold values for WTD qualifiers are calculated from the number of packets (buffers) available to each policy or class (160 packets by default, if you do not configure a queue limit). The threshold values are not a percentage of the total number of packets in the common pool on the switch.

Examples

This example shows how to configure WTD as a percentage of packets in the queue, where freeclass1, freeclass2, and freeclass3 each get a minimum of 20 percent of the traffic bandwidth and class-default gets the remaining 40 percent.
In the example:

Part A shows how you can set a percentage queue limit for each class of traffic.
Part B shows how you can set a percentage queue limit for the threshold.
Part C shows how you can configure both in the same policy.

Part A:

Switch(config)#policy-map free-class
Switch(config-pmap)#class freeclass1
Switch(config-pmap-c)#bandwidth percent 20
Switch(config-pmap-c)#queue-limit cos 1 percent 60
Switch(config-pmap-c)#exit

Part B:

Switch(config-pmap)#class freeclass2
Switch(config-pmap-c)#bandwidth percent 20
Switch(config-pmap-c)#queue-limit percent 40

Part C:

Switch(config-pmap)#class freeclass3
Switch(config-pmap-c)#bandwidth percent 20
Switch(config-pmap-c)#queue-limit percent 40
Switch(config-pmap-c)#queue-limit cos 4 percent 10
Switch(config-pmap-c)#exit
Switch(config-pmap)#exit

Related Commands

class <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/command/reference/cli1.html#wp1860245> - Defines the traffic classification match criteria for the specified class-map name.

policy-map <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/command/reference/cli1.html#wp1864146> - Creates or modifies a policy map that can be attached to multiple ports to specify a service policy.

show policy-map <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/command/reference/cli2.html#wpxref83492> - Displays QoS policy maps.

-Waris

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Pavel Skovajsa
Sent: Thursday, April 19, 2012 2:13 PM
To: [email protected]
Subject: Re: [c-nsp] New Cisco ME3400 IOS?

The new 12.2(58)EX is out there, can somebody please share experience with it?
Also, it would be great if someone could shed some light on what is actually considered 'Enhanced QoS buffer management', since from the release notes <http://www.cisco.com/en/US/docs/switches/metro/me3400e/software/release/12.2_58_ex/release/notes/ol24334.html> it seems like the queue size has magically gone up:

////////
Option to configure the queue size threshold in percentage terms. You can now specify different queue sizes in absolute (number of packets) or percentage terms for different classes of traffic in the same queue. The upper limit of the number of packets you can specify when configuring a queue limit is increased from 544 to 4272.
/////////

Is there a DOC describing how these queue size thresholds actually work on ME3400?

-pavel

On Fri, Mar 23, 2012 at 11:36 AM, Aled Morris <[email protected]> wrote:
> On 23 March 2012 07:59, Tassos Chatzithomaoglou <[email protected]> wrote:
>
> > Can you please provide more details about "Enhanced QoS buffer
> > management"?
> >
>
> Sometimes this is marketing speak for "now works (more) like the
> documentation claims it always did", i.e. fixed without admitting that
> the code was broken before.
>
> Aled
> _______________________________________________
> cisco-nsp mailing list  [email protected]
> https://puck.nether.net/mailman/listinfo/cisco-nsp
> archive at http://puck.nether.net/pipermail/cisco-nsp/
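For anyone trying to map the doc excerpt above onto actual buffer numbers, here is how I read the arithmetic, as a quick sketch. The helper names are mine, not Cisco's; the 160-packet default, the 16-to-4272 range in multiples of 16, and the 256-byte packet unit come from the command reference quoted earlier; the floor rounding on the percent calculation is my assumption.

```python
# Sketch of the ME3400 queue-limit arithmetic as I read the 12.2(58)EX doc
# excerpt. Constants are from the command reference; function names and the
# floor rounding are my own assumptions.

PACKET_UNIT_BYTES = 256       # each "packet" is a fixed 256-byte unit
DEFAULT_QUEUE_PACKETS = 160   # default per-class queue limit
MIN_PACKETS, MAX_PACKETS, STEP = 16, 4272, 16

def class_queue_packets(number_of_packets=None):
    """Queue size for a class: the configured absolute value, else the default."""
    if number_of_packets is None:
        return DEFAULT_QUEUE_PACKETS
    if not (MIN_PACKETS <= number_of_packets <= MAX_PACKETS) or number_of_packets % STEP:
        raise ValueError("queue-limit must be 16..4272 in multiples of 16")
    return number_of_packets

def wtd_threshold_packets(percent, queue_packets=DEFAULT_QUEUE_PACKETS):
    """Per the usage guidelines, a qualifier's percent threshold is taken
    against the class's own queue size, not the switch-wide common pool."""
    return (queue_packets * percent) // 100

# E.g. 'queue-limit cos 4 percent 10' on a class left at the default queue:
t = wtd_threshold_packets(10, class_queue_packets())
print(t, t * PACKET_UNIT_BYTES)  # 16 packets, i.e. 4096 bytes
```

So on a default queue, a 10 percent qualifier works out to 16 of the 256-byte packet units, which matches the 16-packet granularity of the absolute form.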
