Hi Phil

The rationale for setting the L2 MTU globally on this platform is tied to the 
overall QoS architecture and the Data Center Ethernet (DCB) setup required for FCoE.

Remember, when you use FCoE, you are using a dedicated CoS class (or queue) for 
the FCoE traffic. Since FCoE requires an MTU of roughly 2158 bytes (enough to 
carry a maximum-sized 2112-byte FC payload plus FC and FCoE encapsulation 
overhead) with no-drop behaviour, you need to enable this on the whole switch.

An example of a QoS policy on the switch could be:

queue 1 used for CoS 0 (best effort), MTU 1500 bytes and drop
queue 2 used for CoS 1, MTU 1500 bytes and drop
queue 3 used for CoS 2 (iSCSI), MTU 9000 bytes and drop
queue 4 used for CoS 3 (FCoE), MTU 2158 bytes and no-drop
...

As you can see, the queues use different MTU and drop-preference settings. 
Since every port on the switch shares the same set of 8 queues, the easiest 
way to implement this is with a single global MTU policy.
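
To make that concrete, here is a rough NX-OS sketch of such a system-wide 
policy on a 5k, in the same style as the config quoted below. The 
cmap-iscsi/nq-iscsi/pmap-* names and the "CoS 2 = iSCSI" mapping are just 
illustrative assumptions; class-fcoe and class-default are the system-defined 
classes, and I am using 9216 for jumbo as in Reuben's example:

class-map type qos match-any cmap-iscsi
  match cos 2

policy-map type qos pmap-classify
  class cmap-iscsi
    set qos-group 2

class-map type network-qos nq-iscsi
  match qos-group 2

policy-map type network-qos pmap-dc-ethernet
  class type network-qos nq-iscsi
    mtu 9216
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 1500

system qos
  service-policy type qos input pmap-classify
  service-policy type network-qos pmap-dc-ethernet

Note there is no explicit classifier for CoS 3 here: when FCoE is enabled, 
the default FCoE classification already maps it into class-fcoe.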

On the Nexus 7k, MTU is set per port on the M-series linecards, but globally 
on the F-series linecards, just as on the 5k series.
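
For comparison, the per-port equivalent on an M-series port (and its SVI) is 
just the familiar interface-level command; Ethernet1/1 and Vlan100 are of 
course placeholder interfaces:

interface Ethernet1/1
  mtu 9216

interface Vlan100
  mtu 9216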

There are probably several other rationales behind the design, but this 
should give you the idea.

Lars Christensen
CCIE #20292



On 20/09/2012 at 11:17, Phil Mayers <p.may...@imperial.ac.uk> wrote:

> On 09/20/2012 07:20 AM, Reuben Farrelly wrote:
> 
>> policy-map type network-qos enable-jumbo-frames
>>   class type network-qos class-default
>>     mtu 9216
>> 
>> system qos
>>   service-policy type network-qos enable-jumbo-frames
>> 
> 
> Interesting.
> 
> Does anyone know the rationale behind this way of setting the MTU on this 
> platform? It seems a bit peculiar, and as you note, a too-high L2 MTU is 
> seldom harmful, so you would think per-interface (or even per-device) would 
> suffice.
> 
> FWIW N7k seems to work "normally" i.e. set physical interface & SVI MTU in 
> the interface config.

