Hi guys,

Please find article 1667 attached.

 

Lev

 

 

From: Darren Coleman [mailto:[email protected]] 
Sent: Saturday, 09 February, 2013 6:28
To: Enterasys Customer Mailing List
Cc: Enterasys Customer Mailing List
Subject: Re: [enterasys] tg in/out discards

 


Marki,


Unfortunately I don't have an archive. The article I received was directly from Enterasys.


Cheers

Darren





Strebler, Reinhard (SCC) <reinhard.strebler <at> kit.edu> writes:

 

I checked it, too - obviously these articles are no longer online.

Reinhard

 

Well well, article 1667 is not online anymore either, but Darren Coleman seems to be in possession of an archive, so maybe he can help ;-)

 

 

Title: Elaboration of DFE Performance Expectations
Article ID: 1667
Technology: switching
Previous ID: ent21567

Products
DFE

Cause
The DFE's performance expectations are generally stated as:

  • "packet forwarding rate" = 13.5 million packets per second, per module
  • "switching capacity" = 18 gigabits per second, per module
  • "backplane capacity" = 20 gigabits per second, per link

This article elaborates upon the standard information set, to explain what is meant by these statements and to suggest ways in which port selection may optimize bandwidth use.

Important point: There are miscellaneous factors which dictate that the numbers stated in this document, though substantially correct, are approximations only.

Solution
Packet forwarding rate = 13.5 Mpps [1] per module = 4.5 Mpps per packet processor [2].
This is how fast the forwarding decisions can be made for received traffic.
Assuming 64-byte packets [3], this would forward 7.776 Gbps of incoming data streams.
Assuming 75-byte packets, this would forward 9.000 Gbps of incoming data streams.
Assuming 1518-byte packets, this would exceed the module's switching capacity.
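
As a quick sanity check, the following sketch (not part of the original article; Python is used here purely for illustration) reproduces this arithmetic, using the 8-byte preamble/SFD overhead convention described in footnote [3]:

    # Forwarding-rate arithmetic for one DFE module (figures from this article).
    FORWARDING_RATE_PPS = 13.5e6   # packets per second, per module
    WIRE_OVERHEAD_BYTES = 8        # preamble + Start Frame Delimiter (footnote [3])

    def forwarded_gbps(frame_bytes, pps=FORWARDING_RATE_PPS):
        """Gbps carried if every received packet has the given frame size."""
        return pps * (frame_bytes + WIRE_OVERHEAD_BYTES) * 8 / 1e9

    print(forwarded_gbps(64))     # ~7.776 Gbps, as stated above
    print(forwarded_gbps(75))     # ~8.96 Gbps, i.e. roughly the 9.000 Gbps figure
    print(forwarded_gbps(1518))   # ~164.8 Gbps, far beyond the 18 Gbps switching capacity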

Switching capacity = 18 Gbps per module = 6 Gbps per packet processor.
This is the bandwidth for port-specific combined receive & transmit functions.
Assuming 64-byte packets, this would exceed the module's forwarding rate [4].
Assuming 75-byte packets, this would match the module's forwarding rate [4], yielding Line Speed for nine gigabit ports of Full Duplex traffic.
Assuming 1518-byte packets, this would be 1.474442 Mpps (0.737221 Mpps x 2).
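
The 1518-byte figure follows from the same overhead convention; a brief illustrative calculation (again, not from the original article):

    # Switching-capacity arithmetic: packets that fit into 18 Gbps per module.
    SWITCHING_CAPACITY_BPS = 18e9  # combined Rx + Tx per module
    WIRE_OVERHEAD_BYTES = 8        # preamble + Start Frame Delimiter (footnote [3])

    def packets_per_second(frame_bytes, capacity_bps=SWITCHING_CAPACITY_BPS):
        return capacity_bps / ((frame_bytes + WIRE_OVERHEAD_BYTES) * 8)

    combined = packets_per_second(1518)   # ~1.474 Mpps, Rx + Tx combined
    per_direction = combined / 2          # ~0.737 Mpps in each direction
    print(round(combined), round(per_direction))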

FTM2 backplane capacity = 20 Gbps per Link (10 Gbps Rx, 10 Gbps Tx).
This is the bandwidth for combined receive & transmit functions over a single FTM2 link.
Assuming balanced Full Duplex traffic between one module pair, this exceeds the modules' switching capacity.
On an N7 chassis there are 21 of these individual slot-to-slot links (N5 = 10, N3 = 3).
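
The per-chassis link counts are simply the number of slot pairs; an illustrative one-liner (slot counts assumed from the chassis model names):

    # One dedicated FTM2 link exists between every pair of slots: n * (n - 1) / 2.
    def slot_to_slot_links(slots):
        return slots * (slots - 1) // 2

    print({model: slot_to_slot_links(n) for model, n in (("N7", 7), ("N5", 5), ("N3", 3))})
    # -> {'N7': 21, 'N5': 10, 'N3': 3}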

The following chart defines which ports are allocated to individual packet processors. Optimized performance may be attained by considering the port grouping within each packet processor. For instance, you may wish to connect no more than three "power users" to any given gigabit port group, and in extreme cases may wish to leave any remaining ports in that group unused - removing local bandwidth oversubscription considerations for those users. For balanced Full Duplex traffic, backplane utilization should not be a factor.

Model # | Description | Port #s by Packet Processor (one group per packet processor, groups separated by "/") | Switching Capacity Oversubscribed? [5]

2G2072-52 | 48 10/100/1000, 4 MGBIC | 01-12,25-36 / 13-24,37-48 / 49-52 | Y
7G4202-30 | 30 10/100/1000 | 01-10 / 11-20 / 21-30 | Y
4G4202-60 | 60 10/100/1000 | 01-20 / 21-40 / 41-60 | Y
7G4202-60 | 60 10/100/1000 | 01-20 / 21-40 / 41-60 | Y
4G4202-72 | 72 10/100/1000 | 01-12,25-36 / 13-24,37-48 / 49-72 | Y
7G4202-72 | 72 10/100/1000 | 01-12,25-36 / 13-24,37-48 / 49-72 | Y
4G4205-72 | 72 10/100/1000 | 01-12,25-36 / 13-24,37-48 / 49-72 | Y
7G4205-72 | 72 10/100/1000 | 01-12,25-36 / 13-24,37-48 / 49-72 | Y
7G4270-09 | 9 MGBIC | 01-03 / 04-06 / 07-09 |
7G4270-10 | 10 MGBIC | 01-03 / 04-06 / 07-09 / 10 |
7G4270-12 | 12 MGBIC | 01-04 / 05-08 / 09-12 | Y
7G4280-19 | 18 MGBIC, NEM | 01-06 / 07-12 / 13-18 / 19-24 [6] | Y
4G4282-41 | 40 10/100/1000, NEM | 01-20 / 21-40 / 41-46 [6] | Y
7G4282-41 | 40 10/100/1000, NEM | 01-20 / 21-40 / 41-46 [6] | Y
4G4282-49 | 48 10/100/1000, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] | Y
7G4282-49 | 48 10/100/1000, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] | Y
4G4285-49 | 48 10/100/1000, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] | Y
7G4285-49 | 48 10/100/1000, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] | Y
4H4202-72 | 72 10/100 | 01-12,25-36 / 13-24,37-48 / 49-72 |
7H4202-72 | 72 10/100 | 01-12,25-36 / 13-24,37-48 / 49-72 |
4H4203-72 | 72 10/100 | 01-12,25-36 / 13-24,37-48 / 49-72 |
7H4203-72 | 72 10/100 | 01-12,25-36 / 13-24,37-48 / 49-72 |
4H4282-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
4H4283-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
4H4284-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
7H4284-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
4H4285-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
7H4382-25 | 24 10/100, NEM | 01-12,13-24 / 25-30 [6] |
7H4382-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
7H4383-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
7H4385-49 | 48 10/100, NEM | 01-12,25-36 / 13-24,37-48 / 49-54 [6] |
7K4290-02 | 2 10GBase | see below [7] | Y
7K4297-02 | 2 10GBase | see below [7] | Y
7K4297-04 | 4 10GBase | see below [7] | Y

[1] This rate assumes the connection is already programmed into hardware (5115). Up to 126,000 flows per standard module (42,000 per packet processor) may be set up (programmed into hardware) per second.

[2] As is discernible in the above chart, all modules have a maximum of three packet processors, except the 7G4270-10 which has four, the 7G4280-19 which has four when a NEM is installed, and the 7H4382-25 which has two when a NEM is installed. The throughput calculations assume three packet processors.

[3] These calculations assume 8 bytes for the preamble and Start Frame Delimiter, which are present on the wire but not part of the stated frame size. The Inter-Frame Gap is not considered herein.

[4] Assuming that the local packet processor made the forwarding decision for the traffic being transmitted - which is not the case for traffic received into the System on another port group (possibly on another module).

[5] Switching capacity could be exceeded by the use of more than three Full Duplex gigabit ports per packet processor on any module flagged above as "oversubscribed".
Possible workaround: An equivalent switching capacity would be utilized by six unidirectional gigabit ports per packet processor, with the incoming unicast traffic being directed to another packet processor (port group) for transmission. One possible application of this information is server backups, which tend to be data-intensive in only one direction. As long as the average frame size is 150 bytes or greater, this would not exceed the 4.5 Mpps forwarding rate for the local ingress packet processor, and thus Line Speed could be attained unidirectionally on all six gigabit ports. Also consider conditions at each of the egress packet processors, and backplane utilization if the traffic is destined to a separate module in the System.
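
As a rough way to apply this guideline, the sketch below (illustrative only, not from the original article) estimates the aggregate ingress packet rate for six gigabit ports running unidirectionally at line speed and compares it against the 4.5 Mpps per-processor budget. It assumes the full 20 bytes of per-frame wire overhead (preamble, SFD, and Inter-Frame Gap) that line-rate traffic actually carries, so the break-even frame size it reports is an approximation, as are the article's own figures:

    # Ingress pps budget check for one packet processor (sketch).
    PP_FORWARDING_BUDGET_PPS = 4.5e6   # per packet processor (from this article)
    GIG_PORT_BPS = 1e9                 # one gigabit port
    WIRE_OVERHEAD_BYTES = 20           # preamble + SFD + Inter-Frame Gap

    def ingress_pps(avg_frame_bytes, ports=6):
        """Aggregate packet rate of 'ports' unidirectional gigabit ports at line speed."""
        per_port = GIG_PORT_BPS / ((avg_frame_bytes + WIRE_OVERHEAD_BYTES) * 8)
        return ports * per_port

    for size in (128, 150, 256):
        print(size, round(ingress_pps(size)), ingress_pps(size) <= PP_FORWARDING_BUDGET_PPS)
    # 128-byte frames exceed the 4.5 Mpps budget; 150-byte and larger frames fit within it.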

[6] Modules which host a "NEM" (Network Expansion Module) have use of their higher port numbers if a NEM (e.g. 7G-6MGBIC) is installed, giving access to its ports and its onboard packet processor.

[7] The usable bandwidth per 10GBase port (1669) will not exceed 9 Gbps unidirectionally. The data is internally handled as nine 1 Gbps ports per 10GBase port. Considering a single 10GBase port, three of its 1 Gbps paths are allocated to each packet processor, for a total of nine paths on three processors. This results in no oversubscription (as explained above), assuming that the 10GBase data stream is such that it may be evenly allocated to each of the nine available paths. A second/third/fourth 10GBase port uses the same packet processors in the same manner, sharing the bandwidth with the first 10GBase port. Since all 10GBase activity can at best attain an aggregate of 18 Gbps with Rx and Tx combined (the total capacity of the three underlying packet processors), the use of two or more 10GBase ports on one 7K429x-xx is a potentially oversubscribed scenario.
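
A simple way to reason about this shared 18 Gbps pool is sketched below (illustrative only; the per-port 9 Gbps ceiling and the 18 Gbps module total are the figures from this footnote):

    # Shared-capacity check for the 10GBase ports on one 7K429x-xx module (sketch).
    SHARED_CAPACITY_GBPS = 18.0     # three underlying packet processors x 6 Gbps
    MAX_PER_DIRECTION_GBPS = 9.0    # nine internal 1 Gbps paths per 10GBase port

    def oversubscribed(port_loads_gbps):
        """port_loads_gbps: offered (rx, tx) load per 10GBase port, in Gbps."""
        total = 0.0
        for rx, tx in port_loads_gbps:
            # Each direction on a 10GBase port tops out at ~9 Gbps.
            total += min(rx, MAX_PER_DIRECTION_GBPS) + min(tx, MAX_PER_DIRECTION_GBPS)
        return total > SHARED_CAPACITY_GBPS

    print(oversubscribed([(9, 9)]))          # one fully loaded port: 18 Gbps, not oversubscribed
    print(oversubscribed([(9, 9), (5, 0)]))  # a second active port pushes the total past 18 Gbps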

For a discussion of Packet Buffer distribution, please refer to 1668.



INTERNAL INFORMATION


Solution
The Packet Processor chips are identified as "Empire" in various system messages and are numbered from 0 to 4, corresponding to the packet processor port groups shown in the chart above. A reference to "Emp=0" always refers to the host module, and a reference to "Emp=1", "Emp=2", or "Emp=3" may refer to either the host or the NEM module - depending on the host module type. It is important to know this when troubleshooting down to a defective component for RMA. For example, a failed "Emp=1" on a 7H4382-25 would call for RMA of the NEM, rather than RMA of the host 7H4382-25.

See also: 4585, 7138, 8842, 8932, 9044, 9047, and 9117.

SALESFORCE Case #: 494282

Primus History
Status: draft
Audience: internal
Technology: switching
Modified by: ppoyant
Date Modified: 1157652200
Owner: ppoyant
Author: ppoyant
Date Created: 1153576303
Type: definition
Review Frequency: medium
Hoth Author:
