Adam,

It's right at the bottom, in note #7 of Article 1667, which describes how both 10G ports on a 7K4290-02 blade share the same 3 FTM packet processors. I'm not sure if this is the same architecture for the 7K4297-04. I know that the C3/C5 architecture for 10G is completely different, and when I queried this I got this response:

If you are asking "can I send a single 10Gb stream through the C3 switch to another 10Gb port at a full 10Gb" - the answer is yes. The light buffering in the C3 could cause packet loss when a 10Gb server is sending packets to multiple 1Gb clients. This is where buffering becomes critical and the C3 will start to drop packets when it runs out of buffering. So the 10Gb server's performance will suffer from all the packet retransmissions caused by the light buffering, and the effective rate will not be 10Gb. But if you run a test sending a single 10Gb stream through the 10Gb ports on a C3, you will see full 10Gb performance.

The S-Series is different again and can easily run at line rate for host-to-host transfers.



Title:
Elaboration of DFE Performance Expectations
Article ID:
1667
Technology:
Switching
Previous ID:
ent21567

Products
DFE

Cause
The DFE's performance expectations are generally stated as:

- "packet forwarding rate" = 13.5 million packets per second, per module
- "switching capacity" = 18 gigabits per second, per module
- "backplane capacity" = 20 gigabits per second, per link

This article elaborates upon the standard information set, to explain what is meant by these statements and to suggest ways in which port selection may optimize bandwidth use.

Important point: There are miscellaneous factors which dictate that the numbers stated in this document, though substantially correct, are approximations only.

Solution
Packet forwarding rate = 13.5 Mpps [1] per module = 4.5 Mpps per packet processor [2]
This is how fast the forwarding decisions can be made for received traffic. 
Assuming 64-byte packets [3], this would forward 7.776 Gbps of incoming data streams. 
Assuming 75-byte packets, this would forward 9.000 Gbps of incoming data streams. 
Assuming 1518-byte packets, this would exceed the module's switching capacity.
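
As a rough sanity check of these figures, a minimal sketch in Python, assuming the 8-byte preamble/SFD overhead from note [3] and ignoring the InterFrameGap, as the article itself does:

# Gbps of incoming data forwarded at 13.5 Mpps, for a given frame size.
FORWARDING_RATE_PPS = 13_500_000   # 13.5 Mpps per module
OVERHEAD_BYTES = 8                 # preamble + StartFrameDelimiter (note [3])

def forwarded_gbps(frame_bytes):
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return FORWARDING_RATE_PPS * bits_per_frame / 1e9

for size in (64, 75, 1518):
    print(f"{size:>4}-byte frames: {forwarded_gbps(size):.3f} Gbps")
# 64-byte   ->   7.776 Gbps
# 75-byte   ->   8.964 Gbps (the 9.000 figure above is the rounded value)
# 1518-byte -> 164.808 Gbps, far beyond the 18 Gbps switching capacity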

Switching capacity = 18 Gbps per module = 6 Gbps per packet processor. 
This is the bandwidth for port-specific combined receive & transmit functions. 
Assuming 64-byte packets, this would exceed the module's forwarding rate [4]. 
Assuming 75-byte packets, this would match the module's forwarding rate [4], yielding Line Speed for nine gigabit ports of Full Duplex traffic. 
Assuming 1518-byte packets, this would be 1.474442 Mpps (.737221 Mpps x 2).
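
The same sketch turned around for the switching-capacity figures (same preamble/SFD assumption; illustrative only):

# Packets per second represented by 18 Gbps of combined Rx+Tx traffic.
SWITCHING_CAPACITY_BPS = 18e9      # 18 Gbps per module
OVERHEAD_BYTES = 8                 # preamble + StartFrameDelimiter (note [3])

def switched_mpps(frame_bytes):
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return SWITCHING_CAPACITY_BPS / bits_per_frame / 1e6

print(switched_mpps(64))     # 31.25 Mpps  - exceeds the 13.5 Mpps forwarding rate
print(switched_mpps(75))     # 27.11 Mpps  - ~13.55 Mpps per direction, matching 13.5 Mpps
print(switched_mpps(1518))   #  1.474 Mpps - i.e. ~0.737 Mpps per direction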

FTM2 backplane capacity = 20 Gbps per Link (10 Gbps Rx, 10 Gbps Tx). 
This is the bandwidth for combined receive & transmit functions over a single FTM2 link. 
Assuming balanced Full Duplex traffic between one module pair, this exceeds the modules' switching capacity. 
On an N7 chassis there are 21 of these individual slot-to-slot links (N5 = 10, N3 = 3).
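
These per-chassis counts follow from one dedicated link per slot pair, which a quick check confirms (minimal sketch):

# One FTM2 link per pair of slots: n slots -> n*(n-1)/2 slot-to-slot links.
from math import comb

for chassis, slots in (("N7", 7), ("N5", 5), ("N3", 3)):
    print(f"{chassis}: {comb(slots, 2)} slot-to-slot links")
# N7: 21, N5: 10, N3: 3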

The following chart defines which ports are allocated to individual packet processors. Optimized performance may be attained by considering the port grouping within each packet processor. For instance, you may wish to connect no more than three "power users" to any given gigabit port group, and in extreme cases may wish to leave any remaining ports in that group unused - removing local bandwidth oversubscription considerations for those users. For balanced Full Duplex traffic, backplane utilization should not be a factor.

                                                                      Switching Capacity
 Model #     Description              Port#s by Packet Processor       Oversubscribed? [5]

2G2072-52  48 10/100/1000, 4 MGBIC  01-12,25-36  13-24,37-48  49-52           Y
7G4202-30  30 10/100/1000              01-10        11-20     21-30           Y
4G4202-60  60 10/100/1000              01-20        21-40     41-60           Y
7G4202-60  60 10/100/1000              01-20        21-40     41-60           Y
4G4202-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72           Y
7G4202-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72           Y
4G4205-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72           Y
7G4205-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72           Y
7G4270-09   9 MGBIC                    01-03        04-06     07-09
7G4270-10  10 MGBIC                    01-03        04-06     07-09  10
7G4270-12  12 MGBIC                    01-04        05-08     09-12           Y
7G4280-19  18 MGBIC, NEM               01-06        07-12     13-18  19-24 [6] Y
4G4282-41  40 10/100/1000, NEM         01-20        21-40     41-46 [6]       Y
7G4282-41  40 10/100/1000, NEM         01-20        21-40     41-46 [6]       Y
4G4282-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54 [6]       Y
7G4282-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54 [6]       Y
4G4285-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54 [6]       Y
7G4285-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54 [6]       Y
4H4202-72  72 10/100                01-12,25-36  13-24,37-48  49-72
7H4202-72  72 10/100                01-12,25-36  13-24,37-48  49-72
4H4203-72  72 10/100                01-12,25-36  13-24,37-48  49-72
7H4203-72  72 10/100                01-12,25-36  13-24,37-48  49-72
4H4282-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
4H4283-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
4H4284-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
7H4284-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
4H4285-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
7H4382-25  24 10/100, NEM           01-12,13-24     25-30 [6]
7H4382-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
7H4383-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
7H4385-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54 [6]
7K4290-02   2 10GBase               see below [7]                             Y

[1] This rate assumes the connection is already programmed into hardware (5115). Up to 126,000 flows per standard module (42,000 per packet processor) may be set up (programmed into hardware), per second.

[2] As is discernible in the above chart, all modules have a maximum of three packet processors, except the 7G4270-10 which has four, the 7G4280-19 which has four when a NEM is installed, and the 7H4382-25 which has two when a NEM is installed. The throughput calculations assume three packet processors.

[3] These calculations assume 8 bytes for the preamble and StartFrameDelimiter, present on the wire but not part of the stated frame size. The InterFrameGap is not considered herein.

[4] Assuming that the local packet processor made the forwarding decision for traffic being transmitted - which is not the case for traffic received into the System on another port group (possibly another module) in the System.

[5] Switching capacity could be exceeded by the use of more than three Full Duplex gigabit ports per packet processor on any module flagged above as "oversubscribed". 
Possible workaround: An equivalent switching capacity would be utilized by six unidirectional gigabit ports per packet processor, with the incoming unicast traffic being directed to another packet processor (port group) for transmission. One possible way this information could be applied is for server backups, which tend to be data-intensive in only one direction. As long as the average frame size is 150 bytes or greater, this would not exceed the 4.5 Mpps forwarding rate for the local ingress packet processor, and thus Line Speed could be attained unidirectionally on all six gigabit ports. Also consider conditions at each of the egress packet processors, and backplane utilization if the traffic is destined to a separate module in the System.
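
A small sketch of the frame-size arithmetic behind this workaround. The 12-byte InterFrameGap shown for comparison is the standard Ethernet minimum; it is not counted in the article's own calculations:

# Note [5] workaround: six unidirectional gigabit ports (6 Gbps ingress)
# feeding one packet processor must stay at or below 4.5 Mpps.
PPS_LIMIT = 4_500_000
INGRESS_BPS = 6e9

def ingress_pps(frame_bytes, overhead_bytes):
    return INGRESS_BPS / ((frame_bytes + overhead_bytes) * 8)

print(ingress_pps(150, 8))        # ~4.75 Mpps: slightly over, counting only preamble/SFD
print(ingress_pps(150, 8 + 12))   # ~4.41 Mpps: under the limit once the InterFrameGap is counted
print(INGRESS_BPS / (PPS_LIMIT * 8) - 8)   # ~158.7 bytes needed if the IFG is ignored

So the 150-byte rule of thumb is roughly consistent once real-world inter-frame gaps are taken into account, in line with the approximation caveat stated at the top of the article.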

[6] Modules which host a "NEM" (Network Expansion Module) have use of their higher port numbers if a NEM (e.g. 7G-6MGBIC) is installed, giving access to its ports and its onboard packet processor.

[7] The useable bandwidth per 10GBase port (1669) will not exceed 9 Gbps unidirectionally. The data is internally handled as nine 1 Gbps ports per 10GBase port. Considering a single 10GBase port, three of its 1 Gbps paths are allocated to each packet processor, for a total of nine paths on three processors. This results in no oversubscription (as explained above), assuming that the 10GBase data stream is such that it may be evenly allocated to each of the nine available paths. The second 10GBase port uses the same packet processors in the same manner, sharing the bandwidth with the first 10GBase port. Since the two 10GBase ports can at best attain an aggregate of 9 Gbps (18 Gbps with Rx and Tx combined), the use of two 10GBase ports on one 7K4290-02 is a potentially oversubscribed scenario.
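
The path arithmetic in this note, spelled out (the path and processor counts are as stated above; the sketch is only illustrative):

# Note [7]: each 10GBase port is carried internally as nine 1 Gbps paths,
# three per packet processor, and both ports share the same nine paths.
PROCESSORS = 3
PATHS_PER_PROCESSOR = 3
PATH_GBPS = 1

port_ceiling = PROCESSORS * PATHS_PER_PROCESSOR * PATH_GBPS
print(port_ceiling)        # 9 Gbps unidirectional ceiling for one 10GBase port
print(port_ceiling * 2)    # 18 Gbps with Rx and Tx combined
# The second 10GBase port draws on the same nine paths, so two active ports
# still cannot exceed the 9 Gbps unidirectional aggregate - hence the
# potential oversubscription flagged in the chart.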

For a discussion of Packet Buffer distribution, please refer to 1668.




Cheers

Darren



Very interesting and explains a lot! Thanks Reinhard.

Regards,

Andy
NOC Consultant (ECE-Networking)

-----Original Message-----
From: Strebler, Reinhard (SCC) [mailto:reinhard.strebler@kit.edu]
Sent: 06 February 2013 16:19
To: Enterasys Customer Mailing List
Cc: Adam Rainer
Subject: Re: [enterasys] tg in/out discards

N-Series (DFE Platinum) has a special backplane architecture (FTM2).
There are two channels of 10Gbps each. But: each of those channels is itself built from three groups providing 3 * 1.1Gbps each (i.e. nine 1.1Gbps paths per 10Gbps channel). I'm sure there is/was a knowledge base article describing this.

So the maximum performance for a single flow is 1.1Gbps. Flows are distributed by a hash algorithm across these 18 backplane channels (each 1.1Gbps). Since the distribution is done by a hash algorithm, several flows may end up on the same 1.1Gbps channel - resulting in discards.
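
A toy model of that effect, assuming a simple modulo hash purely for illustration (the actual DFE hash algorithm is not described here) and hypothetical flows:

# Flows hashed onto 18 backplane sub-channels of 1.1 Gbps each; flows that
# collide on a sub-channel share its 1.1 Gbps and may see discards.
from collections import defaultdict

CHANNELS = 18
CHANNEL_GBPS = 1.1

def channel_for(flow):
    # Illustrative stand-in for the real hash algorithm.
    return hash(flow) % CHANNELS

offered = {                              # hypothetical (src, dst) flows and their Gbps
    ("10.0.0.1", "10.0.1.1"): 1.0,
    ("10.0.0.2", "10.0.1.2"): 1.0,
    ("10.0.0.3", "10.0.1.1"): 1.0,
}

load = defaultdict(float)
for flow, gbps in offered.items():
    load[channel_for(flow)] += gbps

for channel, gbps in sorted(load.items()):
    status = "OK" if gbps <= CHANNEL_GBPS else "oversubscribed -> discards likely"
    print(f"channel {channel:2}: {gbps:.1f} Gbps offered  {status}")
# A single flow is capped at 1.1 Gbps; two 1 Gbps flows that hash to the
# same sub-channel offer 2.0 Gbps to a 1.1 Gbps path and will see discards.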

I hope this helps
Reinhard


On 06.02.2013 13:53, Adam Rainer wrote:
Hi

Yes, it is a performance issue. The N-Series is not able to handle a single 10Gb flow directly; it can carry ten 1Gb sessions at the same time, but not one 10Gb transfer session. The N-Series was not designed for 10Gb.



Mit freundlichen Grüßen / Best regards

Rainer ADAM
System Engineer



---
To unsubscribe from enterasys, send email to [email protected] with the
body: unsubscribe enterasys [email protected]




----------------------------------------------------
Darren Coleman | Operations Manager | Division of Information - Converged Networks | Building #56, The Australian National University, Canberra, ACT, 0200, Australia | E: [email protected] | T: +61 2 6125 4627 | F: +61 2 6125 8199 | W: http://information.anu.edu.au

CRICOS Provider #00120C


