From the MLXe Series Hardware Installation Guide, I can only find this additional note:
Performance in non-Turbo mode:
No line rate, even with all SFMs installed.

And for the 2x100GbE CFP2 module:
Performance in non-Turbo mode: no line rate for certain smaller packet sizes (even with all fabrics installed).

There is also a KB article, but the calculation is not easy to work out quickly: https://extremeportal.force.com/ExtrArticleDetail?an=000086074
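Since that calculation is hard to reproduce quickly, here is a minimal Python sketch of the general shape of such a per-slot fabric-capacity formula. To be clear, the function name, link counts, link rates and overhead factor below are all my own placeholder assumptions, not figures from the KB article:

# Hypothetical sketch only: none of these figures come from the KB article.
# General form: per-slot capacity = active SFMs x fabric links per SFM per
# slot x link rate, discounted for cell/header overhead on the fabric.

def slot_capacity_gbps(active_sfms, links_per_sfm_per_slot,
                       link_rate_gbps, fabric_efficiency=0.8):
    raw = active_sfms * links_per_sfm_per_slot * link_rate_gbps
    return raw * fabric_efficiency

# Made-up example: 4 SFMs, 3 fabric links each per slot, faster links in Turbo.
print(slot_capacity_gbps(4, 3, 21.0))  # hypothetical Turbo-mode link rate
print(slot_capacity_gbps(4, 3, 10.5))  # hypothetical Normal-mode link rate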

In practical terms, this already makes me reluctant to put a Gen1 card into a 100G machine; it is not worth the trouble. Instead, I would plan to add a small extra switch and connect it from a 10G module to provide the RJ45 and gigabit connections.

Also, using X-2 cards today reduces your routing scalability, dropping from the 2 million routes to a degraded 768k (with IPv6) capacity.
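For a rough sanity check against the BR-MLX-10GX24-DM figures Tim quotes below (200 Gbps per slot in Turbo mode, 12 wire-speed 10G ports in Normal mode), the arithmetic is trivial:

TURBO_SLOT_GBPS = 200        # from the doco quoted below
NORMAL_WIRESPEED_PORTS = 12  # ditto: 12 x 10G wire-speed ports in Normal mode
PORT_GBPS = 10

print(TURBO_SLOT_GBPS // PORT_GBPS)        # -> 20 wire-speed 10G ports in Turbo
print(NORMAL_WIRESPEED_PORTS * PORT_GBPS)  # -> 120 Gbps usable per slot in Normal

Which lines up with Tim's 200G/slot versus 120G/slot reading.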


On 2 Apr 2022, at 9:38, Tim Warnock wrote:

From the BR-MLX-10GX24-DM doco: " The module can support up to 200Gbps when the system fabric mode is in Turbo mode (i.e. system has only Gen 2 and Gen 3 modules such as 8x10G, 100G or 24x10G modules). The module can support up to 12 10G wire-speed ports when the system fabric mode is in Normal mode (i.e. system also has any Gen 1 modules such as 1G or 4x10G modules)."

I never used one, but I believe that makes it 200G/slot (Turbo) versus 120G/slot (Normal)?

-----Original Message-----
From: foundry-nsp <[email protected]> On Behalf Of Michael Lambert
Sent: Saturday, 2 April 2022 1:50 AM
To: foundry-nsp <[email protected]>
Subject: [f-nsp] MLXe Slot Bandwidth, Normal v Turbo

We have an MLXe with high-speed switch fabric modules. At the moment it’s running in Turbo mode with only Gen2 line cards (10 and 100 GbE). Since the claim is that the box does full line rate, each slot must deliver at least 200 Gb/s full-duplex in this mode. If we put a Gen1 card (BR-MLX-1GCx24-X) into the box and reboot into Normal mode, what does this do to the per-slot bandwidth? We haven’t been able to track down any numbers (and we don’t have an SE to ask).

Thanks,

_______________________________________________
foundry-nsp mailing list
[email protected]
http://puck.nether.net/mailman/listinfo/foundry-nsp
