On Fri, 22 Mar 2019, Sebastian Wiesinger wrote:

What did bother us was the limit (at least on the QFX5100) on the
number of "VLANs" (VNIs). We were testing with 30 client full-trunk
ports per leaf, and at that scale you can only provision around 500
VLANs before you get errors; essentially you run out of memory for
bridge domains on the switch. This appears to be a limitation of the
chipset used in the QFX5100, at least that's what I was told when I
asked about it.

You can check, if you know where to look:

root@SW-A:RE:0% ifsmon -Id | grep IFBD
        IFBD                       :    12884      0

root@SW-A:RE:0% ifsmon -Id | grep Bridge
        Bridge Domain              :     3502       0

These numbers combined need to be <= 16382.
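
If you want to watch this proactively, here is a minimal sh sketch for
the RE shell that sums the two counters and warns near the ceiling. The
awk field positions are assumptions based on the sample output above,
so verify them against your own ifsmon output before relying on it:

#!/bin/sh
# Sum the IFBD and Bridge Domain token counters; warn near the
# (assumed) 16382-token ceiling.
LIMIT=16382
IFBD=$(ifsmon -Id | awk '/IFBD/ { print $3; exit }')
BD=$(ifsmon -Id | awk '/Bridge Domain/ { print $4; exit }')
TOTAL=$((IFBD + BD))
echo "IFBD=$IFBD bridge-domains=$BD total=$TOTAL limit=$LIMIT"
if [ "$TOTAL" -gt $((LIMIT * 9 / 10)) ]; then
        echo "WARNING: above 90% of the hw token limit"
fi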

And if you go over the limit, these nice errors occur:

dcf_ng_get_vxlan_ifbd_hw_token: Max vxlan ifbd hw token reached 16382
ifbd_create_node: VXLAN IFBD hw token couldn't be allocated for <xe-...>

The workaround is to reduce the number of VLANs or trim the trunk config.
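
For rough planning: if each (trunk port, VLAN) pairing consumes one
IFBD hw token and each VLAN one bridge-domain token (an assumption,
but one consistent with the numbers above), the math for our setup
works out like this:

#!/bin/sh
# Back-of-envelope estimate. The per-(port, VLAN) token model is an
# assumption that matches the 30-port/~500-VLAN failure point above;
# it is not confirmed by Juniper documentation.
PORTS=30      # full-trunk client ports per leaf
VLANS=500     # VLANs/VNIs provisioned
LIMIT=16382
TOKENS=$((PORTS * VLANS + VLANS))
echo "Estimated hw tokens: $TOKENS of $LIMIT"   # 15500 of 16382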

Huh, that's potentially bad... Can you elaborate on the config a bit more? Are you hitting a limit of ~16k bridge domains total?

I've got a few really large layer 2 domains that I'm looking to start breaking up and stitching back together with EVPN+VXLAN in the middle, on the order of a few thousand VLANs apiece. Trying to plan around any likely limitations, but specifics have been hard to come by...

-Rob