Hi all,
> Would you advise avoiding bandwidth-based metrics in e.g. datacenter
> or campus networks as well?
>
> (I am myself running a mostly DC network, with a little bit of campus
> network on the side, and we use bandwidth-based metrics in our OSPF.
> But we have standardized on using 3 Tbit/s as our "reference bandwidth",
> and Junos doesn't allow us to set that, so we set explicit metrics.)
As Adam has already mentioned, DC networks are becoming more and more
Clos-based, so you basically don't need OSPF for this at all.
Fabric uplinks, backbone/DCI and legacy links still exist, of course, but in
the DC we tend to ECMP everything, so you normally don't want
unequal-bandwidth links in parallel there.
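For context, a minimal Junos sketch of the usual way to actually spread
traffic across equal-cost OSPF paths: export a load-balancing policy to the
forwarding table (policy name is just a placeholder):

    set policy-options policy-statement ECMP-LB then load-balance per-packet
    set routing-options forwarding-table export ECMP-LB

(Despite the keyword, "per-packet" results in per-flow hashing on most
modern Junos platforms.)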
Workarounds happen, of course: sometimes you have no 100G ports left and need
to plug in, say, 4x40G "temporarily" alongside two existing 100G links that
are starting to saturate. In such a case you'd rather consciously decide
whether you want to ECMP those ~200 Gig of traffic across all six links
(2x100G + 4x40G) or use the 40G links as a backup only (which might not be
the best idea in this scenario); see the sketch below.
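As a rough illustration of the two options with explicit per-interface
metrics (interface names, area and metric values are made up, not from the
original setup):

    # Option 1: ECMP across all six links - same metric everywhere
    set protocols ospf area 0.0.0.0 interface et-0/0/0.0 metric 10
    set protocols ospf area 0.0.0.0 interface et-0/0/1.0 metric 10
    set protocols ospf area 0.0.0.0 interface et-0/0/2.0 metric 10
    (and likewise for the remaining 40G interfaces)

    # Option 2: 40G links as backup only - give them a worse metric
    set protocols ospf area 0.0.0.0 interface et-0/0/2.0 metric 100
    (and likewise for the remaining 40G interfaces)

Since OSPF only load-balances over equal-cost paths, Option 2 means the 40G
links carry traffic only once both 100G links are gone.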
So it's not the reference bandwidth itself that is bad in the DC; rather, the
use cases where it can technically work are not a great fit for modern DC
networks.
--
Pavel