Hi Nick,

Some observations on your comments ...

> - default route origination is a real pain in larger scale networks

True statement in general. But I think the DC fabric, which is really
the application area of this draft IMHO, has slightly different
characteristics from large SP transit networks.

Injecting a default (or, as you/Saku correctly hint, an anycast
default) from the ASBRs seems much more contained and secure, as such
a default will only reach the DC fabric nodes ... precisely the place
where it is needed (assuming flat routing).
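
To illustrate what I mean, the default injection on an ASBR toward a
fabric-facing EBGP peer can be as simple as the following (IOS-style
syntax; addresses and AS numbers are made up for the example):

  router bgp 65000
   ! fabric-facing spine peer; values are examples only
   neighbor 192.0.2.1 remote-as 65100
   neighbor 192.0.2.1 default-originate

With every ASBR configured the same way the fabric effectively sees an
anycast default, and traffic simply follows the closest exit.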

Building the WAN interconnect as a POP (standalone cluster), even
given the East-West traffic pattern, seems like a good idea.


> - in virtualised networks hosting third party tenancies, it is often useful
> to extend L3 to the hypervisor.  With current tech, running thousands of
> vms per cabinet is not unrealistic, and this number will undoubtedly

Great point!

Let me mention how we are handling this: by creating a local and/or
global overlay.

For the local overlay, OpenStack Neutron configures all VM networking
and has complete knowledge of the tenant-facing IP addressing. Those
addresses are advertised via a single IBGP session to the TORs, with
the specific host as next hop. There is no need to build thousands of
BGP sessions from a TOR to each physical machine.
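
As a sketch of that setup, the Neutron side can be a lightweight BGP
speaker such as ExaBGP announcing the tenant /32s with the hosting
hypervisor as next hop, over one session per TOR (syntax and all
addresses are illustrative only):

  neighbor 192.0.2.10 {                # the TOR
      router-id 192.0.2.101;
      local-address 192.0.2.101;
      local-as 65000;
      peer-as 65000;                   # single IBGP session
      static {
          # tenant VM prefixes, next hop = the physical machine
          route 10.10.1.5/32 next-hop 192.0.2.101;
          route 10.10.1.6/32 next-hop 192.0.2.102;
      }
  }

The TOR sees one session carrying all host routes, instead of a
session per hypervisor.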

Going further, if one would like to spare the BGP control plane in the
fabric itself, there is a simple and proven technology already in
place: treat the tenant networks as a VPN and carry them over the DC
fabric to the ASBRs (or further if needed :). The data plane is still
IP and only needs to carry sufficient reachability to the BGP next
hops.
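
A minimal sketch of the VPN part on the edge node (IOS-style syntax,
all values are examples):

  vrf definition TENANT-A
   rd 65000:100
   address-family ipv4
    route-target export 65000:100
    route-target import 65000:100
  !
  router bgp 65000
   address-family vpnv4
    ! ASBR or RR carrying the tenant VPN routes
    neighbor 198.51.100.1 activate

The fabric nodes in between only need routes to the BGP next hops; all
per-tenant state stays at the edges.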

> - i think you skimp over the problems associated with bgp session failure
> detection / reconvergence.  Mandating ebgp will get rid of the problems
> associated with the traditional loopback-to-loopback configuration of most
> ibgp networks, but there are still situations where dead link detection is
> going to be necessary using some form of keepalive mechanism which works
> faster than ebgp timers.

One solution is to use LAGs and lower-layer OAM; that is, I think,
going to be the most common one.

Shifting the detection to BFD for parallel links may be a bit of a
challenge today. Correct me if I am missing some way to configure
this, but today you can either run BFD over a p2p EBGP peering going
over a single link, or run multihop BFD over N links. In the latter
case you most likely use EBGP multihop or the disable-connected-check
trick. How is BFD going to help detect that one of the N links is down
in the latter case? And for the former, I assume you are not
recommending N EBGP sessions, one p2p over each link?
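
For reference, the single-link p2p case I have in mind is the
straightforward one (IOS-style syntax, addresses illustrative):

  router bgp 65000
   neighbor 192.0.2.1 remote-as 65001
   ! tear the session down on BFD failure for this directly
   ! connected peer
   neighbor 192.0.2.1 fall-over bfd

Running N of those, one per parallel link, is exactly the
configuration I understood you were not recommending.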

Cheers,
R.
_______________________________________________
GROW mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/grow
