Hi Robert,

> Imagine two scenarios which were already highlighted as justification for
> this work:

> *Scenario 1 -* IGP with nodes interconnected with ECMP links

> *Scenario 2 -* IGP nodes interconnected with L2 emulated circuits which in
> turn are riding on telco IP network with ECMPs or LAGs.

> *Questions Ad 1 - *

> Is the idea to use in those cases "ECMP-Aware BFD for LDP LSPs" vendor's
> feature to be able to detect MTU issues on any of the L3 paths ? Is there
> feature extension to accomplish the same without LDP just when using ECMP
> with OSPF ?
The draft does not go into the specific use cases. I think most BFD use cases 
(certainly in our case) are on p2p IGP/eBGP links. (btw some vendors do not 
support control-plane-independent BFD for multihop sessions, making it 
unreliable for fast detection). 


The end-to-end path may traverse multiple ECMP links/paths. The BFD sessions 
on the individual links along the path will detect the large-packet issue.


> How do you solve this when there is L2 LAG hashing across N links enabled ?
This is a situation where you need more than standard BFD. It is one reason why 
some customers, like us, prefer not to run LAGs on parallel WAN circuits, so we 
can diagnose interface issues easily via standard tools like ping. It is a 
design compromise.
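(For example, with one circuit per interface, a simple size sweep with the 
DF bit set will pinpoint an MTU problem on a specific link. A rough sketch, 
assuming Linux ping and a hypothetical peer address 192.0.2.1:)

```shell
# Probe the far end of a single WAN circuit with Don't-Fragment set.
# -M do  : set DF, so oversized packets fail instead of fragmenting
# -s     : ICMP payload size; 1472 + 28 bytes of headers = 1500-byte packet
ping -M do -s 1472 -c 3 192.0.2.1   # should succeed if link MTU is 1500
ping -M do -s 1473 -c 3 192.0.2.1   # should fail on a clean 1500-byte link
```

With a LAG in the path you lose this determinism: each probe is hashed onto 
one member link, so a failing member may never be exercised.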


> *Question Ad 2 - *

> How do you detect it if your L2 circuit provider maps BFD flows to one
> underlay path and some encapsulated data packets is hashed to traverse the
> other path(s) ? Clearly running multiple BFD sessions is not going to help
> much in this scenario .... For example if someone is using v6 flow label it
> may be directly copied to the outer service header.
We won't have control over how the provider maps our traffic (BFD or data). In 
my experience the chance of this happening is probably small: in the cases I 
have been involved with, when the issue occurred, all packets above a certain 
size would fail, rather than some getting through and some failing.


Thanks

Albert
