Hi Albert,

Thank you for sharing the experience and your use case.

However, when we make any protocol extension we need to make sure all
possible deployment cases are covered, and it must be well understood how
the proposed extension will operate in the basic deployment scenarios I
enumerated. I really do not think we should be standardizing an extension
for a single use case based on behavior someone is reporting as "likely to
occur".

We all agree that if you have a p2p fiber link between routers there is no
issue.

The issue surfaces when you are using emulated circuits as your p2p links.
So the solution should allow the problem to be detected in all cases where
it can happen. Perhaps BFD is not the right tool for this. Perhaps we need
to go back to the BESS WG and report that VPWS or EVPN based p2p emulated
circuits were not designed right, as they exhibit the observed issues.

> We won't have control over how the Provider maps our traffic (BFD/data).

Well of course you do :)  Just imagine if your BFD packets (in a set equal
to the configured multiplier) started using random UDP source ports, which
would then be mapped to different ECMP buckets along the way in the
provider's underlay?
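
To make that concrete, here is a minimal sketch (my illustration, not from
the draft): it assumes the provider's underlay hashes on the ordinary
5-tuple, and the hash function, path count, and addresses are placeholders.

import random
import zlib

NUM_ECMP_PATHS = 4               # assumed underlay fan-out
BFD_DST_PORT = 3784              # single-hop BFD control port (RFC 5881)
SRC_PORT_RANGE = (49152, 65535)  # source-port range mandated by RFC 5881

def ecmp_bucket(src_ip, dst_ip, proto, src_port, dst_port):
    # Toy 5-tuple hash standing in for the provider's ECMP function.
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % NUM_ECMP_PATHS

detect_mult = 3                  # one probe per packet of the multiplier set
buckets = set()
for _ in range(detect_mult):
    sport = random.randint(*SRC_PORT_RANGE)   # vary the source port per packet
    buckets.add(ecmp_bucket("192.0.2.1", "198.51.100.2", 17,
                            sport, BFD_DST_PORT))

# With random source ports the probes land in several buckets, so a broken
# underlay path has a chance of being exercised within one detection time.
print(f"buckets exercised: {sorted(buckets)}")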

Kind regards,
Robert.


On Mon, Sep 30, 2019 at 6:11 AM Albert Fu (BLOOMBERG/ 120 PARK) <
[email protected]> wrote:

> Hi Robert,
>
>
> > Imagine two scenarios which were already highlighted as justification for
> > this work:
>
> > *Scenario 1 -* IGP with nodes interconnected with ECMP links
>
> > *Scenario 2 -* IGP nodes interconnected with L2 emulated circuits which in
> > turn are riding on telco IP network with ECMPs or LAGs.
>
> > *Questions Ad 1 - *
>
> > Is the idea to use in those cases the vendor "ECMP-Aware BFD for LDP LSPs"
> > feature to be able to detect MTU issues on any of the L3 paths? Is there a
> > feature extension to accomplish the same without LDP, just when using ECMP
> > with OSPF?
> The draft does not go into the specific use cases. I think most BFD use
> cases (certainly in our case) are on p2p IGP/eBGP links. (btw some vendors
> do not support control-plane independence for multihop BFD, making it
> unreliable for fast detection).
>
>
> The end-to-end paths may have multiple ECMP links/paths. The BFD sessions
> on the individual links along the path will detect the large-packet issue.
>
>
> > How do you solve this when there is L2 LAG hashing across N links
> > enabled?
> This is a situation where you need more than standard BFD. It is a reason
> why some customers like us prefer not to run LAG on parallel WAN circuits,
> so we can diagnose interface issues easily via standard tools like ping. It
> is a design compromise.
>
>
>
> > *Question Ad 2 - *
>
> > How do you detect it if your L2 circuit provider maps BFD flows to one
> > underlay path and some encapsulated data packets are hashed to traverse
> > the other path(s)? Clearly running multiple BFD sessions is not going to
> > help much in this scenario... For example, if someone is using the v6 flow
> > label, it may be directly copied to the outer service header.
> We won't have control over how the Provider maps our traffic (BFD/data).
> In my experience, the chances of this happening are probably small, based
> on my involvement with all of these issues: when the issue happened, all
> packets > a certain size would fail, not some getting through and some
> failing.
>
>
> Thanks
>
> Albert
>
>
>
