Camilo,

On Mon, Mar 24, 2025 at 02:29:02PM -0500, Camilo Cardona wrote:
> First, it seems to me that the extra churn caused by enabling this TLV
> would depend on (1) The BMP feed’s RIB source. (2) Whether reason codes
> are enabled. Do we agree that with reason codes off, churn would be lower?

I agree that without the markup the churn would be lower.  The reason codes
are a source of churn, which is the point I'm highlighting.

> Also, sourcing from adj-rib-in-pre, for instance, would result in less
> churn than sourcing from loc-rib?

Likely true in real implementations.  Implementations that prioritize their
rib-in feeds over loc-rib might have some churn visible in the loc-rib, but
the loc-rib may benefit from state compression if it is scheduled to report
later.
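To make the state-compression point concrete, here's a rough sketch (not from
any draft or implementation, names are hypothetical): if loc-rib reporting is
deferred, intermediate states for a prefix that are superseded before the
report fires are simply never sent.

```python
def compress_updates(pending):
    """Keep only the most recent state per prefix, preserving the
    order in which prefixes were first touched.  `pending` is a list
    of (prefix, state) churn events accumulated before the timer fires."""
    latest = {}
    for prefix, state in pending:
        latest[prefix] = state  # later updates overwrite earlier ones
    return list(latest.items())

# Three churn events, two of them for 10.0.0.0/8, collapse into two
# updates: only the final state per prefix reaches the station.
pending = [
    ("10.0.0.0/8", "path-A"),
    ("192.0.2.0/24", "path-B"),
    ("10.0.0.0/8", "path-C"),
]
compressed = compress_updates(pending)
```

A rib-in-pre feed reported eagerly would instead forward all three events,
which is the churn difference in question.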

Much like the original BGP RFC, we don't generally discuss prioritization
mechanisms for BMP implementations.  This is left up to local
implementations.

Where we're about to get some interesting discussion is what happens when we
run BMP over QUIC with multiple streams.  In such a case, more churn may be
presented to a station, since we lose the "benefit" of a single TCP stream
that motivates stronger scheduling behaviors.

> My questions aim to stress that the proposed marking mechanism creates
> scenario-dependent churn—worse in some cases than others. We could add
> text describing the churn in the document, but the end goal of the
> document is still to standardize the TLV (where to mark), rather than
> analyze every situation that stems from it.

I understand, and hence my inquiry about implementations.  If this has been
implemented, some of these things would no longer be theory.

But given the functionality of the markup mechanism and its possible use
cases, the question arises of how this impacts the overall behavior of BMP.

> Regarding your question and the scenario you describe, I personally care
> more about the final state than transient churn. Therefore, it would be
> nice to use as many tricks as we can to hide the churn behind state
> compression, but I accept some churn is unavoidable, and we will have to
> handle this on the receiving end.
> 
> You suggest delayed marking to reduce (not avoid) churn. Is this something
> we could propose to be configurable (something like a timer?), or would
> this be too implementation-dependent to be generalized?

Prior to worrying about the deeper implementation details, understanding the
high level desired functionality is enough to discuss the feature.

In the format provided in the draft, the consequence of the feature is that
if we enable path marking for a fully converged system[1] and do so after
the system has gone quiet, the impact is that we will re-advertise the
entire rib-in feed just to add the markings.  But, in such a quiet case, it
might advertise *once*.
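A hedged sketch of that consequence (hypothetical names, not draft text):
enabling marking on a quiet, converged feed means every route already sent
to the station gets re-advertised exactly once, solely to carry the new
marking, with no further churn while the system stays quiet.

```python
def enable_marking(rib, mark):
    """Return the re-advertisements needed to attach `mark` to every
    route already reported to the station.  `rib` maps prefix -> attrs."""
    return [(prefix, attrs, mark) for prefix, attrs in rib.items()]

# A quiet, "fully converged" rib-in snapshot.
rib = {"10.0.0.0/8": "attrs-A", "192.0.2.0/24": "attrs-B"}
readverts = enable_marking(rib, "best-path")
# One re-advertisement per existing route, each carrying the marking.
```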

This still seems a bit heavyweight.  However, I think it's the optimal
behavior.

Perhaps this motivates the question about the desire for this bit of state:
Is it wanted in real time?  Is it solely to avoid running proxy analysis on
the BMP receiver when it can't fully understand the implementation's route
selection algorithm?

-- Jeff

[1] "Fully converged" is a bit of bgp humor.  The Internet is still rather 
chatty.

_______________________________________________
GROW mailing list -- [email protected]
To unsubscribe send an email to [email protected]
