Martin, here are the only scenarios I see:

1) any pruning algorithm can generate in its component a single link that is a minimal cut of the CDS. We can only reduce if we prune links out of the topology during flooding ;-) Imagine a simple spanning tree on a graph: each link will, in a sense, partition the CDS in two. That's the algorithm's choice; e.g. disttopo says specifically that it's absolutely possible to "double-load-balance", where every node has to be covered twice by flooding, and then (unless the topology doesn't allow it) there will never be a single-link minimal cut in a component running disttopo. Otherwise, "convergence" is limited only by the flooding and the computation involved. As far as I see, in the case of disttopo the computation affects a horizon of only 2 hops (discarding the check of whether an LSP arrived from the source direction, which is normally done on demand only and can be delayed AFAIR), which is probably the smallest horizon that can achieve reduction in a distributed algo, but that's just my gut feeling here.

2) if 2 algorithms are running and the minimal cut between their components is 1 link, this is only possible if the graph already had the articulation. If you have more than one link between components X and Y, the prunner framework says that those links must _always_ be flooded, so there will never be a 1-link minimal cut there.
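To make the "single link that is a minimal cut of the CDS" condition concrete: such a link is exactly a bridge of the flooding subgraph, so a node can test a candidate flooding topology for it with Tarjan's lowpoint DFS. A minimal sketch follows; the `find_bridges` helper and the adjacency encoding are invented for illustration and are not from any draft or implementation:

```python
# Illustrative sketch: a "single link that is a minimal cut of the CDS"
# is a bridge of the flooding subgraph. Tarjan's lowpoint DFS finds all
# bridges in O(V + E). Helper name and graph encoding are made up here.

def find_bridges(adj):
    """adj: dict mapping node -> set of neighbor nodes (undirected graph)."""
    disc, low = {}, {}          # discovery time and lowpoint per node
    bridges, timer = [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:                               # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:            # v's subtree can't reach above u
                    bridges.append((u, v))

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return bridges

# A spanning tree: every retained link partitions the CDS in two.
tree = {1: {2, 3}, 2: {1}, 3: {1, 4}, 4: {3}}
print(len(find_bridges(tree)))   # 3 -- every tree edge is a bridge

# One redundant link turns the cycle's edges into non-bridges.
ring = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(find_bridges(ring))        # [] -- no single-link minimal cut
```

In these terms, scenario 1 just says a spanning-tree-style prunner leaves every retained link as a bridge, while a "double coverage" algorithm aims for an empty bridge list whenever the underlying topology permits it.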
if I missed the scenario you are concerned about, pls provide an example

--- tony

On Tue, Nov 26, 2024 at 11:26 AM <[email protected]> wrote:

> Assume this extreme case – different algorithms create a topology with a single point of failure – and this single link actually fails. How long would it take for a typical implementation to set up the new, fixed flooding graph?
>
> Best regards, Martin
>
> *From: *Tony Li <[email protected]>
> *Date: *Monday, 25 November 2024 at 17:06
> *To: *Tony Przygienda <[email protected]>
> *Cc: *lsr <[email protected]>
> *Subject: *[Lsr] Re: A counter example
>
> Tony,
>
> I’m sorry that you don’t see lack of biconnectivity as a fatal flaw. I certainly do.
>
> However, there is another extension of that counter-example that results in zero flooding. If you prefer, I will post that too.
>
> Tony
>
> On Nov 21, 2024, at 1:15 AM, Tony Przygienda <[email protected]> wrote:
>
> Thanks Tony, good drill down. I see two points here:
>
> 1. the point I take here is that in the example the resulting prunner framework flooding covers the full graph, i.e. correctness as in "sufficient flooding" is still assured.
>
> 2. the solution may be _not_ optimal in terms of constructing a single CDS, i.e. on the boundaries basically full flooding is mandated by the prunner framework. Actually the most extreme case is where _every_ node in the network runs a different algorithm and the prunner framework says "well, flood on all links with a different algorithm on the other side". Then it all collapses into full flooding again.
>
> If that's my correct reading then please observe that the -prz- draft does NOT state that in mixed algorithm scenarios _optimal_ flooding in any sense is guaranteed (optimality here seems to mean "CDS with minimal number of links"); it only says that the prunner framework will guarantee "sufficient" flooding to build an overall CDS, not less and not more.
> In fact that's the paragraph that is possibly a bit cryptic to most, saying that you'd need a "meta-prunner" algorithm for such stuff or synchronization of the boundaries of a component (funny enough, the considerations in such a design start to be closely related to arbitrary hierarchy principles ;-). There are other considerations but they become even more arcane, and AFAIS achieving an "optimal" CDS when components with multiple algorithms are mixed is in pragmatic terms not possible.
>
> So, if we agree that the prunner framework (i.e. mixing multiple algorithms) does guarantee "sufficient" flooding (i.e. a full CDS) but does NOT guarantee "only necessary" flooding, then we're in sync. And it's perfectly fine AFAIS if the WG decides that working on "multiple algorithm mix in the network" is not something to be pursued; it will be sufficient then to e.g. extend the 97xx to provide leader-based and leaderless signalling as two options (just like there is centralized computed and distributed already) and say that "mixing both modes or multiple algorithms under leaderless is outside the scope of the document". Not every problem under the sun needs to be solved by a WG, and the practical implications of such scope limitations AFAIS are limited, since for practical purposes mixing limits blast radius on migrations and nothing else really AFAIU ;-)
>
> So I guess I wasn't specific enough when I said that I don't see a counter example for the -prz- framework not being correct. By correctness I always meant "any mix of algorithms being prunners in a network will always deliver _sufficient_ flooding" and did not imply any kind of flooding optimality.
>
> thanks
>
> --- tony
>
> On Wed, Nov 20, 2024 at 10:57 PM Tony Li <[email protected]> wrote:
>
> Hi all,
>
> Tony P. asked for a counter-example to why neighbor-only algorithm information is sufficient. This email attempts to articulate just such an example.
> Suppose that we have a bi-partite network, with two halves, A and B. Part A contains nodes A1, A2, A3, …. Part B contains nodes B1, B2, B3, ….
>
> The two halves are connected by three links (A1, B1), (A2, B2), and (A3, B3).
>
> The correct flooding topology in this situation is to select exactly two of the three links. Selecting only one of the links would create a single point of failure. Selecting all three links leads to unacceptable and unnecessary flooding.
>
> Suppose that A1 and B1 are running algorithm X. All other nodes are running algorithm Y.
>
> Suppose that under algorithm X, links 2 and 3 are selected. Therefore, A1 and B1 choose to prune (A1, B1). Further, suppose that under algorithm Y, links 1 and 2 are selected. Therefore, nodes A3 and B3 choose to prune (A3, B3). Now, only (A2, B2) is selected, creating a single point of failure.
>
> The key points here are simple:
>
> - An algorithm makes assumptions about how other nodes in the topology are going to behave. If multiple algorithms are in play, those assumptions may not hold.
>
> - Two concurrent algorithms, while each individually correct, can still produce a flawed flooding topology if they are asked to interoperate.
>
> - Full flooding at the boundary between the algorithms is not sufficient to correct the situation.
>
> Regards,
> Tony
>
> _______________________________________________
> Lsr mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
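For what it's worth, the quoted counter-example can be checked mechanically. A tiny sketch follows; the set-based encoding and variable names are invented for illustration, and each half is assumed to be internally connected so only the three cross links matter:

```python
# Sketch of the quoted counter-example: three cross links between the two
# halves, pruned independently by two algorithms. Names and encoding are
# illustrative only; each half is assumed internally connected.
cross_links = {("A1", "B1"), ("A2", "B2"), ("A3", "B3")}

# Algorithm X (run by A1 and B1) selects links 2 and 3, so it prunes (A1, B1).
pruned_by_x = {("A1", "B1")}
# Algorithm Y (run by all other nodes) selects links 1 and 2, so A3 and B3
# prune (A3, B3).
pruned_by_y = {("A3", "B3")}

# A cross link survives only if no algorithm pruned it.
surviving = cross_links - pruned_by_x - pruned_by_y
print(surviving)                 # {('A2', 'B2')}
assert len(surviving) == 1       # single cross link => single point of failure
```

Each algorithm's choice is individually legal (two of the three links kept), yet the intersection of the two choices leaves a single-link cut between the halves, which is exactly the point of the counter-example.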
