Tony, yes, absolutely, that was the inherent assumption in the draft from
the start, and it's entirely my fault if this did not come out clearly. So
yes, your example, under the assumptions you took, helped clarify this
issue.  As a detail: the draft actually allows two different algorithms
(prunners) to touch as well, by simply forcing them to flood on any
"touching adjacencies". You can of course postulate that this is in fact
"full flooding on this link" and hence all algorithms are always connected
by "normal flooding". I'd go with that ;-)

Let me quickly write down _why_ not having this condition would prevent
the development of independent _distributed_ algorithms.

1. The triangle example has shown that _every_ _distributed_ algorithm must
signal what it's running to build a correct CDS (the
leader-says-implies-must-run-supported-algorithm is just a semantically
more contorted way of saying this).  As a footnote: observe that a
component could run a centralized computation algorithm just fine, as long
as the centralized computation respects the component boundary if it wants
to interop with other, centralized or distributed, algorithms. The draft
did not go into details there.
2. Every network will run, no matter what, at least _two_ algorithms: the
"normal" flooding, which is implicit in "no signalling as to reduction",
and something else. In the leader case that is one algorithm (but still
possibly multiple components connected by full flooding); in the
leaderless case, unless we do something about it, possibly more than two
algorithms.
3. The "normal" flooding obviously builds a CDS in every component it runs
in.
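To make the component notion in points 1-3 concrete, here is a minimal
sketch (the node names, the topology, and the `signaled` mapping are all
invented for illustration; none of this is from the draft): edges whose
endpoints signal the same algorithm stay inside a component, and every
other adjacency is a component boundary.

```python
def components_by_algorithm(adjacency, signaled):
    """Return connected components of the subgraph induced by edges
    whose two endpoints signal the same algorithm, tagged with that
    algorithm. Nodes signaling nothing form "normal flooding" components."""
    seen = set()
    comps = []
    for start in adjacency:
        if start in seen:
            continue
        comp, stack = {start}, [start]
        seen.add(start)
        while stack:
            u = stack.pop()
            for v in adjacency[u]:
                # Only traverse edges internal to one algorithm's component.
                if v not in seen and signaled[u] == signaled[v]:
                    seen.add(v)
                    comp.add(v)
                    stack.append(v)
        comps.append((signaled[start], comp))
    return comps

# Hypothetical topology: an X-running component touching a Y-running one.
topology = {
    "A1": ["B1", "Y1"], "B1": ["A1", "A2"], "A2": ["B1", "B2"],
    "B2": ["A2"], "Y1": ["A1", "Y2"], "Y2": ["Y1"],
}
signaled = {"A1": "X", "B1": "X", "A2": "X", "B2": "X",
            "Y1": "Y", "Y2": "Y"}
print(components_by_algorithm(topology, signaled))
```

Algorithm X ends up owning {A1, B1, A2, B2} and Y owns {Y1, Y2}, with the
A1-Y1 adjacency left as a boundary that must full flood.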

So let's first go "sufficient".  For multiple algorithms in the network to
work correctly together, it is sufficient for every algorithm to operate
_only_ on the component where it detects its own algorithm running
(observe that this does _not_ magically imply a minimal re-computation
blast radius; a distributed algorithm could very well completely recompute
the CDS on every component change, making the blast radius the component
size).  The draft shows the proof without explaining that this is part of
the "prunner" property, as you pointed out.  (I skip the optimization here
where the known semantics of "full flooding" can be used to play tricks on
the edges of a component; we can talk about that later.)
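As a hypothetical illustration of the "sufficient" condition (the function
and all names are mine, not from the draft): a prunner may only prune,
i.e. suppress flooding on, edges whose two endpoints both signal its own
algorithm. A simple checker for that invariant could look like this:

```python
def illegal_prunes(pruned_edges, signaled, my_algorithm):
    """Return the pruned edges that violate the sufficient condition:
    an algorithm may only prune edges internal to its own component,
    i.e. edges whose endpoints BOTH signal that same algorithm."""
    return [(u, v) for (u, v) in pruned_edges
            if not (signaled.get(u) == signaled.get(v) == my_algorithm)]

# Invented example: pruning A1-A2 is fine for algorithm X, but pruning
# A2-Y1 reaches across a component boundary and is illegal.
signaled = {"A1": "X", "A2": "X", "Y1": "Y"}
print(illegal_prunes([("A1", "A2"), ("A2", "Y1")], signaled, "X"))
# -> [('A2', 'Y1')]
```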

Now let's go to the "necessary". First, "normal flooding" (zero prunner)
is semantically understood by _all_ algorithms that will ever be invented,
and this may have confused the issue: if I introduce a distributed
algorithm A into the network, A will understand that anything not
signalling any algorithm is full flooding, and it can rely on that and
play tricks using topology that is not signalling anything. Now, once an
algorithm B is introduced, either A and B exhibit the "sufficient"
property and build independent CDSes within their components only, _or_,
as correctly pointed out, A and B MUST understand each other's semantics
to guarantee an overall CDS.  Imagine we go down that path and introduce
an algorithm C without meeting the "sufficient" condition either: all of a
sudden A, B and C must all understand each other's semantics to operate
correctly, and hence development of independent algorithms is impossible;
A, deployed before C existed, must be scrapped and replaced with an A'
that understands C's semantics. So either the "sufficient" condition is
considered "necessary" and independent distributed algorithms can be
developed, _or_ A, B, C and any other future algorithm are basically
completely interdependent as standards, a probably very undesirable
outcome.

I hope that clarifies the issue, and any text to be added to -prz- to
shed light on this is of course welcome.

So, in summary: when multiple algorithms are deployed (as a mix of
distributed and "centralized computation" algorithms), each of those must
signal the "running" algorithm somehow (everyone understands that
"something" is running but has no idea what the semantics/resulting CDS
is unless it's the same algorithm), and the centralized instance or the
distributed algorithm must build the CDS only within its component. As
the document already says, _all_ edges where the algorithm does not agree
on both sides must be flooded to connect all those CDSes together (and
yes, as pointed out previously, the "glued bunch of CDSes" is basically
held together by de facto full flooding again). Given that I do not see
any use for this except node-by-node migration scenarios for operational
purposes, I think that further optimizations here would be overkill for
all practical purposes.
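The summary's edge rule can be sketched as a single predicate (again a
hypothetical sketch; the function name and the example nodes, including
the "legacy" node L1 that signals nothing, are invented):

```python
def must_flood(u, v, signaled):
    """Fall back to normal flooding on any adjacency where the two
    sides do not agree on a signaled algorithm; only an edge internal
    to one algorithm's component may be subject to flooding reduction.
    A side signaling nothing means "normal flooding" and always floods."""
    a, b = signaled.get(u), signaled.get(v)
    return a is None or a != b

signaled = {"A1": "X", "A2": "X", "B1": "Y"}
print(must_flood("A1", "A2", signaled))  # False: internal to X, may reduce
print(must_flood("A2", "B1", signaled))  # True: algorithms disagree
print(must_flood("B1", "L1", signaled))  # True: L1 signals nothing
```

The third case is what glues all the per-component CDSes together: every
"touching adjacency" degrades to full flooding, so the overall graph stays
connected without any algorithm needing another algorithm's semantics.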

-- tony


On Wed, Nov 27, 2024 at 12:19 AM Tony Li <[email protected]> wrote:

>
> Hi Tony,
>
>
> A prunner MUST build CDS only within its component, i.e. algorithm X will
> only optimize the A1/B1/A2/B2 component and cannot start to "decide about
> other parts of the network" like assuming what Y will do or tell nodes in Y
> what to do. This would amount to give a simile something like "proxy
> purging" and cannot work since prunners MUST not have any semantics about
> each other (otherwise attempt to build algorithms independently and
> interoperate those  is impossible).
>
>
>
> I think we are making progress.
>
> Let me see if I can restate this:
>
> A distributed algorithm computing a flooding topology must only operate
> upon nodes running the same algorithm (and version). If multiple algorithms
> (and/or versions) are running in the same network, then any given algorithm
> and version defines a subgraph and the algorithm can only optimize flooding
> within its own subgraph. Legacy full flooding must be used between
> subgraphs of different algorithms or versions.
>
> Have I captured that accurately?
>
> Tony
>
>
_______________________________________________
Lsr mailing list -- [email protected]
To unsubscribe send an email to [email protected]
