Hey Tony,

as to miscabling: yep, the protocol either has to prevent such adjacencies
from coming up or it has to deal with generic topologies. If you don't want
to design miscabling prevention/topology ZTP into the protocol (like ROLL or
RIFT did), you have to deal with a generic graph, as you say. My read is that
if the graph is predictable 99.99% of the time, it's easier to restrict wrong
cabling than to deal with an arbitrary topology when tackling this problem.

Then, I get you on the centralized story ;-)  But again, if you're willing
to restrict the graph, then stuff like the RIFT distributed solution is
implemented and works fine (yes, it took scrapping the approach twice until
Pascal had the flash of intuition, fueled by my bad ideas, on how to use
MANET concepts in a novel way with hashes). BTW, implementation experience
there changed a bunch of things compared to what's published in -02, due to
interesting effects; that will show up in -03 ;-)
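
(And for a feel of the hash trick, a toy sketch only, not the actual RIFT
election, with made-up names: a node deterministically picks a subset of its
parents as flood repeaters from a hash keyed on both endpoints, so the load
spreads across children without any negotiation.)

import hashlib

def elect_flood_repeaters(my_id: str, parent_ids: list, redundancy: int = 2) -> set:
    """Illustrative only: rank parents by a hash over (child, parent)
    and keep the first `redundancy` of them as flood repeaters. Every
    child ends up with a different, but deterministic, subset."""
    ranked = sorted(
        parent_ids,
        key=lambda p: hashlib.sha256(f"{my_id}:{p}".encode()).hexdigest(),
    )
    return set(ranked[:redundancy])

# Example: "leaf7" with four parents keeps two of them as flood repeaters.
print(elect_flood_repeaters("leaf7", ["spine1", "spine2", "spine3", "spine4"]))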

Another observation, though: if you have a single mesh, then on failure the
centralized controller's reaction delay becomes the bound on how long
flooding may be disrupted (unless your single covering graph has enough
redundancy to deal with a single link failure, but then you really have two,
as I suggest ;-). That could get ugly, since installing a new mesh from the
controller will need make-before-break, methinks, with a round-trip to
possibly a lot of nodes ...
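
(To make the "enough redundancy" remark concrete, a small Python check,
nothing draft-specific: does a proposed flooding subgraph stay connected
under any single link failure? If not, you're back to waiting on the
controller round-trip.)

def connected(nodes, edges):
    """Plain DFS connectivity check over an undirected edge set."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for nbr in adj[stack.pop()]:
            if nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return seen == nodes

def survives_any_single_link_failure(nodes, flooding_edges):
    """True if removing any one edge still leaves the flooding mesh connected."""
    edges = set(flooding_edges)
    return all(connected(nodes, edges - {e}) for e in edges)

# A 4-node ring survives any single failure, a chain does not.
ring = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
chain = {("A", "B"), ("B", "C"), ("C", "D")}
print(survives_any_single_link_failure({"A", "B", "C", "D"}, ring))   # True
print(survives_any_single_link_failure({"A", "B", "C", "D"}, chain))  # False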

Overall, I'm agnostic and will watch how things play out and what people
decide needs to be done ...

>  iii) change in one of the vertex lifts
>
>
> Sorry, I don’t understand point iii).
>

A mildly stuffed (or mathematically concise ;-) way of saying that if you
have one or two covering graphs (and vertex lift is the more precise term
here, since a "covering graph" can also be an edge lift, which is irrelevant
here) and one of those subgraphs gets recomputed & distributed (due to
failures, changes in some metrics, _whatever_), then this should not lead to
disruption. Basically make-before-break as one possible design point, harder
to achieve of course in a distributed fashion ...
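
(As a toy illustration of that design point, with made-up function names and
nothing from any draft: order the changes so the new flooding links are
enabled before the old ones are torn down, so the union never loses coverage
mid-transition.)

def make_before_break(old_edges, new_edges):
    """Yield (action, edge) steps that install a recomputed flooding
    subgraph without ever shrinking below the union of old and new:
    first enable everything that is only in the new lift, then disable
    whatever was only in the old one."""
    for edge in sorted(set(new_edges) - set(old_edges)):
        yield ("enable", edge)
    for edge in sorted(set(old_edges) - set(new_edges)):
        yield ("disable", edge)

# Example: one flooding link moves from (B, C) to (B, D).
old = {("A", "B"), ("B", "C")}
new = {("A", "B"), ("B", "D")}
for step in make_before_break(old, new):
    print(step)   # ('enable', ('B', 'D')) comes out before ('disable', ('B', 'C'))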


>
>
>
> > moreover, I observe that IME ISIS is much more robust under such
> optimizations since the CSNPs catch (@ a somehow ugly delay cost) any
> corner cases whereas OSPF after IDBE will happily stay out of sync forever
> if flooding skips something (that may actually become a reason to introduce
> periodic stuff on OSPF to do CSNP equivalent albeit it won't be trivial in
> backwards compatible way on my first thought, I was thinking a bit about
> cut/snapshot consistency check [in practical terms OSPF hellos could carry
> a checksum on the DB headers] but we never have that luxury on a link-state
> routing protocol [i.e. we always operate under unbounded epsilon
> consistency ;-) and in case of BGP stable oscillations BTW not even that
> ;^} ]).
>
> Emacs
>
>
And that I cannot parse. Emacs? You want LISP code? But then, Dino may get
offended ;-)
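
Back to the DB-header-checksum aside quoted above, though. As a sketch only
(in Python rather than LISP, sorry Dino ;-), with made-up names and nothing
that exists in OSPF today: each side hashes its sorted LSA headers and
advertises the digest in its hellos, and a mismatch that persists past the
retransmission window flags the silently-out-of-sync case.

import hashlib

def lsdb_digest(lsa_headers):
    """Hash the sorted LSA headers (type, LS ID, advertising router, seq#)
    so two in-sync databases yield the same digest regardless of the order
    the LSAs were learned in. Age/checksum deliberately left out here."""
    h = hashlib.sha256()
    for hdr in sorted(lsa_headers):
        h.update(repr(hdr).encode())
    return h.hexdigest()

# Same DB contents, different arrival order -> same digest.
db_a = [(1, "1.1.1.1", "1.1.1.1", 0x80000003), (2, "2.2.2.2", "2.2.2.2", 0x80000001)]
db_b = list(reversed(db_a))
assert lsdb_digest(db_a) == lsdb_digest(db_b)

# A missing LSA shows up as a persistent digest mismatch in hellos.
assert lsdb_digest(db_a) != lsdb_digest(db_a[:1])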

--- tony
