On Tue, Jul 23, 2019 at 5:24 PM Les Ginsberg (ginsberg) <ginsb...@cisco.com>
wrote:

> Tony –
>
>
>
> As usual, you cover a lot of territory – and even after a couple of
> readings I am not sure I got everything.
>

I was accused of being too flowery in my prose for many years, so I
adopted an acerbic, terse style ;-)

>
> *From:* Tony Przygienda <tonysi...@gmail.com>
> *Sent:* Tuesday, July 23, 2019 1:56 PM
> *To:* Les Ginsberg (ginsberg) <ginsb...@cisco.com>
> *Cc:* Tony Li <tony...@tony.li>; lsr@ietf.org
> *Subject:* Re: [Lsr] Dynamic flow control for flooding
>
>
>
>
>
>
>
> It is a mistake to equate LSP flooding with a set of independent P2P
> “connections” – each of which can operate at a rate independent of the
> other.
>
>
>
>
>
>
>
> At least my experience very much disagrees with that, and such a proposal
> seems to steer towards a slowest-receiver-in-the-whole-network problem, so
> I'll wait for others to chime in.
>
> *[Les:] This is NOT AT ALL where I am going.*
>
> *If I have a “large network” and I have a node which consistently cannot
> support the flooding rates necessary to deal with Tony Li’s example (a node
> with many neighbors fails), then the network has a problem.*
>
> *Slowing everyone down to match the flooding speed of the slowest node is
> not something I would expect a customer to accept. The network will not be
> able to deliver the expected convergence. The node in question needs to be
> identified and steps taken to either fix it or upgrade or replace it or…*
>
>
>
> *The point I am also making is that trying to run the network with some
> links flooding fast and some links flooding slow isn’t a solution either.*
>

Hmm, then I don't know what you propose in the normal case, except that
nothing seems to skin the cat properly when your network is lop-sided
enough. On which we agree, I guess ...
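
Since the argument is about whether each adjacency can flood at its own
rate, here is a minimal sketch of what per-neighbor flow control could look
like (my illustration only, not a mechanism from either draft under
discussion; the class, the constants, and the additive-increase /
multiplicative-decrease policy are all hypothetical): the sender keeps a
per-adjacency window of unacknowledged LSPs, opens it on timely acks, and
shrinks it on retransmissions, so a slow receiver throttles only its own
adjacency.

import time

class NeighborFloodState:
    """Hypothetical per-adjacency flooding state: a small transmit window
    that opens on timely acks and shrinks on retransmissions."""

    def __init__(self, min_window=1, max_window=64):
        self.window = 8            # LSPs allowed in flight to this neighbor
        self.in_flight = {}        # lsp_id -> time the LSP was sent
        self.min_window = min_window
        self.max_window = max_window

    def can_send(self):
        return len(self.in_flight) < self.window

    def on_send(self, lsp_id):
        self.in_flight[lsp_id] = time.monotonic()

    def on_ack(self, lsp_id):
        # Timely ack received: open the window a little (additive increase).
        if lsp_id in self.in_flight:
            del self.in_flight[lsp_id]
            self.window = min(self.max_window, self.window + 1)

    def on_retransmit(self, lsp_id):
        # A retransmit hints the receiver is overloaded: halve the window.
        self.window = max(self.min_window, self.window // 2)

def flood_round(lsp_queue, neighbors, transmit):
    """One flooding pass: each neighbor advances at its own pace, so a slow
    receiver delays its own adjacency rather than the whole network."""
    for lsp_id in lsp_queue:
        for nbr, state in neighbors.items():
            if state.can_send():
                state.on_send(lsp_id)
                transmit(nbr, lsp_id)

Whether per-adjacency windows like this actually help or merely mask the
slow-node problem Les describes above is exactly the disagreement in this
thread.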


>
>
> Then, to clarify on Tony's mail: the "problem" I mentioned anecdotally
> yesterday, as behavior I saw on things I did back in the day, was of course
> from when processors were still well under 1 GHz and links were in Gigs,
> not the 10s and 100s of Gigs we have today. But yes, the limiting factor
> was the flooding rate (or rather the effective processing rate of the
> receiver, AFAIR, before it started dropping the RX queues or fell far
> enough behind to cause RE-TX on the senders) in terms of the
> losses/retransmissions that caused transients, to the point that the cure
> looked worse than the disease to me then (while the disease was likely
> just a flu compared to today, given we didn't have the massively dense
> meshes we steer towards today). The base spec & mandated flooding numbers
> didn't change, but what is possible in terms of rates when breaking the
> spec did of course change with CPU/link speeds, albeit most ISIS
> implementations still date back to megahertz processors ;-) And the dinner
> was great BTW ;-)
>
>
>
> So yes, I do think that anything that floods @ a reasonable rate without
> excessive losses will work well on a well-computed double-flood-reduced
> graph; the question is how to get the "reasonable" in place, both in terms
> of numbers and in terms of mechanism, for which we saw tons of lively
> discussions/proposals yesterday, the most obvious being of course going
> and manually bumping everyone's implementation to the desired (? ;-) value
> ...  The other consideration is having the computation always try to get
> more than 2 links in the minimal cut of the graph, which should alleviate
> any bottleneck, or rather make such a cut less likely. Given the quality
> of max-disjoint-node/link graph computation algorithms, that should be
> doable, by gut feeling. If e.g. the flood rate per link is available, the
> algorithms should do even better in the centralized case.
>
>
>
> *[Les:] Convergence issues and flooding overload as a result of excessive
> redundant flooding are real – but that is a different problem (for which
> we have solutions) and we should not introduce that issue into this
> discussion.*
>

Hmm, we are trying to build flood reduction to deal with exactly this
problem, I thought, and we are trying to find a good solution in the design
space between a Hamiltonian path and not reducing any links @ all: on one
hand the specter of long flooding chains & partitions on single link
failures looms while beckoning with very low CPU load, and on the other
hand we can do nothing @ all while staring down the abyss of excessively
large, densely meshed networks and falling off the cliff of melted flooding
(a rough sketch of that minimal-cut consideration follows below) ...
So, I'm not sure I introduced anything new, but if I did, ignore my attempt
@ clarification of what I said yesterday ...
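
To make the "more than 2 links in the minimal cut" consideration concrete,
here is a rough brute-force sketch (mine, not an algorithm from any draft;
the function names and the toy topology are made up) that checks whether a
candidate flood-reduction graph survives any single or double link failure,
i.e. whether every cut contains more than 2 links:

from itertools import combinations

def connected(nodes, edges):
    # DFS reachability over an undirected edge list.
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(nodes)

def min_cut_exceeds_two(nodes, edges):
    # True iff the graph stays connected after removing any one or two
    # edges, i.e. its edge connectivity is at least 3.
    edges = list(edges)
    if not connected(nodes, edges):
        return False
    for r in (1, 2):
        for cut in combinations(range(len(edges)), r):
            remaining = [e for k, e in enumerate(edges) if k not in cut]
            if not connected(nodes, remaining):
                return False
    return True

# A 4-node ring has cuts of exactly 2 links, so it fails the check; adding
# both chords (full mesh on 4 nodes) pushes every minimal cut above 2.
ring = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
print(min_cut_exceeds_two({"A", "B", "C", "D"}, ring))   # False
mesh = ring | {("A", "C"), ("B", "D")}
print(min_cut_exceeds_two({"A", "B", "C", "D"}, mesh))   # True

A real implementation would use a proper edge-connectivity or max-flow
algorithm rather than this O(E^2) enumeration, but for the small reduced
graphs being discussed the brute force is enough to illustrate the check.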

--- tony

>
_______________________________________________
Lsr mailing list
Lsr@ietf.org
https://www.ietf.org/mailman/listinfo/lsr
