Les Ginsberg (ginsberg) wrote on 2019-07-23 22:29:

It is a mistake to equate LSP flooding with a set of independent P2P
“connections” – each of which can operate at a rate independent
of the other.

Of course, if some routers are slow, convergence in parts of the network
might be slow. But as Stephane has already suggested, it is up to the
operators to determine whether slower convergence in parts of their
network is acceptable. E.g. they can choose to put fast/expensive/new
routers in the center of their network, and move older routers to, or
buy cheaper routers for, the edges of their network.


But I have a question for you, Les:

During your talk, or maybe in older emails, I got the impression that
you wanted to warn about another problem, namely microloops.
I am not sure I understand correctly. So let me try to explain what
I understood. And please correct me if I am wrong.


Between the time a link breaks and the time the first router(s) start
to repair the network, some traffic is dropped. Bad for that traffic,
of course, but the network keeps functioning. Once all routers have
re-converged and adjusted their FIBs, everything is fine again.

In the time in between, that is between the first router adjusting its
FIB and the last router adjusting its FIB, you can have microloops.
Microloops multiply traffic, which can cause the whole network to
suffer from congestion, impacting traffic that did not (originally) go
over the broken link.
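
To check my own understanding, here is a tiny sketch in Python. The
topology and all the names in it are made up by me, they are not from
your slides or from any draft: three routers A, B and C, a destination
D, A-B-D is the old shortest path and A-C-D the backup. If B has
already updated its FIB after the B-D link failure, but A has not yet,
a packet for D just bounces between A and B:

# Hypothetical topology (my own example):
#
#     A --- B --- D     old shortest path; the B-D link fails
#     A --- C --- D     backup path
#
def forward(fibs, src, dst, ttl=8):
    """Follow per-router FIBs hop by hop and return the path taken."""
    path = [src]
    node = src
    while node != dst and ttl > 0:
        node = fibs[node][dst]     # this router's next hop towards dst
        path.append(node)
        ttl -= 1
    return path

# Transient state: only B has updated its FIB after the B-D failure.
transient_fibs = {
    "A": {"D": "B"},   # A still uses the old path via B
    "B": {"D": "A"},   # B already points to its new path via A (then C)
    "C": {"D": "D"},   # C is directly connected to D
}
print(forward(transient_fibs, "A", "D"))
# -> ['A', 'B', 'A', 'B', ...]  the packet loops until its TTL expires

Every extra bounce is extra load on the A-B link, on top of the traffic
that was already there. That is the multiplication I mean above.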

So you want the transition from the "wrong FIBs", which still point
over the broken path, to the "final FIBs", where all traffic flows
correctly, to happen on all routers at once. That would make the
network go from "drop some traffic" to "forward over the new path"
without a stage of "some microloops" in between.

Am I correct? Is this what you are trying to prevent?
Is this why you want all flooding between routers to go at the same speed?

Thanks in advance,

henk.

