> Given the current RIPng standard timers, it could also be argued that
> RIPng, as specified, doesn't meet the convergence requirements.
> Minimising convergence time should be a goal in any routed
> environment. It is reasonable to expect that convergence time should
> not be significantly longer than the network outages users are already
> accustomed to when their CER reboots.
Detecting a dead router in the absence of an explicit retraction takes one
minute, worst case, 45 seconds average. After that, convergence happens
in 30 seconds, worst case, if you're using triggered updates. Assuming
you're also doing split horizon, typical convergence times are on the
order of 4 seconds in many reasonable topologies. (Note that Quagga's
RIPng doesn't implement triggered updates; please don't use it for
testing.)
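For concreteness, here's the arithmetic I have in mind, as a little
Python sketch. The 30-second update interval and the
declare-dead-after-two-missed-updates timeout are my assumptions, chosen
to reproduce the figures above; the 1 to 5 second triggered update delay
is the one the RIP specs suggest, if memory serves.

    # Sketch of the detection arithmetic above. Timer values are
    # assumptions chosen to match the quoted figures; check your
    # implementation's actual timers before relying on them.

    UPDATE_INTERVAL = 30.0            # seconds between periodic updates
    TIMEOUT = 2 * UPDATE_INTERVAL     # neighbour declared dead after 60 s

    # A router dies at a uniformly distributed instant after its last
    # update, so detection takes TIMEOUT minus that offset.
    worst_detection = TIMEOUT                      # 60 s: died right after updating
    avg_detection = TIMEOUT - UPDATE_INTERVAL / 2  # 45 s on average

    # Once detected, the retraction floods hop by hop; each hop delays
    # its triggered update by a random 1 to 5 seconds, so a handful of
    # hops converges in seconds, 30 s being the worst case quoted above.
    avg_trigger_delay = (1 + 5) / 2                # 3 s per hop on average
    print(f"detection: worst {worst_detection:.0f} s, avg {avg_detection:.0f} s")
    print(f"plus ~{avg_trigger_delay:.0f} s per hop for the triggered flood")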
So we're speaking of 90 seconds worst case -- including the time needed
for virtual link sensing -- and roughly 55 seconds average convergence
time, with the default timers. (Note that the biggest part of this
figure is due to the slow virtual link sensing -- counting to infinity is
not the main flaw of RIPng, contrary to common perception.)
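Continuing the sketch above, the totals are just the two phases added
up (again with my assumed timers, and taking ~10 s as the average time
for the triggered flood):

    worst_total = worst_detection + 30   # 90 s: link sensing dominates
    avg_total = avg_detection + 10       # ~55 s, matching the figure above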
That's abysmally slow when compared with OSPF, but way less than the time
needed for an ADSL modem to reboot and establish ADSL sync. I'm not sure
we should be promoting RIPng at this stage, Acee, but neither should we
disqualify it outright. If somebody comes out with strong arguments in
favour of RIPng, we should be keeping an open mind.
-- Juliusz (who is looking forward to implementing the source-sensitive
extensions to RIPng)