Hi Huaimo,

> The way in which the flooding topology converges in the centralized 
> mode/solution is different from 
> that in the distributed mode/solution. In the former, after receiving the 
> link states for the failures,
> the leader computes a new flooding topology and floods it to every other 
> node, which receives
> and installs the new flooding topology. The workload on every non-leader 
> node is light, so it has more processing power available for a 
> procedure/method for fault tolerance to failures. However, in the latter, 
> every node computes and installs a new flooding topology after receiving 
> the link states for the failures, so it has less processing power 
> available for such a procedure/method. It is better to let each of the 
> two modes use its own procedure/method for fault tolerance to failures, 
> one that is more appropriate to it.


It’s true that a distributed solution places more demand on the average node 
than a centralized solution does. However, that is not the steady state for 
either. In the steady state, the flooding topology has already been computed 
and put in place, so the impact of the topology computation at the time of 
the topology change is nil.

In addition, the amount of work to temporarily amend the flooding topology 
should also be minimal, and by that I mean O(log n). The decision should only 
be whether or not to temporarily add a link to flooding, and the only 
information a node needs to make that decision is whether the remote node is 
already on the flooding topology. That should be a lookup in a tree that 
represents the nodes on the topology, and that lookup should be O(log n). In 
other words, it’s fast and efficient and not a significant drain on resources.
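
To make that concrete, here is a rough sketch of the decision in C++ (the 
names here, such as flooding_nodes and temporarily_enable_flooding, are 
purely illustrative and not taken from any draft or implementation):

#include <cstdint>
#include <cstdio>
#include <set>

using NodeId = std::uint32_t;

// Nodes covered by the currently installed flooding topology, kept in a
// balanced tree (std::set) so that membership tests are O(log n).
static std::set<NodeId> flooding_nodes = {1, 2, 3};

// Illustrative hook: flood on the link to 'neighbor' until the next
// flooding topology has been computed and installed.
static void temporarily_enable_flooding(NodeId neighbor)
{
    std::printf("temporarily flooding toward node %u\n", (unsigned) neighbor);
}

// Invoked when a link to 'neighbor' changes state after a failure.
static void maybe_add_temporary_link(NodeId neighbor)
{
    // O(log n) lookup: is the neighbor already on the flooding topology?
    if (flooding_nodes.find(neighbor) == flooding_nodes.end()) {
        // Not covered: temporarily add this link so the neighbor is not
        // cut off while the new flooding topology is being produced.
        temporarily_enable_flooding(neighbor);
    }
}

int main()
{
    maybe_add_temporary_link(2);  // already covered: no action
    maybe_add_temporary_link(7);  // not covered: flood temporarily
    return 0;
}

The entire cost of the decision is the set lookup, which is what makes the 
temporary amendment cheap regardless of whether the permanent topology is 
computed centrally or in a distributed fashion.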


> In the centralized solution/mode, scheduling an algorithm to compute the 
> flooding topology happens only on the leader, and then on the backup leader 
> after the leader fails. The parameters for scheduling on the leader may be 
> different from those for scheduling on the backup leader. However, in the 
> distributed solution/mode, scheduling an algorithm to compute the flooding 
> topology occurs on every node. The parameters for scheduling on all the 
> nodes need to be the same.


Actually, that’s not true.  An implementation is free to do its own internal 
scheduling however it chooses, regardless of whether it implements the 
distributed or the centralized solution.


> The procedure for achieving this is specific to the distributed mode/solution.


More accurately, it is specific to a given implementation.


> If every particular algorithm for computing the flooding topology in the 
> distributed solution/mode describes a scheduling procedure in detail 
> itself, there will be duplicated descriptions of the same procedure across 
> multiple algorithms, one of which is selected to compute the flooding 
> topology on every node. It is better for the same scheduling procedure, 
> shared by multiple algorithms, to be described in one document.


Actually, the details of scheduling are an implementation matter and do not 
affect the behavior of the protocol, so the IETF should not be specifying 
them, and they should not be discussed in any documents.

Regards,
Tony
