Hi Robert,
>> > - Bypass paths may be already saturated with traffic causing even
>> > further traffic oscillations
>>
>> This is true, but then that implies that the network is not prepared to
>> handle link failure of the protected link. If the network is
>> under-engineered to begin with, this feature will not magically improve
>> things. Capacity is a zero-sum game and this feature assumes that there
>> is adequate capacity.
>
> Not really.

Really. If your network is capacity challenged, then this feature will not
fix your network.

> First let's observe that most networks are engineered to handle single
> failure of a node or a link. Properly handling multiple simultaneous
> failures is in the vast majority of cases not the case. Of course it also
> depends on the locality of the multiple failures.

We’re not suggesting that a network needs to handle multiple simultaneous
correlated failures.

> But there is a much more important point to be stated in respect to
> handling FRR as a result of link or remote node failure vs triggering FRR
> based on the link congestion threshold being crossed.
>
> The former is a local event and link/node failures are isolated events.
> Congestion is not. If congestion happens on a core link of the node it is
> very likely it happened on many nodes at the same time which were
> unfortunate to sit on the path of subject flows.

If congestion is network-wide, then yes, TTE will not help. Again, capacity
is a zero-sum game. We can only shed load, and we need capacity to redirect
it to.

If there are many links along a path that are congested, then TTE may help.
It can activate prefixes on each of the congested links, thereby alleviating
congestion along the path.

> Because of this observation the network-wide effect of the former cannot
> be compared 1:1 with the effect of the latter.

Both link failure and TTE activation are going to shift traffic onto the
bypass path. If your network can’t support that, then the use of bypass
paths is not recommended.

>> As stated, TTE is meant to be used in conjunction with classical TE
>> operating on a much longer time scale. If classical TE corrects the
>> overload situation (which itself will require path changes and impact
>> end-to-end protocols), then TTE will deactivate prefixes and labels and
>> return traffic to the primary path.
>
> That is actually one question I forgot to ask .. when or based on what
> network event do you "deactivate" TTE?

When the utilization on the protected link falls below the low threshold,
TTE will return traffic to the protected link (the sketch later in this
message illustrates this two-threshold behavior).

>> Some people actually like delivering their best effort traffic.
>
> Again this was not the point to say that best effort traffic is not
> important.
>
> It was to say that observation of "congestion" should consider configured
> QoS queues not bits sent out the interface.

Some people choose to act prior to queue build-up. As you know, traffic is
bursty. Queues can wax and wane very rapidly. Responding to just queue depth
would result in a very high-frequency oscillation, which would be an
inappropriate time frame for traffic redirection.

Further, we want to be able to address congestion before there is
significant queue occupancy. Using TTE, an operator could, for example,
start to redirect traffic when a link exceeds an unusual traffic level but
well before there is significant queuing.
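To make that concrete, here is a minimal Python sketch of the two-threshold
behavior. Everything in it (the class and constant names, the threshold
values, the EWMA smoothing) is my own illustration of the idea, not a
mechanism the draft specifies:

# Illustrative sketch only: TteController, HIGH_THRESHOLD, LOW_THRESHOLD,
# and ALPHA are invented names/values, not from the draft. Utilization is
# smoothed with an EWMA so short bursts do not flap the state; the gap
# between the two thresholds provides hysteresis.

HIGH_THRESHOLD = 0.90  # activate TTE above 90% smoothed utilization
LOW_THRESHOLD = 0.70   # return traffic to the protected link below 70%
ALPHA = 0.2            # EWMA smoothing factor


class TteController:
    def __init__(self) -> None:
        self.smoothed_util = 0.0
        self.active = False  # is traffic currently on the bypass path?

    def on_sample(self, raw_util: float) -> None:
        """Feed one utilization sample (0.0-1.0) for the protected link."""
        self.smoothed_util = ALPHA * raw_util + (1 - ALPHA) * self.smoothed_util
        if not self.active and self.smoothed_util > HIGH_THRESHOLD:
            self.active = True   # redirect prefixes onto the bypass path
        elif self.active and self.smoothed_util < LOW_THRESHOLD:
            self.active = False  # return traffic to the protected link

The gap between the two thresholds, together with the smoothing, is what
keeps such a controller from flapping on bursty traffic.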
> If I have massive congestion I may really want to "protect" priority flows
> and only trigger it when the priority class gets full.

If you have massive congestion and QoS, then you could have TTE redirect
your BE flows, keeping your priority flows on the primary link (see the
sketch below).

> Now I am sure creative folks will go a step further and ask to "protect"
> in such a way based on increased delay or loss on the link (with
> additional measurements). And honestly such new triggers would be safer
> than a congestion trigger as those again are localized and are not chained
> by their nature across many nodes.

Again, those measures are likely to result in high-frequency oscillation,
which we very much want to avoid.
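For the QoS case above, a hypothetical policy sketch along the same lines
(the Flow model and the class names are invented for illustration; the draft
does not define them):

# Hypothetical extension of the sketch above: only best-effort flows are
# eligible for redirection, so priority flows stay on the primary link.

from dataclasses import dataclass


@dataclass
class Flow:
    prefix: str
    qos_class: str  # e.g. "best-effort", "priority"


REDIRECTABLE_CLASSES = {"best-effort"}  # priority classes are never moved


def flows_to_redirect(flows: list[Flow]) -> list[Flow]:
    """Select only the flows whose QoS class may be moved to the bypass."""
    return [f for f in flows if f.qos_class in REDIRECTABLE_CLASSES]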
Tony

_______________________________________________
rtgwg mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/rtgwg
