Hi Robert,
>> > "reduce the paths of the packets" - I meant to say reduce number of paths >> > (presumably TE end to end paths) the packets may take to traverse a domain >> > from ingress to egress. Apologies for the shortcut. >> >> If you’re asking if this changes the number of LSPs in the network, it does >> not. The goal is to consolidate existing LSPs together. > > Before the optimization each ingress will have two LSPs (likely equal cost) > to egress. If the optimization happens each ingress will be left with just > one LSP. To me this is a reduction of the number of LSPs from 2 to 1 from > each ingress node. I am not familiar with the term "LSP consolidation" nor > "LSP de-consolidation" in the realm of RSVP-TE paradigm. Apparently, we are not communicating. An LSP is an MPLS Label Switched Path. A continous set of links and nodes in the network from point A to point B is known as a path. This is distinct from an LSP. Yes, this confusing, don’t shoot me, I didn’t make this up. Our goal is consolidation of paths, not LSPs. Afterwards, you would still have two LSPs to the egress, but they would likely be routed on the same path. By consolidating more LSPs onto fewer paths, there are hopefully links that are idle and can be put into power sleep mode. >> That would be either poor path computation or colossally bad luck. >> Presumably Ingress_1 has decided to move traffic for a good reason, based on >> the available data. Even with make-before-break, latencies change and can be >> performance impacting, so moving traffic is never done lightly. >> >> For Ingress_2 to look at the same state of the network and make the opposite >> decision strongly suggests a bug in its path computation logic. >> >> If the path computations are not synchronized, then Ingress_2 would see the >> results of Ingress_1’s action, providing further data to Ingress_2’s >> computation. > > OK I think I am starting to get where you are heading ... yet still many > parts of the machinery are cryptic. > > But at least for CSPF I think you are suggesting introduction of "stickiness" > behaviour to alternative LSPs to traverse nodes with the destination of the > specific egress. It's pretty interesting as so far we have been excluding > links or nodes based on a zoo of flooded data. Here the action seems > reversed. Smells like a patent to me :) Yes, this is a variant on CSPF based on power consumption. I’m not sure that ’stickiness’ is the adjective that I would use. >> That is out of scope for this document. Our job is to simply remove the >> LSPs from the path that should be placed in sleep. Further mechanisms are >> required. > > Well sure you can say a number of things are out of scope. But don't you > think that adding more flooding to link state protocols should at least be > based on some framework or architecture document illustrating how new > information passed in ISIS or OSPF is going to be used ? After all, don't we > need a bit of interoperability when it comes to the ultimate goal which is > power saving ? We do not need interoperability. Each head-end is free to perform their own path computations in any way that they see fit. In fact, this is key to incremental deployment. As such, we also do not need an architecture document. This is not an architectural change. >> Further mechanisms are required. Some of these may be implementation >> dependent. Regardless, these are out of scope for this document and LSR. > > I am not sure if shutting a line card or SFPs is an implementation matter. 
> Quite contrary I would think they need to be very well and clearly defined to > make vendor A to work in concern with vendor H in production networks when it > comes to accomplishing the goal of some power savings. I agree that shutting down a link is best done with some coordination, but the IGP is not the correct mechanism for doing this. I disagree that shutting down any other infrastructure requires or benefits from any external mechanism. T
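P.S. Since you asked about the CSPF "stickiness": below is a rough, purely
illustrative sketch (plain Python; the names, constants, and per-link "busy"
flag are my own inventions for the example, not text from the draft) of how a
head-end could bias an otherwise standard CSPF so that LSPs are drawn onto
links that already carry traffic, leaving other links idle and candidates for
sleep. Nothing here is flooded by the IGP; the "busy" flag is assumed to come
from the head-end's own knowledge of its LSPs.

    # Hypothetical illustration only -- not text from any draft or implementation.
    # A plain CSPF (Dijkstra over the TE topology) whose link cost is biased so
    # that links already carrying LSPs look cheaper and idle links look more
    # expensive. New or re-optimized LSPs then pile onto the links that are
    # already awake, leaving the idle ones free to be put to sleep.

    import heapq

    # topology: {node: [(neighbor, te_metric, carries_lsps), ...]}
    # 'carries_lsps' is an assumed per-link boolean known locally at the
    # head-end; it is NOT something flooded by the IGP in this sketch.

    IDLE_PENALTY = 10      # make idle links less attractive (tunable, assumed)
    BUSY_DISCOUNT = 0.5    # make already-used links more attractive (tunable)

    def biased_cost(te_metric, carries_lsps):
        """Effective cost used by the power-aware CSPF variant."""
        return te_metric * (BUSY_DISCOUNT if carries_lsps else IDLE_PENALTY)

    def power_aware_cspf(topology, src, dst):
        """Shortest path from src to dst under the biased cost."""
        dist = {src: 0.0}
        prev = {}
        heap = [(0.0, src)]
        while heap:
            cost, node = heapq.heappop(heap)
            if node == dst:
                break
            if cost > dist.get(node, float("inf")):
                continue
            for nbr, metric, busy in topology.get(node, []):
                new_cost = cost + biased_cost(metric, busy)
                if new_cost < dist.get(nbr, float("inf")):
                    dist[nbr] = new_cost
                    prev[nbr] = node
                    heapq.heappush(heap, (new_cost, nbr))
        if dst not in dist:
            return None
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))

    # Two equal-TE-metric paths A-B-D and A-C-D: if A-B-D already carries an
    # LSP, the next LSP to D is drawn onto it as well.
    topo = {
        "A": [("B", 10, True), ("C", 10, False)],
        "B": [("D", 10, True)],
        "C": [("D", 10, False)],
        "D": [],
    }
    print(power_aware_cspf(topo, "A", "D"))   # -> ['A', 'B', 'D']

In this toy topology the second LSP lands on A-B-D rather than A-C-D, so the
A-C and C-D links can eventually be drained and considered for sleep. Again,
this is just one way a head-end might do it; each head-end is free to compute
however it likes, which is the point about not needing interoperability.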
_______________________________________________
Lsr mailing list -- [email protected]
To unsubscribe send an email to [email protected]
