Tony,

On 19/02/2020 08:52, [email protected] wrote:

Les,

Overall, I think you are making general statements and not providing needed specifics.


I’m sorry it’s not specific enough for you.  I’m not sure that I can help to your satisfaction.


Maybe it’s obvious to you how a receiver based window would be calculated – but it isn’t obvious to me – so please help me out here with specifics. What inputs do you need on the receive side in order to do the necessary calculation?


Well, there can be many, as it depends on the receiver’s architecture. Now, I can’t talk about things that are under NDA or are company secrets, so I’m pretty constrained.  Talking about any specific implementation is not going to be very helpful, so I propose that we stick with a simplified model to start: a box with N interfaces and a single input queue up to the CPU.  The input queue is the only possible bottleneck.

The above is nowhere close to reality, especially in a distributed system. In such a system, packets traverse multiple queues on both the LC and the RP, and an application like an IGP has no visibility into these queues.

thanks,
Peter


Further, to avoid undue complexity (for the moment — it may return), let’s assume that the input queue is sized in max-MTU packets, so that knowing the number of free entries in this queue is entirely sufficient.  Let the number of free entries be F.

As previously noted, we will want some oversubscription factor.  For the sake of a simple model, let’s consider this a constant and call it O.  [For future reference, I suspect that we will want to come back and make this more sophisticated, such as a Kalman filter, but again, to start simply… ]

Now, we want to report the free space devoted to the interface, but derated by the oversubscription factor, so we end up reporting F*O/N.
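Under the simplified model above, the whole computation fits in a few lines (the function name is mine, purely for illustration):

```python
def advertised_window(free_entries: int, oversub: float, n_interfaces: int) -> int:
    """Per-interface receive window under the simplified model:
    a single input queue of max-MTU slots shared fairly by N
    interfaces, derated by the oversubscription factor O, i.e. F*O/N."""
    return int(free_entries * oversub / n_interfaces)

# e.g. 120 free slots, oversubscription factor 2.0, 8 interfaces:
# each interface is offered a window of 30 packets.
print(advertised_window(120, 2.0, 8))
```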

Is that specific enough?


What assumptions are you making about how an implementation receives, punts, and dequeues IS-IS LSPs?


None.


And how will this lead to better performance than having TX react to actual throughput?


The receiver will have better information. It can now convey useful things like “I processed all of your packets but my queue is still congested”: this would be a PSN that acknowledges all outstanding LSPs but shows no free buffers.

And please do not say “just like TCP”. I have made some specific statements about how managing the resources associated with a TCP connection is not at all similar to managing resources for IGP flooding.
If you disagree – please provide some specific explanations.


I disagree with your disagreement.  A control loop is a very simple primitive in control theory.  That’s what we’re trying to create.  Modulating the receive window through control feedback is a control theory 101 technique.
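To make the “control theory 101” point concrete, here is a toy proportional controller (entirely my own sketch, with invented names and a made-up gain, not anything from an actual implementation): the receiver grows its advertised window when free queue space is above a target and shrinks it when below.

```python
def next_window(current_window: int, free_entries: int,
                target_free: int, gain: float = 0.5) -> int:
    """Toy proportional controller for the receive window.

    The error term is the deviation of free queue space from a target
    level; the window is nudged in proportion to that error, never
    below zero. A real design would add damping and rate limits."""
    error = free_entries - target_free
    return max(0, current_window + round(gain * error))

# Plenty of free space: window opens up.
print(next_window(10, free_entries=60, target_free=50))
# Queue filling up: window clamps down.
print(next_window(10, free_entries=30, target_free=50))
```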


It can look at its input queue and report the current space.  ~”Hi, I’ve got buffers available for 20 packets, totalling 20kB.”~ */[Les2:] None of the implementations I have worked on (at least 3) work this way./*


Well, sorry, some of them do.  In particular the Cisco AGS+ worked exactly this way under IOS Classic in the day.  It may have morphed.


    */For me how to do this is not at all obvious given common
    implementation issues such as:/*

      * */Sharing of a single punt path queue among many incoming
        protocols/incoming interfaces/*

The receiver gets to decide how much window it wants to provide to each transmitter. Some oversubscription is probably a good thing. */[Les2:] That wasn’t my point. Neither of us is advocating trying to completely eliminate retransmissions and/or transient overload./* */And since drops are possible, looking at the length of an input queue isn’t necessarily going to tell you whether you are indeed overloaded and, if so, due to what interface(s)./*


Looking at the length of the input queue does give you a snapshot of your congestion level.  You are correct, it does NOT ascribe it to specific interfaces.  A more sophisticated implementation might modulate its receive window in inverse proportion to each interface’s input rate.
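As a sketch of that refinement (the weighting scheme here is my own illustration of the one sentence above, not a worked-out design): instead of splitting the derated free space F*O evenly, weight each interface’s share by the inverse of its measured input rate, so the busiest senders are throttled hardest.

```python
def inverse_rate_windows(free_entries: int, oversub: float,
                         rates_pps: list[float]) -> list[int]:
    """Split the derated free space F*O among interfaces in inverse
    proportion to each interface's input rate (packets per second).
    Equal rates degenerate to the even F*O/N split."""
    inv = [1.0 / max(r, 1e-9) for r in rates_pps]  # guard against rate 0
    total = sum(inv)
    budget = free_entries * oversub
    return [round(budget * w / total) for w in inv]

# Two interfaces at equal rates share evenly...
print(inverse_rate_windows(100, 1.0, [100.0, 100.0]))
# ...while a sender three times as fast gets a quarter of the window.
print(inverse_rate_windows(100, 1.0, [1.0, 3.0]))
```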


*/Tx side flow control is agnostic to receiver implementation strategy and the reasons why LSPs remain unacknowledged./*


Yes, it’s ignorant.  That doesn’t make it better.  The point is to maximize the goodput.  Systems theory tells us that we improve frequency response when we provide feedback.  That’s all I’m suggesting.


      * */Distributed dataplanes/*

This should definitely be a non-issue. An implementation should know the data path from the interface to the IS-IS process, for all data planes involved, and measure accordingly.
*/[Les2:] Again, you provide no specifics. Measure “what” accordingly?/*


The input queue size for the data path from the given interface.


*/IF I do not have a queue dedicated solely to IS-IS packets to be punted (and implementations may well use a single queue for multiple protocols), what should I measure? How do I get that info to the control plane in real time?/*


You should STILL use that queue size.  That is still the bottleneck.

You get that to the control plane by doing a PIO read of the queue status register in the dataplane ASIC.  This is trivial.
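For illustration only, decoding such a status word might look like this. The register layout below is entirely invented (real ASICs define their own bitfields); the point is just that free-entry count and a congestion flag fit comfortably in one word read back over PIO:

```python
# Hypothetical queue-status register layout (invented for illustration):
QSTAT_FREE_SHIFT = 0         # low 16 bits: free queue entries
QSTAT_FREE_MASK  = 0xFFFF
QSTAT_CONG_BIT   = 1 << 31   # high bit: congestion flag

def parse_qstat(word: int) -> tuple[int, bool]:
    """Decode a 32-bit queue status word as it might be read back
    from the dataplane via PIO: (free entries, congested?)."""
    free = (word >> QSTAT_FREE_SHIFT) & QSTAT_FREE_MASK
    congested = bool(word & QSTAT_CONG_BIT)
    return free, congested

# 0x8000002A: congestion flag set, 42 free entries remaining.
print(parse_qstat(0x8000002A))
```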


    If we are to introduce new behaviors, they must be helpful.
    Estimates that do not utilize the available information may be
    sufficiently erroneous as to be harmful (see silly window syndrome).

*/[Les2:] Again – you try to apply TCP heuristics to IGP flooding. Not at all intuitive to me that this applies – I have stated why./*


Please take a class in systems theory, analog electronics, or control theory.  Just stating that it does not apply does not make it true.

Tony


_______________________________________________
Lsr mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lsr
