Hi,

This is me trying to conclude this with a text proposal for the draft, for the 
record and for the group - in line:


> On Nov 13, 2017, at 10:36 AM, Tommy Pauly <[email protected]> wrote:
> 
> 
> 
>> On Nov 13, 2017, at 10:08 AM, Michael Welzl <[email protected]> wrote:
>> 
>> 
>>> On Nov 13, 2017, at 7:54 AM, Tommy Pauly <[email protected]> wrote:
>>> 
>>> The code I work with does TCP_NOTSENT_LOWAT by default, so we have a fair 
>>> amount of experience with it.
>> 
>> I figured  :-)   AFAIK I think Stuart invented it…
>> 
>> 
>>> If you're using sockets with TCP_NOTSENT_LOWAT, and you're doing 
>>> asynchronous non-blocking socket operations, then you don't have the 
>>> annoyance of getting these "empty" callbacks you're referring to—it just 
>>> changes how aggressively the writable event fires, making it back off a bit.
>> 
>> Ah, ok, sure.
>> 
>> 
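
For the record, a minimal sketch of setting this up (assuming the Linux / 
Darwin TCP_NOTSENT_LOWAT socket option; the 16 KB threshold is just an 
example value):
***
#include <fcntl.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Ask the kernel to report the socket as writable only once less than
 * 16 KB of not-yet-sent data remains queued, and make writes
 * non-blocking so the application is driven by writable events. */
static int setup_backpressure(int fd)
{
    int lowat = 16 * 1024;   /* example value */
    if (setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT,
                   &lowat, sizeof(lowat)) < 0)
        return -1;

    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
***
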
>>> With a Post-like API, having something directly like TCP_NOTSENT_LOWAT 
>>> doesn't make much sense. Instead, the implementation may internally use 
>>> that, but the equivalent application feedback is the completion that is 
>>> returned based on the write messages. The timing of when the stack 
>>> indicates that something has been written allows the application to 
>>> understand the back-pressure properties of the transport, and if it is able to 
>>> generate or fetch data more slowly to match the back-pressure, it can. 
>>> Otherwise, it can simply keep writing and the data will get enqueued within 
>>> the library.
>> 
>> I mean, the way you describe it here, the application has no means to say 
>> how much it wants the layers below to buffer. I think that would be useful, 
>> no?
>> A big buffer is more convenient (based on the number that write returns); a 
>> smaller buffer lets the application have control over the data until the 
>> last minute.
>> But then, doesn’t this amount to the same "nuisance vs. control of data 
>> until the last minute" trade-off that I described?
> 
> It can control exactly how much is buffered, since it knows when the data has 
> been actually sent. If you never fail the enqueue of data into the API, you 
> can put as much or as little into the buffer as you want. Since you don't 
> just have a single "writable" callback from a socket, you actually get much 
> more control.

Tommy and I talked in person; Tommy explained to me that the application knows 
about the state of the buffer below because it gets a callback whenever the 
post sockets system is sending something. This may mean a lot of callbacks, but 
I guess that isn’t a real problem. This sounds good to me: this way, you don’t 
need to adjust the buffer, you can just keep it small by waiting for a callback 
if you want. But then, in my opinion, the text in the post-sockets draft needs 
to be very clear about this procedure.
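
To make sure we mean the same thing, here is a minimal sketch of that 
procedure; the carrier_send / on_sent names are invented for illustration 
and are not from the draft, and the stub fires the callback immediately so 
the sketch runs standalone:
***
#include <stdio.h>

#define MAX_OUTSTANDING 2   /* keep at most two messages queued below the API */

static int outstanding;

/* Invented stand-in for the real send call: a real stack would enqueue
 * the message and fire the callback asynchronously once the message has
 * actually been sent. */
static void carrier_send(int msg_id, void (*sent_cb)(int))
{
    printf("message %d handed to the stack\n", msg_id);
    sent_cb(msg_id);
}

/* The "sent" callback discussed above: the application learns that the
 * buffer below it has drained by one message. */
static void on_sent(int msg_id)
{
    printf("message %d sent\n", msg_id);
    outstanding--;
}

int main(void)
{
    for (int msg_id = 0; msg_id < 10; msg_id++) {
        /* Keep control of the data until the last minute: only hand over
         * a new message once earlier ones have been reported as sent. */
        while (outstanding >= MAX_OUTSTANDING)
            ;   /* a real application would wait in its event loop here */
        outstanding++;
        carrier_send(msg_id, on_sent);
    }
    return 0;
}
***
The point is simply that the application, not the stack, decides how much data 
sits below the API at any given time.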

Specifically, now there’s only this text in there:
***
   Carriers also
   provide .OnSent(), .OnAcked(), and .OnExpired() calls for binding
   default send event handlers to the Carrier, and .OnClosed() for
   handling passive close notifications.
***
… which should be expanded to explain exactly when these events are fired. So, 
OnSent fires whenever Post Sockets sends a message (and returns what? Any 
parameters?).
And FWIW, from this text, OnAcked could be a transport-layer event, which may 
not even be visible… when is this expected to fire?



>>> Dependencies between the messages that are being written, then, doesn't 
>>> actually come into play much here. Dependencies are hints to the 
>>> implementation and protocol stack of how to order work when scheduling to 
>>> send (PRE-WRITE). TCP_NOTSENT_LOWAT or the completions to indicate that 
>>> writes have completed are all about signaling back to the application to 
>>> provide back-pressure (POST-WRITE).
>> 
>> I understand the functionality is separate, but you can achieve the same 
>> with it: from an application’s point of view, if I have blocks with 
>> dependencies and if you allow me to tune the buffer below the write call, 
>> then I can decide until the last minute that I’d rather not send a certain 
>> data block.
> 
> Since the app knows which blocks are outstanding, you can always wait to 
> schedule the block, and you don't need to add dependencies. The dependencies 
> are useful when you want to express that if there is internal re-ordering 
> within a protocol, it can make sure that certain relationships between data 
> are maintained.

Also for the record, from the conversation, we agree that:
- this is useful to have,
- but it should be static, i.e. the application should not be allowed to change 
dependencies once it has specified them.

This should also be written in the draft.
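
Just to illustrate what "static" means here (again with invented names, not 
from the draft): the dependency is given exactly once, when the message is 
handed over, and there is no call to change it afterwards.
***
#include <stddef.h>
#include <stdio.h>

typedef int msg_handle;

/* Invented names, purely to illustrate "static" dependencies: the
 * dependency on an earlier message is given exactly once, when the
 * message is handed to the stack, and there is deliberately no call to
 * change it afterwards.  The stub just records the relationship. */
static msg_handle carrier_send_depends(const void *data, size_t len,
                                       msg_handle depends_on)
{
    static msg_handle next = 1;
    (void)data; (void)len;
    printf("message %d depends on message %d\n", next, depends_on);
    return next++;
}

int main(void)
{
    msg_handle a = carrier_send_depends("A", 1, 0);   /* 0 = no dependency */
    msg_handle b = carrier_send_depends("B", 1, a);   /* fixed at send time */
    (void)b;
    return 0;
}
***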

Cheers,
Michael
