Ok, I think I get it, let me know if I have this right:

The correct thing to do is to handle congestion/flow control for
multiple calls on each object individually, using something like
the mechanisms provided by the C++ implementation's streaming
construct. This is important so that calls on different objects
originating from the same vat do not interfere with one another
(from TCP's standpoint they all share one stream, so TCP can't
help). This also provides backpressure with respect to the
receiving vat; it avoids queueing too many calls on the connection
overall.
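To make sure I have the shape of the idea right, here's a generic
sketch of a per-object in-flight cap (plain asyncio, *not* the actual
Cap'n Proto API; the class and method names are made up for
illustration):

```python
import asyncio

class FlowControlledObject:
    """Hypothetical wrapper for one remote object that caps how many
    calls to it are in flight at once.

    Calls on *different* objects get independent caps, so a slow
    object cannot starve the others even though they all share one
    underlying connection."""

    def __init__(self, max_in_flight=4):
        self._sem = asyncio.Semaphore(max_in_flight)
        self._active = 0
        self.peak = 0  # highest number of calls in flight at once

    async def call(self, payload):
        async with self._sem:          # blocks once the cap is reached
            self._active += 1
            self.peak = max(self.peak, self._active)
            await asyncio.sleep(0)     # stand-in for the RPC round trip
            self._active -= 1

async def demo():
    obj = FlowControlledObject(max_in_flight=4)
    # Fire off 20 calls at once; the semaphore keeps at most 4 running.
    await asyncio.gather(*(obj.call(i) for i in range(20)))
    return obj.peak

print(asyncio.run(demo()))  # → 4
```

With a cap per object rather than per connection, backpressure from
one busy object never stalls calls on its neighbors.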

Another check of my understanding: am I correct in thinking that a
client implementation that's just a single-threaded loop calling
methods on one object in sequence could safely skip in-process flow
control and let the TCP connection deal with it, since only one
stream is involved anyway? Assuming, of course, there is something
like the flow limit provided by the vat on the other side of the
connection.
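For comparison, the sequential case I'm describing looks like this
(again a generic asyncio sketch with made-up names, not a real API):

```python
import asyncio

async def rpc_call(i):
    # Stand-in for a method call on the one remote object.
    await asyncio.sleep(0)
    return i

async def sequential_client():
    results = []
    for i in range(5):
        # Awaiting each call before issuing the next means at most one
        # call is ever in flight -- an implicit in-flight cap of one,
        # leaving TCP's own flow control to handle the single stream.
        results.append(await rpc_call(i))
    return results

print(asyncio.run(sequential_client()))  # → [0, 1, 2, 3, 4]
```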

-Ian

Quoting Kenton Varda (2021-11-23 17:32:44)
>    On Tue, Nov 23, 2021 at 3:59 PM Ian Denhardt <[1][email protected]>
>    wrote:
> 
>      What are apps *supposed* to do here? It isn't clear to me where
>      else the backpressure is supposed to come from?
> 
>    Apps should cap the number of write()s they have in-flight at once.
>    (`-> stream` helps a lot with this, as it'll automatically figure out
>    how many is a good number of requests to have in flight.)
> 
>      Most apps are using sandstorm-http-bridge anyway, so they're
>      just acting like normal http servers -- which generally write
>      out data to the response stream as fast as the socket will take
>      it. Skimming sendRequest() in the bridge's source, it looks
>      like it just copies that data directly into the response
>      stream. So I am confused as to what a "well written" app would
>      do?
> 
>    sandstorm-http-bridge currently only does one outstanding write RPC at
>    a time. The code is convoluted but look at pumpWrites() -- it waits for
>    each send() to complete before performing the next one.
>    Historically there was a time where it didn't implement such a limit
>    and would just pump the whole response as fast as it got it, which led
>    to the need to do some enforcement on the supervisor side.
>    -Kenton
> 
> References
> 
>    1. mailto:[email protected]

-- 
You received this message because you are subscribed to the Google Groups 
"Cap'n Proto" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/capnproto/163770835576.11740.7320320383419454803%40localhost.localdomain.
