Hi Edward.

h2 doesn't mandate a dynamic or static N. But it does allow multiple
outstanding requests and responses, a form of pipelining, and per-stream
flow control. However, the implementation must actually support these to get
the benefit.

So, yes, the protocol has the ability to break the RTT bounds on throughput.
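
To make the RTT bound concrete, here's a toy back-of-the-envelope sketch
(the numbers are illustrative, not from any benchmark):

```python
# Throughput ceiling for request/response protocols: with at most
# N requests outstanding, you can complete at most N requests per
# round trip, regardless of available bandwidth.
def max_requests_per_sec(n_outstanding, rtt_seconds):
    return n_outstanding / rtt_seconds

# One request at a time over a 50 ms RTT link:
assert max_requests_per_sec(1, 0.050) == 20.0
# Ten outstanding requests: 10x the throughput, same RTT.
assert max_requests_per_sec(10, 0.050) == 200.0
```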

HTTP/1.1 can too. But doing so takes you well outside standard usage.

REST over h2 is a great place to be. But if you want something simple, h2
and the mapping of REST onto it are a lot more complex than you might want.

On Fri, Aug 18, 2017 at 4:55 PM, Edward Sargisson <[email protected]> wrote:

> I'm finding this discussion an extremely good summary.
>
> For clarity, is HTTP/2 an example of dynamic N and/or properly pipelined?
> (I may have the definitions of these two confused)
>
> In other words, if you use HTTP/2 do you manually break the RTT bounds?
>
> Cheers,
> Edward
>
> On Friday, August 18, 2017 at 8:37:06 AM UTC-7, Todd L. Montgomery wrote:
>
>>
>>
>> On Thu, Aug 17, 2017 at 5:48 PM, Ziad Hatahet <[email protected]> wrote:
>>
>>> Hey Todd! I didn't know you frequented this list, I really appreciate
>>> the reply.
>>> Great talk by the way.
>>>
>>
>> Thanks! Glad you liked it!
>>
>>
>>>
>>> > To allow full async communication, the basic communication block must
>>> be simplex
>>> > (1-way) and not have response dependency at the application nor
>>> protocol level.
>>>
>>> Something similar to UDP then? How would we deal with reliability in
>>> that case?
>>> Otherwise, if you mean for levels higher up in the stack, how do we deal
>>> with
>>> error situations if we should not have a response dependency?
>>>
>>
>> TCP is fine for most situations demanding simplex. ACKs provide both
>> reliability and flow/congestion control. While ACKs do couple the two
>> directions, what they provide is extremely useful, and TCP itself doesn't
>> (in most cases) introduce unnecessary coupling by requiring ACKs for
>> progress. It's a very good tradeoff.
>>
>> So, UDP may not be necessary. However, when it is, adding reliability
>> with negative acknowledgements (NAKs, i.e. NAK-based ARQ: automatic
>> repeat request) is a lot simpler than you might think.
>>
>> Aeron, for example, provides reliability on top of UDP and adds flow
>> (and optionally congestion) control as well. And it can still go very,
>> very fast.
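
As a rough illustration of the NAK idea (this is a toy sketch, not Aeron's
actual protocol or API; all names here are made up):

```python
# Toy NAK (ARQ) receiver sketch: the receiver only speaks up when it
# detects a gap in sequence numbers; there are no per-message ACKs.
class NakReceiver:
    def __init__(self):
        self.next_expected = 0
        self.buffered = {}     # out-of-order messages awaiting a gap fill
        self.delivered = []

    def on_message(self, seq, payload):
        """Returns the list of NAKed sequence numbers (empty if none)."""
        if seq < self.next_expected or seq in self.buffered:
            return []          # duplicate, e.g. a retransmission we no longer need
        self.buffered[seq] = payload
        naks = [s for s in range(self.next_expected, seq)
                if s not in self.buffered]
        # Deliver any now-contiguous prefix, in order.
        while self.next_expected in self.buffered:
            self.delivered.append(self.buffered.pop(self.next_expected))
            self.next_expected += 1
        return naks

rx = NakReceiver()
assert rx.on_message(0, "a") == []
assert rx.on_message(2, "c") == [1]   # gap detected: NAK seq 1
assert rx.on_message(1, "b") == []    # retransmission fills the gap
assert rx.delivered == ["a", "b", "c"]
```

Note that the NAK is just another message flowing the other way; the sender
never blocks waiting for acknowledgement to make progress.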
>>
>> But flow control, for example, is quite different from the response
>> dependency that something like HTTP demands. HTTP responses are a very
>> poor flow control signal. In fact, HTTP/1.1, lacking usable HTTP
>> pipelining, can't do any effective muxing. That introduces head-of-line
>> blocking, limits throughput, and introduces unbounded latency. HTTP/2 is
>> better in this regard, but responses alone are not enough. Which is why
>> HTTP/2 uses window updates and per-stream flow control. Only once all of
>> that is in use do the benefits of muxing start to show up.
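
A toy sketch of that credit-style flow control (greatly simplified relative
to the connection- and stream-level windows of RFC 7540; the class and
method names are illustrative, not any real library's API):

```python
# h2-style credit flow control sketch: a sender consumes both a
# connection-level and a per-stream window; WINDOW_UPDATE frames
# replenish credit independently of any response.
class FlowControlledSender:
    def __init__(self, connection_window=65535):
        self.conn_window = connection_window
        self.stream_windows = {}

    def open_stream(self, stream_id, window=65535):
        self.stream_windows[stream_id] = window

    def can_send(self, stream_id, nbytes):
        return (nbytes <= self.conn_window
                and nbytes <= self.stream_windows[stream_id])

    def send(self, stream_id, nbytes):
        assert self.can_send(stream_id, nbytes)
        self.conn_window -= nbytes
        self.stream_windows[stream_id] -= nbytes

    def window_update(self, stream_id, nbytes):
        if stream_id == 0:     # stream 0 stands for the whole connection
            self.conn_window += nbytes
        else:
            self.stream_windows[stream_id] += nbytes

tx = FlowControlledSender(connection_window=100)
tx.open_stream(1, window=60)
tx.open_stream(3, window=60)
tx.send(1, 60)                 # stream 1 exhausts its own window
assert not tx.can_send(1, 1)   # stream 1 is blocked...
assert tx.can_send(3, 40)      # ...but stream 3 can still make progress
```

The point of the two window levels is exactly the last two lines: one slow
stream stalls itself without stalling the mux.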
>>
>> Handling errors in a simplex manner is quite easy. NAK-style
>> retransmission handles a multitude of
>> error cases and is, in general, more flexible than positive ACKs saying
>> "200 OK" (for example).
>>
>> Also, in most systems, treating errors as messages is a much simpler
>> technique. (A NAK-style retransmission is itself a message.)
>>
>>
>>>
>>> A project I worked on previously used actors deployed in a pipeline
>>> manner,
>>> where each actor would receive data from the previous actor, process the
>>> data,
>>> then forward them on to the following actor. The main information flow
>>> at the
>>> data processing level was one-way; however, we still had to implement
>>> flow
>>> control with back-pressure awareness, which made the "control plane"
>>> part of the
>>> protocol be two-way. How would this fit into what you mentioned?
>>>
>>
>> Indeed. Quite easily. Pipelines are extremely effective. For flow
>> control in that style of system, each stage can (and should) have its
>> own back pressure and push back upstream. End-to-end flow control is
>> only needed if the last stage needs to push back and induce a rate
>> slower than the individual stages' flow-control back pressure. In a
>> well-designed pipeline, though, that is almost never the persistent
>> (always happening) case.
>>
>> BTW, per-stage flow control is a prerequisite for end-to-end flow
>> control in a stable system, as the two address different concerns.
>> Without per-stage flow control, end-to-end loss MUST be handled, because
>> dropping data in the middle is the only option left when flow control is
>> overrun.
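
A minimal sketch of that per-stage push-back, using bounded queues so that
a full downstream queue blocks the producer (illustrative only, not any
particular actor framework):

```python
# Toy pipeline: each stage pulls from a bounded in-queue and pushes to
# the next stage's bounded in-queue. A full queue blocks the producer,
# so back pressure propagates upstream stage by stage, with no separate
# end-to-end control channel.
import queue
import threading

def stage(fn, inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:       # poison pill: shut down and pass it on
            outbox.put(None)
            return
        outbox.put(fn(item))   # blocks when downstream is full: back pressure

q1, q2, q3 = (queue.Queue(maxsize=4) for _ in range(3))
threads = [
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)),
]
for t in threads:
    t.start()
for i in range(10):
    q1.put(i)                  # blocks if the pipeline falls behind
q1.put(None)
results = []
while (item := q3.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
assert results == [i * 2 + 1 for i in range(10)]
```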
>>
>>
>>>
>>>
>>> > For example, an extremely common problem with some systems using REST
>>> is
>>> > throughput being restricted by RTT due to response coupling
>>> (application or
>>> > protocol).
>>>
>>> If I understand this correctly, what you're saying is that we won't be
>>> able to
>>> batch requests for example due to this tight coupling between requests
>>> and
>>> responses, among potentially other performance improvements as mentioned
>>> in the
>>> talk.
>>>
>>
>> Indeed. What normally happens in HTTP/1.1 (and most systems using REST
>> still use HTTP/1.1) is that a client sends a single request and waits
>> for the response. This limits throughput to one request per RTT.
>>
>> The next step is someone says, let's have N requests outstanding at a
>> time. So correlation of responses is introduced in some way, usually
>> very brittle and usually buggy. Throughput goes up somewhat, but lags
>> where it could be. One problem is that N is normally static, and it's
>> hard to determine what N should be. Maybe 5? Maybe 6? Let's try 7.....
>> or 8?
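
That static-N-plus-correlation approach tends to look something like this
toy sketch (all names hypothetical):

```python
# Sketch of the usual (brittle) fix: tag each request with a
# correlation id, cap outstanding requests at a static N, and match
# responses back by id.
import itertools

class Correlator:
    def __init__(self, max_outstanding):
        self.max_outstanding = max_outstanding
        self.ids = itertools.count()
        self.pending = {}      # correlation id -> in-flight request

    def try_send(self, request):
        """Returns a correlation id, or None when the static window is full."""
        if len(self.pending) >= self.max_outstanding:
            return None        # caller must wait for a response to free a slot
        cid = next(self.ids)
        self.pending[cid] = request
        return cid

    def on_response(self, cid):
        return self.pending.pop(cid)

c = Correlator(max_outstanding=2)
a, b = c.try_send("req-a"), c.try_send("req-b")
assert c.try_send("req-c") is None   # window full: capped at N per RTT
assert c.on_response(a) == "req-a"   # a response frees a slot...
assert c.try_send("req-c") is not None
```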
>>
>> Most systems stop there, as it is tough to know how to make things
>> better. But notice that throughput is still limited to N requests per
>> RTT. Better, but still bounded by the RTT.
>>
>> One trick is to make N dynamic, which is emulating flow control. But in
>> this case flow control couples requests and responses.
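
One way to make N dynamic is to borrow a congestion-control idea such as
AIMD (additive increase, multiplicative decrease); a toy sketch, with
made-up thresholds:

```python
# AIMD-style dynamic window: grow N additively while things go well,
# halve it on a timeout. The floor/ceiling values are arbitrary.
def adjust_window(n, saw_timeout, floor=1, ceiling=256):
    if saw_timeout:
        return max(floor, n // 2)   # multiplicative decrease
    return min(ceiling, n + 1)      # additive increase

n = 8
n = adjust_window(n, saw_timeout=False)
assert n == 9
n = adjust_window(n, saw_timeout=True)
assert n == 4
```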
>>
>> About this time, most systems also start to realize that HTTP/1.1
>> cannot multiplex requests/responses and that HTTP pipelining is hard to
>> do well.
>>
>> The trick, a lot of the time, is to decouple requests from responses
>> (treat each as separate messages in a truly async fashion) and to flow
>> control each direction (request and response) separately instead of
>> coupling them to one another.
>>
>>
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "mechanical-sympathy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
