Got it. Thank you for the clarification.

On Thu, 15 Aug 2019 at 5:56 PM, Eric Anderson <ej...@google.com> wrote:

> The window is a *receive* window. So changing the client's window won't
> change any sending behavior. That's why I was mentioning the server's
> window, since the server will be receiving. There could also be window size
> options within the proxy itself.
>
> On Thu, Aug 15, 2019 at 5:54 PM Rama Rao <ramaraochav...@gmail.com> wrote:
>
>> Thanks for the detailed explanation. In this case it is a reverse proxy,
>> so the gRPC client is initiating the connection; it sounds like setting
>> the window might change the behaviour then? Thanks again for the
>> explanation.
>>
>> On Thu, 15 Aug 2019 at 2:07 PM, Eric Anderson <ej...@google.com> wrote:
>>
>>> On Thu, Aug 15, 2019 at 11:55 AM Rama Rao <ramaraochav...@gmail.com>
>>> wrote:
>>>
>>>> We are proxying requests with 2 MB messages through a proxy and seeing
>>>> that the proxy receives a partial request, so we are interested in
>>>> understanding the behaviour a bit more.
>>>>
>>>
>>> Hmmm... Depending on what you are seeing, there are a lot of possible
>>> explanations.
>>>
>>> 1. HTTP/2 flow control prevents gRPC from sending. Note there is both
>>> stream-level and connection-level flow control. By default grpc-java uses
>>> a 1 MB window here, although your proxy is what provides the window to
>>> the client. I only mention the 1 MB because the proxy will need to send
>>> data to a gRPC server and will initially be limited to 1 MB. Other
>>> implementations use 64 KB and auto-size the window.
>>>
>>>    - On the server-side there is a delay before we let more than 1 MB be
>>>    sent. We wake up the application and route the request, run
>>>    interceptors, and eventually the application stub will request the
>>>    request message. At that point the server allows more to be sent. If
>>>    you *suspect* this may be related to what you see, you can change the
>>>    default window
>>>    <https://grpc.github.io/grpc-java/javadoc/io/grpc/netty/NettyServerBuilder.html#flowControlWindow-int->
>>>    and see if you see different behavior. Be aware that the proxy will
>>>    have its own buffering/flow control.
>>>
>>> 2. More than one request is being sent, so they are interleaved. We
>>> currently interleave in 1 KB chunks.
>>> 3. The client is slow doing protobuf encoding. We stream out data while
>>> doing protobuf encoding, so (with some other conditions I won't get into)
>>> it is possible to see the first part of a message before the end of the
>>> message has been serialized on the client-side. This is purely CPU-limited,
>>> but could be noticed during a long GC, for example.
>>> 4. A laundry list of other things, like dropped packets.
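
The window accounting in point 1 can be sketched as a toy model. This is
not grpc-java's actual implementation (real flow control also tracks a
connection-level window and WINDOW_UPDATE frames); the class and method
names here are purely illustrative:

```java
// Toy model of an HTTP/2 stream-level receive window (illustrative only;
// real gRPC/Netty flow control also has a connection-level window).
final class FlowControlWindow {
    private int window;

    FlowControlWindow(int initialWindow) {
        this.window = initialWindow;
    }

    // Returns how many bytes the sender may put on the wire now; the rest
    // must wait in a buffer until the receiver grants more credit.
    int trySend(int bytes) {
        int sendable = Math.min(bytes, window);
        window -= sendable;
        return sendable;
    }

    // Receiver consumed data and granted more credit (a WINDOW_UPDATE frame).
    void windowUpdate(int bytes) {
        window += bytes;
    }

    public static void main(String[] args) {
        // grpc-java's default window is 1 MiB; a 2 MB message cannot all be
        // sent until the receiver replenishes the window.
        FlowControlWindow w = new FlowControlWindow(1024 * 1024);
        int message = 2 * 1024 * 1024;
        int sentNow = w.trySend(message);
        System.out.println("sent immediately: " + sentNow);
        w.windowUpdate(1024 * 1024); // server read the data, granted credit
        int sentAfterUpdate = w.trySend(message - sentNow);
        System.out.println("sent after update: " + sentAfterUpdate);
    }
}
```

This is why a proxy in the middle can observe the first ~1 MB of a 2 MB
request, then a pause until the server-side window is replenished.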
>>>
>>>> Can you explain more on “fully buffered or unbuffered depending on what
>>>> the user is truly asking”? Are you saying there is a way to control this
>>>> behaviour while making the request? If yes, can you point me to that?
>>>>
>>>
>>> I wasn't referring to something controllable. Basically the client
>>> provides an entire request, all at once. gRPC is not "buffering" that
>>> message while waiting to send it; it sends it now. But it *can't*
>>> necessarily send it *right now*. It takes time to be sent, and the
>>> message must sit in a buffer while we wait on TCP; TCP has buffers, and
>>> there are buffers in the network, and so on. We also have a "network
>>> thread" for doing the I/O, and there is a buffer to enqueue work to that
>>> thread. So lots and lots of buffers, but it all depends on your
>>> perspective and your definition of "buffer."
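
The hand-off to the network thread described above can be sketched as a
simple producer/consumer queue. This is not gRPC's actual internals
(grpc-netty uses its own WriteQueue on the Netty event loop); the class
here is a hypothetical illustration of the "sent by the caller, but still
buffered" distinction:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy sketch (not gRPC's real write path): the application hands off a
// message immediately, but it sits in a queue until a dedicated "network
// thread" drains it, just as writes sit in buffers while waiting on TCP.
final class WriteQueueSketch {
    private final BlockingQueue<byte[]> pending = new LinkedBlockingQueue<>();

    // Called by the application thread: returns immediately, message is
    // buffered, not yet on the wire.
    void enqueue(byte[] message) {
        pending.add(message);
    }

    // Would run on the network thread: drains buffered writes to the
    // socket; returns the number of bytes "written".
    int drain() {
        int bytesWritten = 0;
        byte[] msg;
        while ((msg = pending.poll()) != null) {
            bytesWritten += msg.length; // stand-in for a socket write
        }
        return bytesWritten;
    }

    public static void main(String[] args) {
        WriteQueueSketch q = new WriteQueueSketch();
        q.enqueue(new byte[1024]); // "sent" from the caller's point of view
        q.enqueue(new byte[2048]); // ...but still buffered here
        System.out.println("drained: " + q.drain() + " bytes");
    }
}
```

From the application's perspective the message was sent as soon as
`enqueue` returned; from the wire's perspective nothing happened until the
network thread drained the queue. That gap is the "buffering" in question.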
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/CAFfmCFFXpMXVUCAy-C8NXOUtGU%2BT0Kh5sHLAz7DrGuZxeKqWDA%40mail.gmail.com.