"gRPC uses the HTTP/2 default max size for a data frame of 16kb. A message 
over 16kb may span multiple data frames, whereas a message below that size 
may share a data frame with some number of other messages." - grpc.io 
<https://grpc.io/blog/grpc-on-http2/#footnotes>

"The size of a frame payload is limited by the maximum size that a receiver 
advertises in the SETTINGS_MAX_FRAME_SIZE setting. This setting can have 
any value between 2^14 (16,384) and 2^24-1 (16,777,215) octets, inclusive." 
- RFC 7540 <https://datatracker.ietf.org/doc/html/rfc7540#section-4>
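For instance, the 30KB message mentioned in the report below spans two DATA frames at the 16,384-octet default; the frame count is just a ceiling division. A minimal sketch in Go:

```go
package main

import "fmt"

// defaultMaxFrameSize is the HTTP/2 default SETTINGS_MAX_FRAME_SIZE
// (RFC 7540, section 4.2): 2^14 = 16,384 octets.
const defaultMaxFrameSize = 16384

// framesNeeded returns how many DATA frames a payload of msgLen bytes
// occupies when each frame carries at most maxFrame bytes.
func framesNeeded(msgLen, maxFrame int) int {
	return (msgLen + maxFrame - 1) / maxFrame
}

func main() {
	// A 30KB message spans two DATA frames at the default frame size.
	fmt.Println(framesNeeded(30*1024, defaultMaxFrameSize)) // 2
}
```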

I hope it helps.
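For anyone following along: the flow-control and buffer knobs mentioned in the quoted message below map to these grpc-go options. A sketch only - the address is illustrative, and 1MB/0 are the values from the report, not recommendations:

```go
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func dial() (*grpc.ClientConn, error) {
	return grpc.Dial("server.example.com:443", // illustrative address
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Per-stream and per-connection HTTP/2 flow-control windows (1MB).
		grpc.WithInitialWindowSize(1<<20),
		grpc.WithInitialConnWindowSize(1<<20),
		// Zero disables the batching buffers: write/read directly
		// to/from the wire.
		grpc.WithWriteBufferSize(0),
		grpc.WithReadBufferSize(0),
	)
}

func newServer() *grpc.Server {
	// Server-side counterparts of the same settings.
	return grpc.NewServer(
		grpc.InitialWindowSize(1<<20),
		grpc.InitialConnWindowSize(1<<20),
		grpc.WriteBufferSize(0),
		grpc.ReadBufferSize(0),
	)
}
```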
On Monday, August 22, 2022 at 3:53:31 AM UTC-3 al...@identiq.com wrote:

> Hi again,
> We captured the traffic with tcpdump.
> We see the first 16KB frame sent from the client to the server, but the 
> second one is not sent until an ack is returned from the server.
> In our case, our message size is 30KB and the round-trip latency is 
> ~220ms, so the call takes ~440ms instead of ~200ms.
>
> We set InitialWindowSize and InitialConnWindowSize to 1MB on both the 
> server and the client - no change.
> We checked both unary RPCs and streaming RPCs - same result.
> We set WriteBufferSize and ReadBufferSize to zero (write/read directly 
> to/from the wire) - the 95th-percentile latency remained the same, but 
> the average latency dropped by 100ms - not sure why it had this effect.
>
> Again, under all of the conditions above, if we increase the rate of 
> messages to more than 2 or 3 per second, the latency drops to 450ms.
> Looking at http2debug=2 logs, it seems that at the higher rate, when 
> latency is low, gRPC somehow reuses a stream previously opened by 
> another RPC to send the new RPC....
>
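For reference, the frame-level logs mentioned above come from the debug tracing in Go's HTTP/2 code, enabled through the GODEBUG environment variable (`./client` stands in for your binary):

```shell
# Enable HTTP/2 frame-level logging in a Go gRPC binary.
# http2debug=1 logs frame events; http2debug=2 also dumps frame contents.
GODEBUG=http2debug=2 ./client 2>client-http2.log
```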
> Has anyone encountered similar behaviour?
> Can anyone who understands the gRPC implementation (in Go) explain why 
> it behaves this way, and whether there is a way to make it work better?
> Any help would be very much appreciated.
> Thanks !
>
>
> On Wednesday, August 17, 2022 at 4:44:28 PM UTC+3 Alon Kaftan wrote:
>
>> Also, if we run a few dummy (small payload) calls per second, in 
>> parallel with the big-payload calls on the same connection, the latency 
>> of the big-payload calls is reduced to 250ms as well.
>> Thoughts?
>>
>> On Wednesday, August 17, 2022 at 1:46:20 PM UTC+3 Alon Kaftan wrote:
>>
>>> Hi,
>>> We have a Go gRPC client running in the US and a Go gRPC server in 
>>> APAC, both on AWS.
>>> On the same connection we make unary calls:
>>> when the message size is below 16KB, round-trip latency is 250ms;
>>> when the message size crosses 16KB, round-trip latency jumps to 450ms;
>>> when the message size crosses 100KB, round-trip latency jumps to 650ms.
>>>
>>> Also, if we increase the rate of the 16KB+ messages from 1/sec to 
>>> 2/sec or more, the latency drops back to 250ms.
>>>
>>> We ruled out load balancers by connecting the client and server pods 
>>> directly and observing that the behaviour remains the same.
>>>
>>> Any ideas where to start with this kind of behaviour?
>>>
>>> Thanks!
>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to grpc-io+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/49933cbd-6402-422c-a505-816d6ab4dbc6n%40googlegroups.com.
