Any update on this? Does anybody know how to optimize for latency?

On Tuesday, March 13, 2018 at 10:33:12 PM UTC+1, Mass wrote:
>
>
> I have been doing some latency measurements with different modes of gRPC. 
> The application I have is time critical, and I need to ensure that 
> requests complete with sub-millisecond latency. In my test runs, I 
> noticed that the average latency I get across a number of calls is around 
> 300 µs, but there are a number of requests that take around 10 ms, 
> which is not acceptable. 
> I have been trying to find a way to optimize for latency, and it seems to 
> me the source of this jitter is the batching that gRPC does. I found that 
> in streaming mode you can pass WriteOptions().set_write_through() when 
> making the write call in order to send the packet immediately. But it 
> didn't really help; I can still see that packets are sent in batches.
>
> Is set_write_through() the right option to use, or is there a better 
> way to achieve this?
>  
>
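
For anyone following along, here is a minimal sketch of how that option is
passed on a C++ stream. The helper name and template parameters are
illustrative only, not part of the gRPC API; set_write_through() itself is
the real WriteOptions call mentioned above.

#include <grpcpp/grpcpp.h>

// Writes one message on an already-open gRPC stream (e.g. a
// grpc::ClientWriter or grpc::ClientReaderWriter) and asks gRPC to hand
// it to the transport immediately instead of buffering it for batching.
template <typename Stream, typename Message>
bool WriteImmediately(Stream* stream, const Message& msg) {
  grpc::WriteOptions opts;
  opts.set_write_through();  // corresponds to the GRPC_WRITE_THROUGH flag
  return stream->Write(msg, opts);
}

The same Write(msg, options) overload is available on the server-side
stream writers, so the option can be set on either end of the stream.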
