I have been doing some latency measurements with different modes of gRPC.
My application is time critical, and I need requests to complete with
sub-millisecond latency. In my test runs, the average latency across a
number of calls is around 300 us, but a number of requests take around
10 ms to complete, which is not acceptable.
I have been trying to find a way to optimize for latency, and it seems to
me the source of this jitter is the batching that gRPC does internally.
I found that in streaming mode you can pass WriteOptions().set_write_through()
to the Write call so that the message is sent immediately. But it didn't
really help; I can still see that messages are sent in batches.
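For reference, this is roughly how I am issuing the writes (a sketch only; `Sample` and the `writer` type are placeholders, not my real proto/service names):

```cpp
#include <grpcpp/grpcpp.h>

// Sketch of my streaming write path. Sample is a placeholder for my
// actual protobuf message type.
void SendSample(grpc::ClientWriter<Sample>* writer, const Sample& msg) {
  // set_write_through() asks gRPC to bypass its internal write
  // buffering and hand this message to the transport immediately.
  grpc::WriteOptions opts;
  opts.set_write_through();
  writer->Write(msg, opts);
}
```
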
Is set_write_through() the right option to use, or is there a better way
to achieve this?
Visit this group at https://groups.google.com/group/grpc-io.