I have used a bidirectional, long-lived streaming RPC in gRPC, over which 
messages are then sent in both directions. In that case, the observed latencies 
were lower than 2 ms (I think I recall they were on the order of a few hundred 
microseconds, but I will need to double-check), over 40 Gbps Ethernet. This was 
using the C++ library with the async implementation.
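
For reference, this is roughly what that pattern looks like. I'm showing the 
synchronous C++ API for brevity (my actual code used the async API), and the 
service, message, and server names below are just placeholders, not from a 
real schema:

  #include <chrono>
  #include <iostream>
  #include <memory>

  #include <grpcpp/grpcpp.h>
  #include "echo.grpc.pb.h"  // generated from a placeholder echo.proto (package example):
                             //   service Echo { rpc Exchange (stream Msg) returns (stream Msg); }
                             //   message Msg { bytes payload = 1; }

  int main() {
    // Channel and stream are created once and kept open for the lifetime of
    // the client, so connection and RPC setup are not part of the
    // per-message latency.
    auto channel = grpc::CreateChannel("server:50051",  // placeholder address
                                       grpc::InsecureChannelCredentials());
    auto stub = example::Echo::NewStub(channel);

    grpc::ClientContext ctx;
    std::unique_ptr<grpc::ClientReaderWriter<example::Msg, example::Msg>>
        stream = stub->Exchange(&ctx);

    example::Msg request, reply;
    request.set_payload("ping");

    // Each exchange over the already-established stream only pays for
    // serialization plus the network round trip.
    auto start = std::chrono::steady_clock::now();
    stream->Write(request);
    stream->Read(&reply);
    auto elapsed = std::chrono::steady_clock::now() - start;
    std::cout << "round trip: "
              << std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count()
              << " us\n";

    stream->WritesDone();
    return stream->Finish().ok() ? 0 : 1;
  }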

This approach moves the connection and RPC setup cost out of the per-message 
latency. If you tested with standard single request/response RPCs, that might 
explain the higher latency you are seeing. Also, my messages were rather 
simple, and I expect their protobuf serialization to be quite fast. Larger 
messages will incur some extra latency due to the more complex serialization. 
Maybe using FlatBuffers as the serialization layer could help you out there, 
but I don't have any experience with it.

Koen
