Hey All,

I have been testing and benchmarking my application with gRPC (gRPC C++), 
and I have noticed a performance difference between the following two cases:

1. sending a large payload (100 MB+) in a single unary RPC
2. breaking the payload into 1 MB pieces and sending them as messages 
over a client-streaming RPC.

In both cases, the server processes the data only after receiving all of 
it, and then sends a response. I have found that case 2 has lower latency 
than case 1.

I don't quite understand why, in this case, breaking the large message 
into smaller pieces outperforms the unary call. Wondering if anyone has 
any insight into this.

I have searched online and found a related GitHub issue about the optimal 
message size for streaming large payloads: 
https://github.com/grpc/grpc.github.io/issues/371

Would like to hear any ideas or suggestions.

Thx.

Best,
Kevin

-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/26219adc-254e-4dc2-82a0-2b7f9513d41a%40googlegroups.com.