Is the server doing anything? One reason streaming typically
outperforms unary is that you can begin processing as soon as you
receive the first chunk. With a unary RPC, by contrast, your handler
cannot be called until the entire request has been received and
unmarshalled.
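For concreteness, the two RPC shapes being compared would look
something like this in the service definition (the service and message
names here are illustrative, not taken from the original post):

```proto
// Hypothetical service contrasting the two approaches.
service Uploader {
  // Case 1: the entire 100 MB payload in a single request message.
  rpc UploadUnary(UploadRequest) returns (UploadResponse);

  // Case 2: the payload broken into 1 MB chunks sent over a
  // client-streaming RPC; the server replies once after the last chunk.
  rpc UploadStream(stream UploadChunk) returns (UploadResponse);
}
```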

If this is a load test, where you are sending significant load at the
server and measuring the difference, then the memory access pattern of
streaming may be friendlier to your allocator/garbage collector, since
you are allocating smaller, shorter-lived chunks of memory. (And there
is of course the obvious memory-use advantage that you don't need to
buffer the entire 100 MB when you stream.)
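To illustrate, the client-side chunking that produces those smaller
allocations can be as simple as slicing the payload before writing
each piece to the stream. A minimal sketch (SplitIntoChunks is a
hypothetical helper, not part of the gRPC API; 1 MiB mirrors the chunk
size in the question, but the best size is workload-dependent):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Split a payload into fixed-size chunks, as a client-streaming sender
// might do before calling Write() once per chunk. chunk_size must be
// greater than zero; the final chunk may be shorter than chunk_size.
std::vector<std::string> SplitIntoChunks(const std::string& payload,
                                         std::size_t chunk_size) {
  std::vector<std::string> chunks;
  for (std::size_t off = 0; off < payload.size(); off += chunk_size) {
    chunks.push_back(payload.substr(off, chunk_size));
  }
  return chunks;
}

// Example: SplitIntoChunks(payload, 1 << 20) yields 1 MiB pieces.
```

Each chunk would then be copied into the request message and written
to the stream, so the client never holds more than one chunk's worth
of serialized data at a time.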

If this is a no-op server, I would not expect much difference in
performance -- in fact, streaming may be at a slight disadvantage due
to the per-message envelope and less effective compression (if you are
using compression). Depending on the runtime implementation, there
could be an advantage just from pipelining: your handler thread may be
unmarshalling one message in parallel with a framework thread handling
I/O and decoding the wire protocol, whereas with a unary call it's all
handled sequentially.

----

Josh Humphries

FullStory <https://www.fullstory.com/>  |  Atlanta, GA

Software Engineer

j...@fullstory.com


On Thu, May 21, 2020 at 6:17 PM <kevinya...@gmail.com> wrote:

> Hey All,
>
> I have been testing and benchmarking my application with gRPC; I'm
> using gRPC C++. I have noticed a performance difference between the
> following cases:
>
> 1. sending a large payload (100 MB+) with a single unary RPC
> 2. breaking the payload into 1 MB pieces and sending them as messages
> over a client-streaming RPC.
>
> In both cases, the server processes the data only after receiving all
> of it and then sends a response. I have found that case 2 has lower
> latency than case 1.
>
> I don't quite understand why breaking the larger message into smaller
> pieces outperforms the unary call in this case. Wondering if anyone
> has any insight into this.
>
> I have searched online and found a related github issue regarding optimal
> message size for streaming large payload:
> https://github.com/grpc/grpc.github.io/issues/371
>
> Would like to hear any ideas or suggestions.
>
> Thx.
>
> Best,
> Kevin
>
> --
> You received this message because you are subscribed to the Google Groups "
> grpc.io" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to grpc-io+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/grpc-io/26219adc-254e-4dc2-82a0-2b7f9513d41a%40googlegroups.com
> <https://groups.google.com/d/msgid/grpc-io/26219adc-254e-4dc2-82a0-2b7f9513d41a%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
