But I will also save you some time - it is a fraction of the time spent in IO - 
so don't even bother measuring it. gRPC is simple buffer translation at its 
heart - trivially simple.

MAYBE if you had a very complex protocol message you could get the CPU time in 
those areas to register with any significance compared to the IO time, but I 
doubt it.

By IO time, I mean even on a local machine with no “physical network”.

Any CPU time used will be dominated by malloc/free - so a messaging system that 
does no dynamic memory allocation will probably outperform gRPC - but even that 
will still be dominated by the IO.

This is based on my testing of gRPC in Go.



> On Jan 14, 2019, at 6:11 PM, robert engels <[email protected]> wrote:
> 
> If you use the “perf report per thread” you should have all the information 
> you need, unless you are using a single threaded test.
> 
> Stating “convoluted” doesn’t really help - maybe an example of what you mean?
> 
>> On Jan 14, 2019, at 5:59 PM, Kostis Kaffes <[email protected]> wrote:
>> 
>> Hi folks,
>> 
>> As part of a research project, I am trying to benchmark a C++ gRPC 
>> application. More specifically, I want to find out how much time is spent in 
>> each layer of the stack as it is described here 
>> <https://grpc.io/blog/grpc-stacks>. I tried using perf but the output is too 
>> convoluted. Any idea on tools I could use or existing results on this type 
>> of benchmarking?
>> 
>> Thanks!
>> Kostis
>> 
>> 
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> To post to this group, send email to [email protected].
>> Visit this group at https://groups.google.com/group/grpc-io.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/grpc-io/26259f10-a18c-45c1-a247-5356424bd096%40googlegroups.com.
>> For more options, visit https://groups.google.com/d/optout.

