Hi Kostis,

One tool you might find useful is FlameGraph, which can visualize data 
collected with perf (https://github.com/brendangregg/FlameGraph). 

I will describe the in-process transport architecture a bit so you get a 
better idea of which gRPC overheads are included in your measurements. The 
architecture centers on the following ideas (a brief usage sketch follows 
the list):

   - Avoid serialization, framing, and wire-formatting
   - Transfer metadata and messages as slices/slice buffers, unchanged from 
     how they enter the transport (note that while this avoids serializing 
     from slices to HTTP/2 frames, serialization from protos to byte buffers 
     still happens)
   - Avoid polling or other external notification
   - Each side of a stream directly triggers the scheduling of the other 
     side's operation completion tags
   - Maintain the communication and concurrency model of gRPC core
   - No direct invocation of procedures from the opposite side of the stream
      - No direct memory sharing; data is shared only as RPC requests and 
        responses
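
For reference, here is a minimal sketch of how a benchmark can obtain an 
in-process channel in C++. It uses the Server::InProcessChannel API; the 
helper function name and the way the service is passed in are just 
illustrative, not gRPC conventions:

  #include <memory>
  #include <grpcpp/grpcpp.h>

  // Build a server with no listening port and return a channel that reaches
  // it through the in-process transport (no sockets, no HTTP/2 framing).
  std::shared_ptr<grpc::Channel> BuildInProcessChannel(
      grpc::Service* service, std::unique_ptr<grpc::Server>* server) {
    grpc::ServerBuilder builder;
    builder.RegisterService(service);
    *server = builder.BuildAndStart();
    return (*server)->InProcessChannel(grpc::ChannelArguments());
  }

A stub created on this channel still exercises the full call path 
(completion queues, metadata handling, proto-to-ByteBuffer serialization), 
so those overheads remain in your measurements even though socket IO is 
gone.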
   
Some possible performance optimizations for gRPC / the in-process transport: 

   - Optimized implementations of structs for small cases
      - E.g., investigate a more efficient completion queue for a small 
        number of concurrent events
   - Where can we replace locks with atomics, or avoid atomics altogether? 
     (a small sketch follows this list)
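
On the last point, here is a sketch (not actual gRPC code) of the kind of 
change meant: a counter guarded by a mutex versus one using std::atomic with 
relaxed ordering.

  #include <atomic>
  #include <mutex>

  // Before: every increment acquires a mutex.
  struct LockedCounter {
    std::mutex mu;
    long value = 0;
    void Increment() {
      std::lock_guard<std::mutex> lock(mu);
      ++value;
    }
  };

  // After: a single atomic, no lock on the fast path. Relaxed ordering is
  // enough when the counter is not used to synchronize other data.
  struct AtomicCounter {
    std::atomic<long> value{0};
    void Increment() { value.fetch_add(1, std::memory_order_relaxed); }
  };

Whether this is safe depends on what a given lock actually protects; it only 
works when a single word of state is involved, which is why the item above 
is phrased as a question.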
   

   
For tiny messages over the in-process transport, it should be feasible to 
get down to a few microseconds of latency, but that may not be possible with 
moderately sized messages because of the serialization/deserialization cost 
between proto and ByteBuffer.
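
To get a feel for that cost in isolation, something like the following rough 
sketch can time the proto-to-ByteBuffer step. "Message" stands in for your 
own generated protobuf type, and this is not exactly the code path the 
generated stubs use, so treat the numbers as a ballpark only:

  #include <chrono>
  #include <string>
  #include <grpcpp/support/byte_buffer.h>
  #include <grpcpp/support/slice.h>

  // Returns the average microseconds per proto -> grpc::ByteBuffer conversion.
  template <typename Message>
  double SerializationMicros(const Message& msg, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
      std::string wire = msg.SerializeAsString();   // proto -> contiguous bytes
      grpc::Slice slice(wire.data(), wire.size());  // copy bytes into a slice
      grpc::ByteBuffer buffer(&slice, 1);           // wrap the slice
      (void)buffer;
    }
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::micro>(end - start).count() /
           iterations;
  }

Running this for your actual request and response messages should show how 
quickly the serialization cost grows with message size relative to a 
few-microsecond latency budget.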

Hope this helps!

On Monday, January 14, 2019 at 4:44:19 PM UTC-8, robert engels wrote:
>
> Lastly, you have a lot of “unknown”. You need to compile without omitting 
> frame pointers, and make sure you have all debug symbols.
>
> On Jan 14, 2019, at 6:42 PM, robert engels <[email protected]> wrote:
>
> I think the tree view rather than the graph would be easier to understand.
>
> On Jan 14, 2019, at 6:34 PM, Kostis Kaffes <[email protected]> wrote:
>
> Thanks! I have tried the per-thread option. Attached you will find a call 
> graph, which shows what I mean by convoluted. There are also some unknowns 
> that do not help the situation.
>
> I am using the in-process transport in order to avoid being dominated by 
> IO. My goal is to see if it is feasible to lower gRPC latency to a few μs 
> and what that might require.
> Hence, even small overheads might matter.
>
> On Monday, January 14, 2019 at 4:16:21 PM UTC-8, robert engels wrote:
>>
>> But I will also save you some time - it is a fraction of the time spent 
>> in IO - so don’t even bother measuring it. gRPC is simple buffer 
>> translation at its heart - trivially simple.
>>
>> MAYBE if you had a super complex protocol message you could get it to 
>> register CPU time in those areas with any significance compared to the 
>> IO time, but doubtful.
>>
>> By IO time, I mean even on a local machine with no “physical network”.
>>
>> Any CPU time used will be dominated by malloc/free - so a messaging system 
>> with no dynamic memory allocation will probably outperform gRPC - but it 
>> will still be dominated by the IO.
>>
>> This is based on my testing of gRPC in Go.
>>
>>
>>
>> On Jan 14, 2019, at 6:11 PM, robert engels <[email protected]> wrote:
>>
>> If you use the “perf report per thread” option you should have all the 
>> information you need, unless you are using a single-threaded test.
>>
>> Stating “convoluted” doesn’t really help - maybe an example of what you 
>> mean?
>>
>> On Jan 14, 2019, at 5:59 PM, Kostis Kaffes <[email protected]> wrote:
>>
>> Hi folks,
>>
>> As part of a research project, I am trying to benchmark a C++ gRPC 
>> application. More specifically, I want to find out how much time is spent 
>> in each layer of the stack as described here 
>> <https://grpc.io/blog/grpc-stacks>. I tried using perf, but the output is 
>> too convoluted. Any ideas on tools I could use, or existing results on 
>> this type of benchmarking?
>>
>> Thanks!
>> Kostis
>>
>>
> <output.png>
